CN114463752A - Vision-based code spraying positioning method and device - Google Patents
- Publication number
- CN114463752A (application CN202210063581.8A)
- Authority
- CN
- China
- Prior art keywords
- code spraying
- target
- code
- information
- workpiece
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
Abstract
The invention relates to a vision-based code-spraying positioning method and device. On one hand, a real-time image of the code-spraying site is collected to reflect the current state of the conveying device in real time; as a workpiece to be code-sprayed is conveyed, a target bounding box is extracted from the real-time image by a neural network algorithm to locate the workpiece's specific position. On the other hand, target information is determined from the target bounding box; the actual code-spraying information is then determined from the target position and the target information, and the code-spraying process is completed. In short, the method is a vision-based follow-up code-spraying method: it locates the actual code-spraying position according to the actual conditions at the site, is not limited by the placement position, placement posture, or type of the workpiece, overcomes the low efficiency, single code type, and fixed code size of manual and conventional automatic methods, improves code-spraying efficiency and the degree of automation, and offers good universality and generalization.
Description
Technical Field
The invention relates to the field of automatic control, and in particular to vision-based code-spraying positioning technology.
Background
Under the current trend of industrial automation and intelligence, spraying codes on industrial parts is an indispensable link in workpiece identification, classification, and tracking. Common code-spraying modes can be divided into manual and automatic. Manual code spraying can cope with workpieces of different types and shapes, but its efficiency is low, the labor intensity on workers is high, and long-term exposure to the spraying environment poses considerable safety risks. Compared with the manual mode, automatic code spraying greatly improves production efficiency and shortens the product production cycle, which benefits the development of enterprises and the industry; it also greatly reduces the number of operators needed and their contact with the spraying environment, avoiding harm to personnel health. However, current mainstream automatic code-spraying methods are mainly applicable to situations with a single workpiece type, a specific appearance, a fixed code-spraying area, and a single code type; they cannot handle the multiple workpiece types and large appearance differences found in industrial production, and thus have poor universality and generalization.
Therefore, how to provide a code-spraying technique that is not limited by the placement position, the number of workpiece types, shape differences, and the like is a technical problem urgently awaiting a solution in current code-spraying practice.
Disclosure of Invention
To solve the above technical problem, the invention provides a vision-based code-spraying positioning method, comprising the following steps:
S1: collecting a real-time image of the code-spraying site;
S2: extracting a target bounding box from the real-time image using a neural network algorithm, so as to locate the target position;
S3: determining target information according to the target bounding box;
S4: determining actual code-spraying information according to the target position and the target information.
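The four steps above can be sketched as a minimal pipeline. This is an illustrative outline only: the helper functions, their names, and the returned values are hypothetical placeholders, not the patent's implementation.

```python
import numpy as np

def locate_target(image):
    """Placeholder for the neural-network detector of step S2.
    Returns a bounding box (x1, y1, x2, y2) for the workpiece."""
    return (40, 30, 120, 90)

def identify_target(image, box):
    """Placeholder for the template matching of step S3."""
    return {"category": "bracket", "angle_deg": 0.0}

def plan_marking(box, info):
    """Step S4 sketch: derive the marking point from position + info.
    Here simply the box centre; the patent uses the largest inscribed region."""
    x1, y1, x2, y2 = box
    return {"point": ((x1 + x2) / 2, (y1 + y2) / 2), "angle_deg": info["angle_deg"]}

def position_pipeline(image):
    box = locate_target(image)          # S2: locate target position
    info = identify_target(image, box)  # S3: determine target information
    return plan_marking(box, info)      # S4: actual code-spraying information

result = position_pipeline(np.zeros((200, 200), dtype=np.uint8))
```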
Further, step S2 includes:
S21: acquiring a training sample set;
S22: constructing a neural network model and feeding the training sample set into it to obtain a trained neural network model;
S23: inputting the real-time image into the trained neural network model and extracting the target bounding box from the real-time image to locate the target position.
Further, step S21 includes:
S211: acquiring a background image of the code-spraying site;
S212: acquiring a template image of the workpiece to be code-sprayed;
S213: superimposing the template image on the background image according to different placement information to obtain the training sample set.
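Steps S211-S213 can be sketched as follows. A minimal, assumption-laden version: templates are pasted at random positions only (the patent's placement information also covers angles and other states, omitted here), and grayscale arrays stand in for real site images.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_sample(background, template):
    """Overlay a workpiece template onto a site background at a random
    position (a simplified form of step S213; rotation/scale omitted).
    Returns the composite image and its ground-truth box for training."""
    sample = background.copy()
    th, tw = template.shape
    bh, bw = background.shape
    y = rng.integers(0, bh - th + 1)           # random placement position
    x = rng.integers(0, bw - tw + 1)
    sample[y:y+th, x:x+tw] = template          # paste the template patch
    label = (x, y, x + tw, y + th)             # bounding-box label
    return sample, label

bg = np.zeros((100, 100), dtype=np.uint8)      # stand-in background image
tpl = np.full((20, 20), 255, dtype=np.uint8)   # stand-in workpiece template
img, box = synthesize_sample(bg, tpl)
```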
Further, in step S22, the neural network model includes an input module, a feature extraction module, a feature fusion module, and a prediction module.
Further, in step S3, the target information includes a target category; step S3 includes:
S31: extracting edge features of the workpiece to be code-sprayed according to the target bounding box;
S32: measuring similarity, according to the extracted edge features, as the cosine between the gradient direction at each edge point of the workpiece and the gradient direction at the corresponding edge point of a template workpiece;
S33: matching the workpiece to the template workpiece with the highest similarity score, thereby determining the target category.
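The similarity measure of S32 can be written in a few lines. This is a sketch of the scoring function only, assuming the gradient directions at corresponding edge points (in radians) have already been extracted; the extraction itself is not shown.

```python
import numpy as np

def edge_similarity(target_dirs, template_dirs):
    """Similarity score as in step S32: mean cosine of the angle between
    corresponding edge-point gradient directions (radians). The score is
    1.0 for a perfect match and is insensitive to global illumination
    changes, since only gradient directions are compared."""
    target_dirs = np.asarray(target_dirs, dtype=float)
    template_dirs = np.asarray(template_dirs, dtype=float)
    return float(np.mean(np.cos(target_dirs - template_dirs)))

# identical gradient directions give a perfect score
score = edge_similarity([0.1, 1.2, 2.0], [0.1, 1.2, 2.0])
```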
Further, in step S3, the target information further includes target features; step S3 further includes:
S34: acquiring matching information of the template workpiece;
S35: determining the target features according to the matching information and the target bounding box.
Further, step S4 includes:
S41: selecting the largest code-spraying region on the workpiece as the workpiece's own code-spraying information according to the target information;
S42: transforming the workpiece's own code-spraying information according to the target position to obtain the actual code-spraying information.
Further, step S41 includes:
S411: acquiring a binary image of the workpiece according to the target information;
S412: defining, in the binary image, solid areas as target pixels and hollow areas as background pixels;
S413: calculating the distance from each target pixel to its nearest background pixel;
S414: selecting the region around the target pixel with the maximum distance as the largest code-spraying region.
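Steps S411-S414 amount to a distance transform followed by an argmax. The brute-force sketch below works on a small binary mask; a production system would use an efficient distance-transform routine instead.

```python
import numpy as np

def largest_marking_pixel(mask):
    """S411-S414 in miniature: for every target (foreground) pixel of a
    binary mask, compute the Euclidean distance to the nearest background
    pixel, and return the pixel with the maximum distance together with
    that distance (the radius of the largest inscribed marking region)."""
    fg = np.argwhere(mask == 1)                # target pixels (S412)
    bg = np.argwhere(mask == 0)                # background pixels (S412)
    # pairwise distances, then nearest background per target pixel (S413)
    dists = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    best = fg[dists.argmax()]                  # deepest interior pixel (S414)
    return tuple(best), float(dists.max())

mask = np.zeros((7, 7), dtype=np.uint8)
mask[1:6, 1:6] = 1                             # a 5x5 solid workpiece
center, radius = largest_marking_pixel(mask)
```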
Further, the method further comprises:
S5: verifying whether the actual code-spraying information meets preset requirements; if so, completing the code-spraying process according to the actual code-spraying information; if not, adjusting the actual code-spraying information and then completing the code-spraying process.
In another aspect, the present invention further provides a vision-based code-spraying positioning device for performing any of the above methods, including:
an image acquisition module for collecting real-time images of the code-spraying site;
a target position locating module, connected to the image acquisition module, for extracting a target bounding box from the real-time image using a neural network algorithm so as to locate the target position;
a target information determining module, connected to the target position locating module, for determining target information according to the target bounding box;
and an actual code-spraying information determining module, connected to the target position locating module and the target information determining module, for determining actual code-spraying information according to the target position and the target information.
On one hand, the vision-based code-spraying positioning method and device can collect real-time images of the code-spraying site to reflect the current state of the conveying device in real time; as a workpiece to be code-sprayed is conveyed, a target bounding box is extracted from the real-time image by a neural network algorithm, locating the workpiece's specific position. On the other hand, target information can be determined from the target bounding box; the actual code-spraying information is then determined from the target position and the target information, and the code-spraying process is completed. In short, this is a vision-based follow-up code-spraying method: it locates the actual code-spraying position according to the actual site conditions, is not limited by the placement position, placement posture, or type of the workpiece, overcomes the low efficiency, single code type, and fixed code size of manual and conventional automatic methods, improves code-spraying efficiency and the degree of automation, and offers good universality and generalization.
Drawings
FIG. 1 is a flowchart of an embodiment of the vision-based code-spraying positioning method of the present invention;
FIG. 2 is a schematic diagram of a code-spraying system of the present invention;
FIG. 3 is a flowchart of an embodiment of step S1 of the vision-based code-spraying positioning method of the present invention;
FIG. 4 is a flowchart of an embodiment of step S2 of the vision-based code-spraying positioning method of the present invention;
FIG. 5 is a flowchart of an embodiment of step S21 of the vision-based code-spraying positioning method of the present invention;
FIG. 6 is a block diagram of the neural network model of the vision-based code-spraying positioning method of the present invention;
FIG. 7 is a flowchart of one embodiment of step S3 of the vision-based code-spraying positioning method of the present invention;
FIG. 8 is a flowchart of another embodiment of step S3 of the vision-based code-spraying positioning method of the present invention;
FIG. 9 is a flowchart of an embodiment of step S4 of the vision-based code-spraying positioning method of the present invention;
FIG. 10 is a flowchart of an embodiment of step S41 of the vision-based code-spraying positioning method of the present invention;
FIG. 11 is a flowchart of one embodiment of step S5 of the vision-based code-spraying positioning method of the present invention;
FIG. 12 is a flowchart of another embodiment of step S5 of the vision-based code-spraying positioning method of the present invention;
FIG. 13 is a block diagram of an embodiment of the vision-based code-spraying positioning device of the present invention.
Detailed Description
As shown in FIG. 1, the present invention provides a vision-based code-spraying positioning method, which includes:
S1: collecting a real-time image of the code-spraying site. As shown in FIG. 2, a code-spraying system includes: a conveying device, a truss, a code-spraying device, an image acquisition device, and a control device. Specifically, the conveying device may optionally, but not necessarily, include a driving assembly, a transmission assembly, and a conveyor belt: the driving assembly (e.g., a motor) drives the transmission assembly (e.g., a belt pulley), which in turn moves the conveyor belt, so that a workpiece placed on the belt is carried to the code-spraying point to complete the code-spraying process. The truss may optionally be arranged above the conveying device and include front-back, left-right, and up-down moving assemblies plus a driving assembly, so as to realize three-dimensional movement. The code-spraying device is mounted at the end of the truss and, driven by these moving assemblies, can reach any position above the conveying device to complete the code-spraying process. The image acquisition device may be, but is not limited to, a camera (optionally a 2D camera) or another device with an image acquisition function, and collects real-time images of the code-spraying site (the area around the conveyor). The control device may be, but is not limited to, a single-chip microcomputer or a control terminal; by analyzing the real-time images collected by the image acquisition device, it locates the workpiece and the code-spraying information, and controls the truss to drive the code-spraying device to the code-spraying point to complete the code-spraying process.
S2: extracting a target bounding box from the real-time image using a neural network algorithm so as to locate the target position. Specifically, the real-time image may optionally be input into a trained neural network model, which crops out the bounding box of the area where the workpiece to be code-sprayed is located, thereby locating the target position (the position of the workpiece). Depending on the site conditions, one or more workpieces may appear on the conveyor belt at the same time, so there may be one or more target bounding boxes; multiple boxes may belong to the same or to different workpiece types (the workpieces on the belt may be identical or different).
S3: determining target information according to the target bounding box. Specifically, for each bounding box extracted by the neural network model, the corresponding target information is determined: which type of workpiece the box contains, the workpiece's own code-spraying information (for example, where the code-spraying position lies on the workpiece and at what angle the code should be set), and so on.
S4: determining actual code-spraying information according to the target position and the target information. Specifically, the actual code-spraying information (the code-spraying point, angle, size, and the like that the truss must drive the code-spraying device to reach) may be determined from the target position (the position of the workpiece on the conveyor belt) and the target information (the workpiece type and its own code-spraying information).
This embodiment thus provides a vision-based code-spraying positioning method. On one hand, a real-time image of the code-spraying site is collected to reflect the current state of the conveying device (conveyor belt) in real time; as a workpiece is conveyed, a neural network algorithm extracts its bounding box from the real-time image to locate its specific position. On the other hand, target information (workpiece type, code-spraying position, angle, etc.) is determined from the bounding box; the actual code-spraying information (the point the truss must drive the code-spraying device to) is then determined from the target position and the target information, and the code-spraying process is completed. In short, this is a vision-based follow-up code-spraying method: it locates the actual code-spraying position according to the actual site conditions, is not limited by workpiece placement position, posture, or type, overcomes the low efficiency, single code type, and fixed code size of manual and conventional automatic methods, improves efficiency and automation, and offers good universality and generalization.
Specifically, as shown in FIG. 3, step S1 of acquiring the real-time image of the code-spraying site may optionally, but not necessarily, include S11: collecting a real-time image of the code-spraying site; S12: correcting the real-time image; and S13: enhancing the real-time image.
More specifically, S12 corrects the real-time image because, owing to the inherent structure of the image acquisition device (camera and lens), images taken directly from it deviate from ideal imaging by varying degrees of optical distortion, which directly affects subsequent tasks such as target detection and matching (locating the target position and determining the target information). The distortion parameters of the image acquisition device are computed in advance with a high-precision calibration method, the raw image collected by the camera is corrected before subsequent tasks, and the corrected image is used as the real-time image.
More specifically, S13 enhances the real-time image because the background of the code-spraying scene is complex, the environment may be over-bright or over-dark, and the directly acquired image is of poor quality. Image enhancement (optionally, but not necessarily, contrast enhancement) is therefore applied to bring out image detail so that the workpiece stands out more clearly in the image, which benefits the subsequent detection and matching tasks.
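A minimal contrast-enhancement step in the spirit of S13 can be sketched as a linear intensity stretch. The patent only requires some form of contrast enhancement and does not fix the method, so the choice of a min-max stretch here is an illustrative assumption.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch the intensity range of a grayscale image to the
    full 0-255 span, so a dim workpiece stands out from the conveyor
    background (one simple form of the contrast enhancement in S13)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).round().astype(np.uint8)

dim = np.array([[100, 110], [120, 130]], dtype=np.uint8)  # low-contrast patch
enhanced = stretch_contrast(dim)
```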
This embodiment gives a preferred form of step S1: the acquired real-time image is further processed by correction and enhancement steps to improve the accuracy of subsequent model detection, matching, identification, and positioning.
Specifically, as shown in FIG. 4, step S2 may optionally, but not necessarily, include:
S21: acquiring a training sample set;
S22: constructing a neural network model and feeding the training sample set into it to obtain a trained neural network model;
S23: inputting the real-time image into the trained neural network model and extracting the target bounding box from the real-time image to locate the target position.
This embodiment gives a specific form of step S2: the neural network model is trained on the sample set to optimize the parameters of each layer, yielding a trained model; the real-time image is then fed into this model and the target bounding box is extracted, so that the target position can be located from it.
Specifically, as shown in FIG. 5, step S21 may optionally, but not necessarily, include: S211: acquiring a background image of the code-spraying site; S212: acquiring a template image of the workpiece to be code-sprayed; S213: superimposing the template image on the background image according to different placement information to obtain the training sample set. More specifically, the background image in S211 is an image of the conveyor belt itself with no workpiece on it, optionally captured in real time by the image acquisition device before code spraying. In step S212, the template images of the workpieces (possibly hundreds of kinds) are likewise captured in real time before code spraying. In step S213, the placement information may include, but is not limited to, different placement positions, placement angles, and other states of the workpieces on the conveyor, so as to generate effectively unlimited images of workpieces on the belt.
This embodiment details how the training sample set is obtained in step S21 (S211-S213). First, by acquiring a background image of the actual code-spraying site, the scratches, stains, and speckles of the real conveying device (conveyor belt) are fully taken into account; this avoids the missed and false detections that conventional vision processing suffers when the background is complex and the workpiece contrasts poorly with it, and so improves the accuracy of subsequent detection and positioning on real-time images. Second, template images of the workpieces capture the shape, size, and true texture of every workpiece type; the template database can be expanded according to the workpiece types on site and adapted to the types currently being code-sprayed, making the templates faithful to the real workpieces. Finally, superimposing the templates on the background under different placement information covers the effectively unlimited ways a workpiece may lie on the conveyor, in any position and any posture. Compared with photographing training samples in the actual scene, this approach supplies more, and more realistic, training samples, avoiding the dilemma that a large sample set cannot be collected in a short time and the neural network model therefore cannot be trained; and with sufficient samples, the prediction accuracy of the model is further improved.
Specifically, as shown in FIG. 6, in step S22 the neural network model, optionally but not necessarily a network model for target detection, includes: an input module, a feature extraction module, a feature fusion module, and a prediction module. Training samples are fed through the input module in sequence and the parameters of each stage are optimized to obtain the trained neural network model.
Specifically, in step S23, the real-time image may be fed in through the input module, multi-stage features are extracted and fused by the feature extraction and feature fusion modules, and the target bounding box is cropped from the real-time image, its position in the image being obtained by the prediction module. More specifically, the center point, or optionally a chosen corner, of the target box may be used as the output data, returning the target position (the position of the workpiece in the real-time image).
This embodiment gives a specific form of step S2 using a one-stage target detection network, which offers fast computation, high detection accuracy, and end-to-end box prediction. A deep-learning-based detection network for the workpiece (target) meets the strong real-time requirement of detecting workpieces in live images in a practical code-spraying scenario; even under low contrast, the model can accurately and completely frame the workpiece and output its target position.
More specifically, as shown in FIG. 7, in step S3 the target information includes a target category; step S3 optionally, but not necessarily, includes:
S31: extracting edge features of the workpiece to be code-sprayed according to the target bounding box;
S32: measuring similarity, according to the extracted edge features, as the cosine between the gradient direction at each edge point of the workpiece and the gradient direction at the corresponding edge point of a template workpiece;
S33: matching the workpiece to the template workpiece with the highest similarity score, thereby determining the target category.
More specifically, to account for the fact that the workpiece (target bounding box) in the real-time image is generally not laid squarely on the conveyor belt, and for the pixel scaling of the image acquisition device, the target information in step S3 may optionally also include target features such as the target angle and target size. As shown in FIG. 8, step S3 may optionally, but not necessarily, include:
S34: acquiring matching information of the template workpiece, such as the matching center, matching angle, and matching scale of the matched template.
S35: determining the target features according to the matching information and the target bounding box. Specifically, an affine transformation based on the matching center, angle, and scale maps the matched template workpiece onto the placement pose of the workpiece in the real-time image (its placement center, placement angle, and its size as scaled under the camera pixels).
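The affine step of S35 can be sketched as a rotation and scale about the match center followed by a translation to the target center. The parameter names are illustrative; the patent only states that an affine transformation maps the matched template onto the workpiece pose (note that in image coordinates, with y pointing down, the visual sense of rotation is mirrored).

```python
import numpy as np

def pose_transform(points, match_center, match_angle_deg, match_scale, target_center):
    """Map template edge points onto the workpiece pose in the live image:
    rotate and scale about the match centre, then translate to the target
    centre (an illustrative form of the affine transformation in S35)."""
    theta = np.deg2rad(match_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = (np.asarray(points, dtype=float) - match_center) @ rot.T * match_scale
    return pts + target_center

# a point one unit right of the centre, rotated 90 degrees about it,
# lands one unit above the new centre
out = pose_transform([[1.0, 0.0]], match_center=(0.0, 0.0),
                     match_angle_deg=90.0, match_scale=1.0,
                     target_center=(10.0, 10.0))
```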
This embodiment shows how the target information (target category and target features) is identified from the bounding box in step S3, namely by template matching. Specifically, an edge-matching algorithm based on geometric features may be adopted: the gradient direction of image feature points serves as the matching feature, and similarity is measured by the cosine between the gradient directions of template edge points and of target (workpiece) edge points; this is insensitive to illumination change and robust to interference. Furthermore, an affine transformation derived from the matching information maps the matched template onto the workpiece's placement pose in the real-time image, further pinning down the type and pose of the workpiece on the conveyor belt.
More specifically, after the workpieces to be code-sprayed on the conveyor belt are detected and identified, the most critical step is finding a suitable code-spraying point on each workpiece. Specifically, as shown in fig. 9, step S4 may optionally but not exclusively include: S41: selecting the maximum code spraying area on the workpiece to be code-sprayed as the workpiece's own code spraying information according to the target information. Specifically, the maximum code spraying area is optionally but not exclusively determined from the target type, target size, and the like, and taken as the workpiece's own optimal code spraying position; optionally but not exclusively, the center coordinate of the maximum code spraying area is used as the code spraying center of the printer.
S42: transforming the workpiece's own code spraying information according to the target position to obtain the actual code spraying information. Specifically, the code spraying information expressed in the workpiece's own coordinate system is, optionally but not exclusively, converted according to the target position (the position of the workpiece on the conveyor belt) into the actual code spraying information in the coordinate system of the conveyor belt (or of the truss or code spraying device), such as the code spraying coordinates and code spraying angle, so that the control device can drive the truss to move the code spraying device to the optimal code spraying point and complete the code spraying process.
In this embodiment, a specific way of positioning the actual code spraying information in step S4 is given: based on the criterion that the code spraying region must lie completely on the workpiece to be code-sprayed, the largest code spraying region on the workpiece is selected as the workpiece's own code spraying information (the optimal code spraying position), and this information, expressed in the workpiece's own coordinate system, is then converted according to the target position (the position of the workpiece on the conveyor belt) into the actual code spraying information in the coordinate system of the conveyor belt (or of the truss or code spraying device).
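A minimal sketch of the S42 conversion, assuming the target position is given as a placing center, placing angle, and pixel scale on the conveyor belt. The pose dictionary keys and function name are illustrative assumptions; the patent does not fix a data layout.

```python
import math

def to_conveyor(spray_xy, pose):
    """Convert a code-spraying point expressed in the workpiece's own
    coordinate system into conveyor-belt coordinates, using the target
    position located in steps S2/S3: scale, rotate by the placing
    angle, then translate to the placing centre.  Returns the point
    and the code-spraying angle (here simply the workpiece angle)."""
    px, py = spray_xy
    a = math.radians(pose["angle_deg"])
    s = pose["scale"]
    cx, cy = pose["center"]
    x = cx + s * (px * math.cos(a) - py * math.sin(a))
    y = cy + s * (px * math.sin(a) + py * math.cos(a))
    return (x, y), pose["angle_deg"]
```

The control device would then drive the truss to the returned coordinates and rotate the print head by the returned angle.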
More specifically, as shown in fig. 10, step S41 optionally but not exclusively includes:
S411: acquiring a binary image of the workpiece to be code-sprayed according to the target information. Specifically, the target type (the type of the workpiece to be code-sprayed) is optionally but not exclusively determined from the target information, and the binary image of that workpiece is retrieved from the database;
S412: defining, in the binary image, the solid area as target pixels and the hollow area as background pixels;
S413: calculating the distance from each target pixel to its nearest background pixel;
S414: selecting the area around the target pixel with the maximum distance to its nearest background pixel as the maximum code spraying area.
In this embodiment, a specific example of how to select the maximum code spraying region on the workpiece in step S41 is given: the maximum distance between the target pixels (the solid region of the workpiece) and the background pixels (the hollow region) is computed on the binary image, and this determines the maximum code spraying region. The calculation is simple and convenient.
Specifically, and optionally but not exclusively, assume the workpiece template map is a binary image of size m × n, where any pixel satisfies I(x, y) ∈ {0, 1}, x (1 ≤ x ≤ m) is the image abscissa, y (1 ≤ y ≤ n) is the ordinate, and I(x, y) is the pixel value at that point. The target pixels are the non-zero points, forming the set Ob = {(x, y) | I(x, y) = 1}; the background pixels are the zero points, forming the set Bg = {(x, y) | I(x, y) = 0}. The distance transform of each pixel (x, y) in the binary image I is computed by formula (1):

D(x, y) = min { √((x − x′)² + (y − y′)²) : (x′, y′) ∈ Bg }    (1)
The distance from each target pixel in the binary image of the workpiece to its nearest background pixel is thus calculated (that is, the distance from the point to the workpiece edge, i.e. from a solid point to its nearest hollow point), and the area around the target pixel with the largest such distance is selected as the maximum code spraying area (the largest continuous area), i.e. the optimal code spraying area. More specifically, and optionally but not exclusively, the distance transform yields a gray image of the same size as the real-time image in which gray values appear only in the foreground region, and pixels farther from the background edge have larger gray values. Finally, the point with the maximum gray value in the distance-transformed image, which corresponds to the largest continuous area on the workpiece, is selected as the optimal code spraying point.
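Steps S411 to S414 can be sketched with a brute-force distance transform on a small binary map. This O(n²) version is for clarity only; a production system would use a linear-time exact or chamfer distance transform (for example OpenCV's cv2.distanceTransform), and the function name here is an assumption.

```python
import math

def best_spray_point(binary):
    """Brute-force distance transform per steps S411-S414: for every
    target pixel (value 1, the solid workpiece region) compute the
    Euclidean distance to the nearest background pixel (value 0, the
    hollow region), and return the pixel with the largest distance,
    i.e. the centre of the maximum code-spraying area."""
    h, w = len(binary), len(binary[0])
    background = [(x, y) for y in range(h) for x in range(w)
                  if binary[y][x] == 0]
    best_xy, best_d = None, -1.0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1:
                d = min(math.hypot(x - bx, y - by) for bx, by in background)
                if d > best_d:
                    best_xy, best_d = (x, y), d
    return best_xy, best_d
```

On a 5 × 5 map whose border is background and whose interior is workpiece, the centre pixel wins, matching the intuition that the optimal spray point is the point deepest inside the part.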
More specifically, as shown in fig. 11, the vision-based code spraying positioning method further includes: S5: verifying whether the actual code spraying information meets a preset requirement; if so, completing the code spraying process according to the actual code spraying information; if not, adjusting the actual code spraying information and then completing the code spraying process. Specifically, because workpieces differ in type and placing posture, the actual code spraying information (code spraying position (code spraying point and code spraying area), code spraying angle, code spraying size, and the like) needs further verification.
Specifically, as shown in fig. 12, step S5 may optionally but not exclusively include: S51: taking the center of the actual code spraying information as the center of the code spraying area, calculating whether a code spraying area of the preset code spraying size (within the preset code spraying size range) lies entirely on the workpiece to be code-sprayed; S52: if so, completing the code spraying process according to the actual code spraying information; S53: if not, adjusting the code spraying size by a step length and then completing the code spraying process according to the actual code spraying information and the adjusted code spraying size. The method can thus adaptively adjust the size, type, and the like of the code spraying area according to the type and placing posture of the workpiece, further avoiding problems such as wrong spraying, missed spraying, and incomplete codes, and ensuring accurate code spraying.
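The S51 to S53 check-and-shrink loop can be sketched as follows. The square code region, the function name, and the default step/minimum size are illustrative assumptions; the patent leaves the region shape and adjustment policy open.

```python
def verify_spray_size(binary, center, size, step=1, min_size=2):
    """Steps S51-S53 sketch: check whether a square code region of the
    preset size, centred on the chosen spray point, lies entirely on
    workpiece pixels (value 1).  If it does not, shrink the size by
    `step` and retry, so the sprayed code is never clipped by the
    workpiece edge.  Returns the usable size, or None if nothing fits."""
    cx, cy = center
    h, w = len(binary), len(binary[0])
    while size >= min_size:
        half = size // 2
        fits = all(
            0 <= cy + dy < h and 0 <= cx + dx < w
            and binary[cy + dy][cx + dx] == 1
            for dy in range(-half, half + 1)
            for dx in range(-half, half + 1))
        if fits:
            return size
        size -= step
    return None
```

For a 5 × 5 binary map with a 3 × 3 workpiece interior, a preset size of 5 centred on the middle pixel shrinks down until the 3-pixel square fits.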
On the other hand, as shown in fig. 13, based on the above code spraying positioning method, the present invention further provides a vision-based code spraying positioning device, including:
the image acquisition module 100 is used for acquiring real-time images of code spraying sites;
the target position positioning module 200, connected with the image acquisition module 100 and used for intercepting a target block diagram in the real-time image by adopting a neural network algorithm, so as to position a target position;
the target information determining module 300, connected with the target position positioning module 200 and used for determining target information according to the target block diagram;
and the actual code spraying information determining module 400, connected with the target position positioning module 200 and the target information determining module 300, and used for determining actual code spraying information according to the target position and the target information.
The above device is created based on the above method, and its technical functions and advantages are not repeated here. The various technical features of the above embodiments may be combined arbitrarily; for brevity of description, not all possible combinations are listed, but any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A vision-based code spraying positioning method, characterized by comprising the following steps:
S1: collecting a real-time image of a code spraying site;
S2: intercepting a target block diagram in the real-time image by adopting a neural network algorithm, so as to position a target position;
S3: determining target information according to the target block diagram;
S4: determining actual code spraying information according to the target position and the target information.
2. The code spraying positioning method according to claim 1, wherein step S2 comprises:
S21: acquiring a training sample set;
S22: constructing a neural network model, and inputting the training sample set into the neural network model to obtain a trained neural network model;
S23: inputting the real-time image into the trained neural network model, and intercepting the target block diagram in the real-time image to position the target position.
3. The code spraying positioning method according to claim 2, wherein step S21 comprises:
S211: acquiring a background image of the code spraying site;
S212: acquiring a template image of the workpiece to be code-sprayed;
S213: superposing the template image on the background image according to different placing information to obtain the training sample set.
4. The code spraying positioning method according to claim 2, wherein in step S22 the neural network model comprises: an input module, a feature extraction module, a feature fusion module, and a prediction module.
5. The code spraying positioning method according to claim 1, wherein in step S3 the target information includes a target type, and step S3 comprises:
S31: extracting edge features of the workpiece to be code-sprayed according to the target block diagram;
S32: performing similarity measurement on the cosine values between the gradient directions of the edge points of the workpiece to be code-sprayed and the gradient directions of the edge points of each template workpiece;
S33: matching the workpiece to be code-sprayed with the template workpiece whose edge points give the maximum similarity measurement, thereby determining the target type.
6. The code spraying positioning method according to claim 5, wherein in step S3 the target information further includes target features, and step S3 further comprises:
S34: acquiring matching information of the template workpiece;
S35: determining the target features according to the matching information and the target block diagram.
7. The code spraying positioning method according to claim 1, wherein step S4 comprises:
S41: selecting the maximum code spraying area on the workpiece to be code-sprayed as the workpiece's own code spraying information according to the target information;
S42: transforming the workpiece's own code spraying information according to the target position to obtain the actual code spraying information.
8. The code spraying positioning method according to claim 7, wherein step S41 comprises:
S411: acquiring a binary image of the workpiece to be code-sprayed according to the target information;
S412: defining, in the binary image, the solid area as target pixels and the hollow area as background pixels;
S413: calculating the distance from each target pixel to its nearest background pixel;
S414: selecting the area around the target pixel with the maximum distance as the maximum code spraying area.
9. The code spraying positioning method according to any one of claims 1 to 8, characterized by further comprising:
S5: verifying whether the actual code spraying information meets a preset requirement; if so, completing the code spraying process according to the actual code spraying information; if not, adjusting the actual code spraying information and then completing the code spraying process.
10. A vision-based code spraying positioning device for performing the method of any one of claims 1 to 9, comprising:
an image acquisition module, used for collecting real-time images of the code spraying site;
a target position positioning module, connected with the image acquisition module and used for intercepting a target block diagram in the real-time image by adopting a neural network algorithm, so as to position a target position;
a target information determining module, connected with the target position positioning module and used for determining target information according to the target block diagram;
and an actual code spraying information determining module, connected with the target position positioning module and the target information determining module, and used for determining actual code spraying information according to the target position and the target information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210063581.8A CN114463752A (en) | 2022-01-20 | 2022-01-20 | Vision-based code spraying positioning method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463752A true CN114463752A (en) | 2022-05-10 |
Family
ID=81409646
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105868766A (en) * | 2016-03-28 | 2016-08-17 | 浙江工业大学 | Method for automatically detecting and identifying workpiece in spraying streamline |
CN107175938A (en) * | 2017-05-25 | 2017-09-19 | 深圳市光彩凯宜电子开发有限公司 | A kind of method and system of use robot coding |
CN108555908A (en) * | 2018-04-12 | 2018-09-21 | 同济大学 | A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras |
CN109513557A (en) * | 2018-12-27 | 2019-03-26 | 海安科大机器人科技有限公司 | A kind of robot autonomous spraying method of ship segment spray painting of view-based access control model guidance |
CN109591472A (en) * | 2019-01-08 | 2019-04-09 | 五邑大学 | A kind of digital ink-jet printed method of warp knit vamp of view-based access control model |
CN110111383A (en) * | 2018-05-08 | 2019-08-09 | 广东聚华印刷显示技术有限公司 | The offset correction method of glass substrate, device and system |
CN110647821A (en) * | 2019-08-28 | 2020-01-03 | 盛视科技股份有限公司 | Method and device for object identification by image recognition |
CN113034600A (en) * | 2021-04-23 | 2021-06-25 | 上海交通大学 | Non-texture planar structure industrial part identification and 6D pose estimation method based on template matching |
CN113591923A (en) * | 2021-07-01 | 2021-11-02 | 四川大学 | Engine rocker arm part classification method based on image feature extraction and template matching |
CN113657564A (en) * | 2021-07-20 | 2021-11-16 | 埃华路(芜湖)机器人工程有限公司 | Dynamic part following code spraying system and code spraying method thereof |
CN113838144A (en) * | 2021-09-14 | 2021-12-24 | 杭州印鸽科技有限公司 | Method for positioning object on UV printer based on machine vision and deep learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |