CN116168027B - Intelligent woodworking machine cutting method based on visual positioning - Google Patents
- Publication number: CN116168027B (application CN202310442633.7A)
- Authority: CN (China)
- Prior art keywords: edge, image, obtaining, path, value
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 — Industrial image inspection (G—Physics; G06—Computing; G06T—Image data processing or generation, in general; G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection)
- G06T7/13 — Edge detection (G06T7/10—Segmentation; edge detection)
- G06T7/187 — Segmentation; edge detection involving region growing, region merging, or connected component labelling
- G06T2207/30108 — Industrial image inspection (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image)
- G06T2207/30161 — Wood; lumber
- Y02P70/10 — Greenhouse gas capture, material saving, heat recovery or other energy-efficient measures in the production process for final industrial or consumer products (Y02P—Climate change mitigation technologies in the production or processing of goods)
- Y02P90/30 — Computing systems specially adapted for manufacturing (Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas emissions mitigation)
Abstract
The invention relates to the technical field of image data processing and provides an intelligent woodworking machine cutting method based on visual positioning, comprising the following steps: acquiring a plate image; obtaining a path image, and obtaining a cutting path from adjacent frames of the plate image; detecting edges of the plate image to obtain an edge image and obtaining an atomization normalization value for each pixel from the edge lines; obtaining the gray histogram of the plate image, the connected domains of the plate image, and a window area for each pixel, and obtaining the possibility that a pixel lies in a powder coverage area from the number of connected domains and the gray differences between adjacent connected domains; obtaining a standard edge image and obtaining a chaotic normalization value for each pixel from the number of edge lines in the standard edge image; obtaining a final saliency value from the chaotic normalization value, the atomization normalization value, and the possibility of lying in the powder coverage area, and thereby obtaining the feature points of the plate image; and obtaining cutting paths from the feature points, continuing in the same way until cutting of the plate is complete. The invention suppresses the influence of powder on the image and reduces the computation of the three-step search method.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to an intelligent woodworking machine cutting method based on visual positioning.
Background
With the continuing development of the manufacturing industry, the conventional bevel-cutting modes of manual and semi-automatic cutting have become increasingly unable to meet industrial needs. Manual and semi-automatic cutting also struggle to guarantee cutting precision and efficiency, and complex shapes are difficult to cut. A robot cutting vision guidance system for complex working conditions is therefore needed, capable of panoramic image acquisition, automatic workpiece identification, and automatic, accurate acquisition of the workpiece cutting path in a factory environment, to achieve automated and intelligent groove cutting.
In automatic cutting, directly following a preset path may lead to path offset caused by vibration of the cutting knife. When image techniques are used to optimize the path, powder can interfere with the optimization; and fast matching algorithms incur excessive computational overhead because all pixels must be matched.
Disclosure of Invention
The invention provides an intelligent woodworking machine cutting method based on visual positioning to solve the problem of powder interfering with path optimization, and adopts the following technical scheme:
an embodiment of the invention provides an intelligent woodworking machine cutting method based on visual positioning, which comprises the following steps:
acquiring a plate image;
setting a preset path to obtain a path image, obtaining a cut-in opening according to the path image, and obtaining a first cutting path according to the motion vector between the first and second frames of the plate image and the cut-in opening;

setting an initial low threshold and performing edge detection on the third frame of the plate image to obtain an initial edge image; adjusting the low threshold to obtain a plurality of edge images and the edge lines in each edge image; and obtaining the atomization value of each edge pixel and the atomization normalization value of all pixels according to the low threshold and the low-threshold variation at which each edge pixel appears;

obtaining gray ranges from the gray histogram of the third frame of the plate image; obtaining a plurality of connected domains in the third frame according to the gray ranges, taking the mean gray value of all pixels of a connected domain as the gray value of that domain; obtaining a first window area for each pixel of the third frame; and obtaining the possibility that a pixel lies in a powder coverage area from the gray-value differences between the pixel's connected domain and all adjacent connected domains and from the number of connected domains in the first window area;

performing edge detection on the third frame of the plate image with the lowest low threshold to obtain a standard edge image, obtaining the angle differences of the edge pixels, and obtaining the chaotic normalization value of each pixel according to the angle differences of the edge pixels, the number of edge pixels, and the number of edge lines in each pixel's second window area;

obtaining the saliency value of the third frame of the plate image according to each pixel's chaotic normalization value, atomization normalization value, and possibility of lying in the powder coverage area; obtaining the final saliency value according to the saliency value, the atomization normalization value, and the saliency suppression factor; and obtaining the feature points of the third frame according to the final saliency value;

obtaining a second cutting path according to the feature points of the third frame and the first cutting path, comparing the second cutting path with the preset path and adjusting it; obtaining a third cutting path according to the feature points of the fourth frame and the second cutting path, comparing it with the preset path and adjusting it; and so on until cutting of the plate is complete.
Preferably, the method for obtaining the cut-in opening according to the path image comprises the following steps:
the path image is a binary image in which pixels on the path have gray value 1 and all other pixels have gray value 0; pixels with gray value 1 are searched for in the first row, first column, last row, and last column of the path image, and the cut-in and cut-out openings are obtained according to the moving direction of the preset path.
Preferably, the method for obtaining the initial edge image by setting the initial low threshold value and obtaining a plurality of edge images and edge lines in each edge image by adjusting the low threshold value includes:
setting an initial low threshold and a lowest low threshold, the low threshold being reduced in steps of 1 from the initial value, with each low threshold yielding one edge image; counting the edge lines in the edge images obtained at all low thresholds; dividing the edge-line lengths in each edge image into two classes with the Otsu (OTSU) threshold algorithm to obtain a length threshold; deleting edge lines whose length is less than or equal to the length threshold, and retaining edge lines longer than the length threshold.
Preferably, the method for obtaining the atomization value of each edge pixel point and the atomization normalization value of all the pixel points according to the low threshold value and the low threshold value variation of the edge pixel point comprises the following steps:
taking the edge image obtained at the initial low threshold as reference, counting the edge points of the edge image obtained each time the low threshold is reduced; for each newly appearing edge pixel, taking the difference between the initial low threshold and the current reduced low threshold as the edge complement difference, and obtaining the atomization value of the new edge pixel by multiplying the edge complement difference by the reciprocal of the current low threshold; for the same edge pixel appearing under different low thresholds, taking the minimum such value as its atomization value; the atomization value of edge pixels already present in the initial edge image is defined as 0;

assigning an initial atomization value of 1 to non-edge pixels, taking the mean atomization value of all pixels in each non-edge pixel's 3×3 neighbourhood as its atomization value, and linearly normalizing the atomization values of all pixels to obtain the atomization normalization value of each pixel.
Preferably, the method for obtaining the chaotic normalization value of the pixel points according to the angle difference of the edge pixel points, the number of the edge pixel points and the number of edge lines of the second window area of each pixel point includes:
taking the included angle between the tangent line of the edge pixel point on the edge line and the horizontal direction as an angle difference;
in the method, in the process of the invention,the angle difference of the jth edge pixel point of the ith edge line,for the number of edge pixels on the ith edge line,the sum of the angle differences of all edge pixel points of the ith edge line, namely the torsion degree of the ith edge line,for the length of the ith edge line,for the number of edge lines in the second window area corresponding to the z-th pixel point,and (3) marking the edge confusion degree of the second window area corresponding to the z-th pixel point as the edge confusion degree of the z-th pixel point, and linearly normalizing the edge confusion degree of the pixel point to obtain a confusion normalization value of each pixel point.
Preferably, the method for obtaining a second cutting path according to the feature points of the third frame of the plate image and the first cutting path, comparing and adjusting the second cutting path with the preset path, obtaining a third cutting path according to the feature points of the fourth frame of the plate image and the second cutting path, and comparing and adjusting the third cutting path with the preset path includes:
obtaining the motion vector from the second frame image to the third frame image by the three-step search method according to the feature points of the third frame image and recording it as the second motion vector; connecting the first bit of the second motion vector with the last bit of the first motion vector and recording the newly obtained cutting-path part as the second cutting path; comparing the second cutting path with the preset path, continuing to acquire the fourth frame image if the second cutting path is completely contained in the preset path, and otherwise performing path planning with a path-planning algorithm to optimize the next path before acquiring the fourth frame image; obtaining the feature points of the fourth frame image, obtaining the third motion vector and the third cutting path from them, comparing the third cutting path with the preset path, continuing to acquire the fifth frame image if the third cutting path is completely contained in the preset path, and otherwise performing path planning with the path-planning algorithm to optimize the next cutting path.
The beneficial effects of the invention are as follows: during machining, dust interference generated by the cutting knife while cutting wood is eliminated and saliency in the acquired image is suppressed, so that the extracted feature points have a high probability of finding matching pixels in the adjacent image. This avoids the problem of feature pixels taken from a dust area having no matching pixels in the adjacent image, which would make the obtained motion vector inaccurate, the path calibration inaccurate, and ultimately degrade the cutting effect. Meanwhile, acquiring feature points from the image saliency map reduces the computation needed to obtain motion vectors and greatly improves the real-time performance of the algorithm, adapting it to faster cutting speeds and increasing the cutting efficiency of the machine.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a schematic flow chart of an intelligent cutting method of a woodworking machine based on visual positioning according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flow chart of a method for intelligent cutting of a woodworking machine based on visual positioning according to an embodiment of the invention is shown, the method comprises the following steps:
step S001, acquiring an image by using an industrial camera to obtain an acquired gray scale image.
An industrial camera is placed beside the cutting knife and shoots the plate vertically downward, so that the camera moves together with the knife. During the knife's movement, each camera frame yields an acquired image whose centre is the position of the cutting knife; when the direction of the knife changes, the shooting direction of the camera remains unchanged. The acquired image is denoised and enhanced with Gaussian filtering and converted to grayscale to obtain the acquired gray image.
Step S002, obtaining a plate image according to the acquired gray level image, obtaining a cutting path based on the first frame and the second frame of plate image, and obtaining an atomization normalization value of each pixel point based on the third frame of plate image.
In cutting the plate, a preset path must be set with the numerical control system, and the cutting knife moves along the preset path to cut the plate. Vibration of the cutting knife, however, may offset its position and make the cutting path differ from the preset path; in this embodiment an industrial camera is used to adjust the position of the cutting knife so that it follows the preset path.
First, a semantic segmentation network processes the acquired gray image to obtain the plate area, recorded as the plate image. A U-net neural network is used: the input is the acquired gray image, pixels of the plate area and of the cutting knife are labelled 1 and the remaining pixels 0, and the output is the plate image. The network performs classification, and the loss function is the cross-entropy loss.
A path image is formed from the preset path. Because the path image and the plate image may differ in size, the plate image is rescaled according to the size of the path image to facilitate comparison: y1 = k1 × y2 and x1 = k2 × x2, where x2 and y2 are the dimensions of the plate image, x1 and y1 are the dimensions of the path image, and k1 and k2 are the scaling factors applied to the acquired image.
The path image is a binary image with no other texture information: pixels on the path have gray value 1 and are recorded as path pixels, and the remaining pixels have gray value 0. Pixels with gray value 1 are searched for in the first row, first column, last row, and last column of the path image, and the cut-in and cut-out openings are obtained according to the moving direction of the preset path.
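The border scan just described can be sketched as a short Python fragment. This is an illustration, not part of the patent; the function name and the nested-list representation of the binary path image are assumptions:

```python
def find_border_path_points(path_img):
    """Scan the first/last row and first/last column of a binary path
    image (1 = path pixel, 0 = background) and return the coordinates
    where the preset path meets the image border; per the method these
    correspond to the cut-in and cut-out openings."""
    h, w = len(path_img), len(path_img[0])
    hits = []
    for c in range(w):                      # first and last row
        if path_img[0][c] == 1:
            hits.append((0, c))
        if path_img[h - 1][c] == 1:
            hits.append((h - 1, c))
    for r in range(1, h - 1):               # first and last column
        if path_img[r][0] == 1:
            hits.append((r, 0))
        if path_img[r][w - 1] == 1:
            hits.append((r, w - 1))
    return hits
```

Which of the returned border points is the cut-in and which the cut-out opening is then decided by the moving direction of the preset path, as the text states.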
When the cutting knife is directly above the cut-in opening, the numerical control system moves the knife downward; the downward distance is set according to the thickness of the board and is 3 cm in this embodiment. After the knife moves down, the numerical control system cuts the board along the preset path.
Because of the cutting knife, the plate image has a certain blind area, making it difficult to obtain the cutting path directly from the plate image. This embodiment therefore uses the three-step search method, a fast matching algorithm, to obtain the motion vector between adjacent frames of the plate image, derives the cutting path from the motion vector, and adjusts the cutting path by comparing it with the preset path. During cutting, powder appears on the plate and affects the accuracy of the obtained motion vector, so its influence must be reduced. By the characteristics of the powder, it diffuses from the cutting crack outward to both sides, thinning as it spreads: powder is densest around the cutting path, and the farther from the cutting path, the less powder appears. Powder freshly cut from the wood that lands outside the plate reflects light through the air and appears darker, while powder inside the cut does not contact the air in the same way, so the gray value of powder inside the cut in the plate image is higher than the gray value of the plate.
For the acquired first and second frames of the plate image, a block matching algorithm obtains the motion vector between the two frames; fast matching is a known technique and is not detailed here. Since the influence of powder is small when cutting starts, the motion vector of the two frames is obtained directly by the fast matching algorithm and recorded as the first motion vector, whose starting point is the position of the cut-in opening in the plate image. The first motion vector is the beginning of the cutting path and is recorded as the first cutting path.
Because cutting has now begun, powder is generated, and a third frame of the plate image is acquired. Pixels representing powder in the plate image are designated powder pixels. During cutting, powder floats to positions farther from the cutting path, where it does not accumulate but scatters; the occlusion of the plate by scattered powder is called atomization.
Canny edge detection is applied to the plate image with an initial low threshold a and a lowest low threshold b, whose values depend on the plate; in this embodiment a = 100 and b = 50. With the high threshold unchanged, the low threshold is reduced from the initial low threshold to the lowest low threshold in steps of 1, yielding a series of edge images of the plate image and all edge lines of each edge image. When powder is sparse it occludes edges only weakly: with a small low threshold an edge line is detected completely, whereas with a larger low threshold the occluded part of the edge line is missed. Accordingly, the parts of edge lines that appear only as the low threshold is reduced from its initial value are more likely to be powder pixels; the degree to which an edge line goes from undetected to detected is recorded as the atomization value, from which the atomization value of each pixel can be calculated.
Specifically, the edge image obtained with the initial low threshold is recorded as the initial edge image. The number of pixels on each edge line is counted as the length of that edge line, and the Otsu (OTSU) threshold algorithm divides the edge-line lengths into two classes to obtain a length threshold: edge lines whose length is less than or equal to the length threshold are deleted, and longer edge lines are retained. The low threshold is then lowered step by step, the edge lines being screened in the same way after each reduction. For each edge pixel in the edge image obtained at a reduced low threshold, it is checked whether the pixel appears in the initial edge image; if it does not, the difference between the initial low threshold and the reduced low threshold is taken as the edge complement difference, indicating how many gray levels the threshold had to pass before the edge point appeared. The atomization value is then calculated from the edge complement difference of each edge point at each low-threshold change, as follows:
W_v = min_u ( Δg_{u,v} / T_u )

where Δg_{u,v} is the edge complement difference of the v-th edge pixel at the u-th low-threshold change, T_u is the low threshold at the u-th change, min( ) takes the minimum over all low-threshold changes at which the pixel appears, and W_v is the atomization value of the v-th edge pixel.
Note that the same pixel may receive several atomization values under different low thresholds; only the minimum is kept. The larger the edge complement difference, the further the low threshold must drop before the edge point is detected, meaning the pixel is more deeply atomized and more likely to be powder.
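The atomization-value rule above (edge complement difference times the reciprocal of the current low threshold, minimised over all thresholds at which the pixel appears) can be sketched as follows. The function name and the input format are illustrative assumptions:

```python
def fogging_value(appear_thresholds, initial_low=100):
    """Atomization (fogging) value of one edge pixel.

    appear_thresholds lists every low-threshold value at which this
    pixel was detected as an edge while the low threshold was lowered
    from initial_low.  Edge complement difference = initial_low - t;
    the atomization value is that difference times 1/t, minimised over
    all thresholds.  Pixels already present in the initial edge image
    (detected at initial_low itself) get 0.
    """
    if initial_low in appear_thresholds:
        return 0.0
    return min((initial_low - t) / t for t in appear_thresholds)
```

A pixel first appearing only at a much lower threshold yields a larger value, matching the text's reading that a deeper threshold drop implies deeper atomization.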
For non-edge pixels in the edge image corresponding to the lowest low threshold, an initial atomization value of 1 is assigned; the atomization value of each non-edge pixel is then the mean of the atomization values of all pixels in its 3×3 neighbourhood.

The atomization values of all pixels in the plate image are linearly normalized to obtain the atomization normalization value of each pixel.
Thus, the atomization normalization value of each pixel point in the plate image is obtained.
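A minimal sketch of the neighbourhood fill and linear normalization steps, in pure Python on nested lists (names and data layout are illustrative, not from the patent):

```python
def fog_map(fog, edge_mask):
    """Non-edge pixels (edge_mask False) are initialised to 1 and then
    replaced by the mean atomization value of their 3x3 neighbourhood;
    finally all values are linearly normalised to [0, 1]."""
    h, w = len(fog), len(fog[0])
    base = [[fog[r][c] if edge_mask[r][c] else 1.0 for c in range(w)]
            for r in range(h)]
    out = [row[:] for row in base]
    for r in range(h):
        for c in range(w):
            if not edge_mask[r][c]:
                vals = [base[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2))]
                out[r][c] = sum(vals) / len(vals)
    flat = [v for row in out for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in out]
```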
Step S003, obtaining the possibility that each pixel lies in the powder coverage area and the chaotic normalization value of each pixel, and obtaining the feature points of the plate image based on them.
The atomization value of each pixel is calculated for scattered powder. In an area where powder accumulates, however, textures in the plate image are occluded over a large area and cannot be detected accurately even when the low threshold is changed; part of the accumulated powder remains undetected as the low threshold is lowered. Because accumulated powder and powder gray values differ from the plate gray value, the possibility that a pixel lies in a powder accumulation area can nevertheless be obtained from the dense areas of gray mutation in the plate image.
Specifically, the gray histogram of the plate image is obtained, and the abscissae of the extrema of the gray-value modes, together with the abscissae of the histogram's start and end points, are recorded as center points. For every two adjacent center points the midpoint is obtained and recorded as a dividing point, and the gray levels between two adjacent dividing points form one gray range, yielding several gray ranges. Connected-domain analysis in the plate image over all obtained gray ranges yields a plurality of connected domains; the gray values of all pixels in a connected domain are averaged to obtain the gray value of that domain, and the gray-value difference between each connected domain and its adjacent domains is obtained. For each pixel of the plate image, a window of preset size centred on the pixel is denoted as the first window area. The more connected domains in the first window area, the more severely powder pixels and normal pixels intersect there; the mixed area of powder pixels and normal pixels is called the powder coverage area, where a large amount of powder covers the plate without covering it completely. The possibility that each pixel is in the powder coverage area is obtained from the gray-value differences between the pixel's connected domain and its adjacent domains and from the number of connected domains in the pixel's first window area, by the following formula:
P_z = D_z × N_z

where D_z is the average gray-value difference between the connected domain containing the z-th pixel of the plate image and all of its adjacent connected domains, N_z is the number of connected domains within the first window area corresponding to the z-th pixel, and P_z is the powder coverage value of the z-th pixel.

The powder coverage values of all pixels in the plate image are linearly normalized to obtain the possibility that the z-th pixel is located in the powder coverage area.
The greater the possibility that a pixel is located in the powder coverage area, the greater the probability that it is a powder pixel.
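A sketch of the histogram split into gray ranges and of the powder coverage value follows. The midpoint rule mirrors the description; the product form of the coverage value is an assumption, since the translated text only names the two factors:

```python
def gray_ranges(mode_extrema, lo=0, hi=255):
    """Centre points = histogram mode extrema plus the start and end
    abscissae; dividing points = midpoints of adjacent centre points;
    each pair of adjacent dividing points bounds one gray range."""
    centres = sorted({lo, hi, *mode_extrema})
    divides = [lo] + [(a + b) // 2 for a, b in zip(centres, centres[1:])] + [hi]
    return list(zip(divides, divides[1:]))

def powder_coverage(avg_gray_diff, n_domains_in_window):
    """Powder coverage value of one pixel: mean gray difference between
    its connected domain and the adjacent domains, combined with the
    number of domains inside its first window area (product assumed)."""
    return avg_gray_diff * n_domains_in_window
```

Connected-domain extraction within each gray range (e.g. flood fill or a labelling pass) is omitted here for brevity.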
For the lowest low threshold, the edge image of the plate image is taken as the standard edge image. Because cutting positions differ and powder drifts to random positions on the plate, a texture not covered by powder appears as one complete, clear edge line, whereas a texture covered by powder breaks into several shorter edge lines. A window of preset size centred on each pixel of the standard edge image is therefore denoted as the second window area. The number of edge lines in each second window area and the number of edge pixels on each edge line are obtained; the angle between the tangent of each edge pixel on its edge line and the horizontal direction is recorded as the angle difference; and the edge confusion degree of the second window area is obtained from the angle differences, the edge-line lengths, and the number of edge lines, by the following formula:
H_z = M_z × Σ_{i=1..M_z} ( Σ_{j=1..n_i} θ_{ij} / l_i )

where θ_{ij} is the angle difference of the j-th edge pixel of the i-th edge line, n_i is the number of edge pixels on the i-th edge line, Σ_{j} θ_{ij} is the sum of the angle differences of all edge pixels of the i-th edge line, i.e. the torsion degree of the i-th edge line, l_i is the length of the i-th edge line, M_z is the number of edge lines in the second window area corresponding to the z-th pixel, and H_z is the edge confusion degree of the second window area corresponding to the z-th pixel, recorded as the edge confusion degree of the z-th pixel.
The edge confusion degrees of all pixels are linearly normalized to obtain the confusion normalization value of each pixel, denoted H_z.
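The edge-confusion computation for one second window region can be sketched as below. The embodiment's combining formula is given only as an image, so the exact form is not known: summing each line's torsion degree divided by its length and weighting by the line count is one plausible reading, and the `edge_lines` input format (a list of point arrays) is likewise an assumption.

```python
import numpy as np

def edge_confusion(edge_lines):
    """Plausible edge-confusion score for one second-window region.
    `edge_lines` is a list of (N_i, 2) arrays of (x, y) edge pixels."""
    m = len(edge_lines)
    score = 0.0
    for line in edge_lines:
        line = np.asarray(line, dtype=np.float64)
        if len(line) < 2:
            continue
        # Tangent direction at each pixel from finite differences;
        # the angle difference is taken against the horizontal axis.
        d = np.gradient(line, axis=0)
        angles = np.abs(np.arctan2(d[:, 1], d[:, 0]))
        torsion = angles.sum()          # torsion degree of the i-th line
        length = np.linalg.norm(np.diff(line, axis=0), axis=1).sum()
        if length > 0:
            score += torsion / length
    return m * score
```

A region containing one long straight line scores 0; many short, twisting lines (the powder-covered case) score high.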
The confusion normalization value, the atomization normalization value, and the likelihood of lying in the powder coverage area of each pixel are multiplied to obtain the likelihood G that the pixel is a powder pixel.
After the likelihood that each pixel is a powder pixel is obtained, the AC saliency analysis algorithm is used to obtain the saliency value Q of each pixel in the image. The likelihood that a pixel is a powder pixel serves as a saliency suppression factor, which regulates the saliency as follows:
where Q_z is the saliency value of the z-th pixel, A_z is the atomization normalization value of the z-th pixel, G_z is the saliency suppression factor of the z-th pixel, and Q'_z is the regulated saliency value of the z-th pixel, recorded as the final saliency value. The more likely a pixel is a powder pixel, the lower its final saliency value. A saliency threshold ψ = 0.7 is set, and pixels whose final saliency value exceeds ψ are recorded as feature points.
In this way, the feature points among all pixels of the plate image are obtained.
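A sketch of the saliency regulation step. The AC computation follows the algorithm's usual form (mean absolute difference between a pixel and its neighborhood means at several scales); Q' = Q × (1 − G) is an assumed form of the suppression, since the patent's formula is given only as an image, and the scale set is likewise an assumption.

```python
import numpy as np
from scipy import ndimage

def ac_saliency(gray, scales=(3, 5, 9)):
    """AC-style saliency: mean absolute difference between each pixel and
    the mean of its neighborhood at several window scales."""
    g = gray.astype(np.float64)
    sal = np.zeros_like(g)
    for k in scales:
        sal += np.abs(g - ndimage.uniform_filter(g, size=k))
    return sal / len(scales)

def final_saliency(gray, powder_likelihood, thresh=0.7):
    """Suppress saliency where a pixel is likely powder (assumed form
    Q' = Q * (1 - G)); pixels whose final value exceeds `thresh` after
    min-max scaling are the feature points."""
    q = ac_saliency(gray)
    span = q.max() - q.min()
    q = (q - q.min()) / span if span > 0 else q
    q_final = q * (1.0 - powder_likelihood)
    return q_final, q_final > thresh
```

With this regulation, a pixel that is certainly powder (G = 1) can never become a feature point, which is exactly the behavior the preceding paragraph describes.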
Step S004, cutting the plate according to the cutting path.
For the acquired third frame image, its feature points are obtained with the method above. From these feature points, the motion vector from the second frame image to the third frame image is obtained with the three-step search method and recorded as the second motion vector. The head of the second motion vector is connected to the tail of the first motion vector, and the newly obtained portion of the cutting path is recorded as the second cutting path. The second cutting path is compared with the preset path: if it is completely contained in the preset path, i.e. every pixel the second cutting path passes through is also passed by the preset path, the fourth frame image continues to be acquired; if it is not completely contained, the cutting path has deviated from the preset path, so the path-planning algorithm is used to optimize the next path before the fourth frame image is acquired. The feature points of the fourth frame image are then obtained, and the third motion vector and the third cutting path are obtained from them. The third cutting path is compared with the preset path in the same way: if it is completely contained, the fifth frame image continues to be acquired; otherwise the path-planning algorithm optimizes the next cutting path before the fifth frame image is acquired, and so on until the cutting of the plate is completed.
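The per-frame motion-vector estimation and containment test can be sketched as follows. The block size, starting step, and SAD cost are the classic three-step-search defaults, not values taken from the patent, and `path_contained` is a direct pixel-set reading of the containment condition.

```python
import numpy as np

def three_step_search(prev, curr, block_xy, block=16, step=4):
    """Classic three-step search: match the `block`-sized patch at
    `block_xy` in `prev` against `curr`, halving the search step each
    round; returns the motion vector (dy, dx) minimizing SAD."""
    y, x = block_xy
    ref = prev[y:y + block, x:x + block].astype(np.int64)

    def sad(dy, dx):
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
            return np.inf
        cand = curr[yy:yy + block, xx:xx + block].astype(np.int64)
        return np.abs(ref - cand).sum()

    best = (0, 0)
    while step >= 1:
        cy, cx = best
        costs = {(cy + dy * step, cx + dx * step): sad(cy + dy * step, cx + dx * step)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
        best = min(costs, key=costs.get)
        step //= 2
    return best

def path_contained(cut_path, preset_path):
    """True when every pixel the cutting path passes through also lies on
    the preset path -- the containment test described above."""
    return set(map(tuple, cut_path)) <= set(map(tuple, preset_path))
```

When `path_contained` returns False, the cut has drifted and the next path segment is re-planned before the next frame is processed.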
The cutting path is thereby obtained and positioned in real time, so that intelligent cutting is completed along the cutting path.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (3)
1. An intelligent woodworking machine cutting method based on visual positioning, characterized by comprising the following steps:
acquiring a plate image;
setting a preset path to obtain a path image, obtaining a cutting opening according to the path image, and obtaining a first cutting path according to the motion vectors of the first frame of plate image and the second frame of plate image and the cutting opening;
setting an initial low threshold value for edge detection of a third frame of plate image to obtain an initial edge image, adjusting the low threshold value to obtain a plurality of edge images and the edge lines in each edge image, and obtaining an atomization value of each edge pixel point and an atomization normalization value of all pixel points according to the low threshold values of the edge pixel points and the low threshold value variation;
obtaining a gray range according to a gray histogram of the third frame of plate image, obtaining a plurality of connected domains in the third frame of plate image according to the gray range, taking the average value of gray values of all pixel points of the connected domains as the gray value of the connected domains, obtaining a first window area of each pixel point in the third frame of plate image, and obtaining the possibility that the pixel points are located in a powder coverage area by the difference of the gray value of the connected domain corresponding to each pixel point and all adjacent connected domains and the number of the connected domains in the first window area;
obtaining a standard edge image by using the lowest threshold value edge detection on the third frame of plate image, obtaining the angle difference of edge pixel points, and obtaining a chaotic normalization value of the pixel points according to the angle difference of the edge pixel points, the number of the edge pixel points and the number of edge lines of a second window area of each pixel point;
obtaining a significant value of the third frame of plate image according to the chaotic normalization value, the atomization normalization value and the possibility of being positioned in the powder coverage area of each pixel point, obtaining a final significant value according to the significant value, the atomization normalization value and the significant suppression factor, and obtaining a characteristic point of the third frame of plate image according to the final significant value;
obtaining a second cutting path according to the characteristic points of the third frame of plate image and the first cutting path, comparing and adjusting the second cutting path with a preset path, obtaining a third cutting path according to the characteristic points of the fourth frame of plate image and the second cutting path, comparing and adjusting the third cutting path with the preset path, and finishing cutting of the plate by analogy;
the method for obtaining the atomization value of each edge pixel point and the atomization normalization value of all the pixel points according to the low threshold value and the low threshold value variation of the edge pixel points comprises the following steps:
with the edge image obtained at the initial low threshold as reference, each time the low threshold is reduced, counting the number of edge points of the edge image corresponding to the reduced low threshold; for each newly added edge pixel point, taking the difference between the reduced low threshold and the initial low threshold as the edge complement difference, and multiplying the edge complement difference by the reciprocal of the reduced low threshold to obtain the atomization value of the newly added edge pixel point; for the same edge pixel point under different low thresholds, taking the minimum atomization value as the atomization value of that edge pixel point; the atomization value of edge pixel points already present in the initial edge image is defined as 0;
defining the initial atomization value of non-edge pixel points as 1; taking, for each non-edge pixel point, the atomization values of all pixel points in its neighborhood to obtain the atomization value of the non-edge pixel point; and linearly normalizing the atomization values of all pixel points to obtain the atomization normalization value of each pixel point;
the method for obtaining the chaotic normalization value of the pixel points according to the angle difference of the edge pixel points, the number of the edge pixel points and the number of the edge lines of the second window area of each pixel point comprises the following steps:
taking the included angle between the tangent line of the edge pixel point on the edge line and the horizontal direction as an angle difference;
where θ_ij is the angle difference of the j-th edge pixel point of the i-th edge line, n_i is the number of edge pixel points on the i-th edge line, the sum of the angle differences of all edge pixel points of the i-th edge line is the torsion degree of the i-th edge line, l_i is the length of the i-th edge line, m_z is the number of edge lines in the second window area corresponding to the z-th pixel point, and R_z is the edge confusion degree of the second window area corresponding to the z-th pixel point, recorded as the edge confusion degree of the z-th pixel point; the edge confusion degrees of the pixel points are linearly normalized to obtain a confusion normalization value of each pixel point;
the method for obtaining a second cutting path according to the characteristic points and the first cutting path of the third frame of plate image, comparing and adjusting the second cutting path with a preset path, obtaining a third cutting path according to the characteristic points and the second cutting path of the fourth frame of plate image, and comparing and adjusting the third cutting path with the preset path comprises the following steps:
obtaining the motion vector from the second frame image to the third frame image by using a three-step search method according to the characteristic points of the third frame image, recording it as the second motion vector, connecting the first bit of the second motion vector with the last bit of the first motion vector, and recording the newly obtained cutting path part as the second cutting path; comparing the second cutting path with the preset path: if the second cutting path is completely contained in the preset path, continuing to acquire the fourth frame image; if it is not completely contained in the preset path, performing path planning with the path-planning algorithm to optimize the next path, and then acquiring the fourth frame image; obtaining the characteristic points of the fourth frame image, obtaining the third motion vector and the third cutting path according to them, and comparing the third cutting path with the preset path: if the third cutting path is completely contained in the preset path, continuing to acquire the fifth frame image; if it is not completely contained in the preset path, performing path planning with the path-planning algorithm and optimizing the next cutting path.
2. The intelligent cutting method of the woodworking machinery based on visual positioning according to claim 1, wherein the method for obtaining the cutting opening according to the path image is as follows:
the path image is a binary image in which pixel points on the path have a gray value of 1 and all other pixel points have a gray value of 0; pixel points with gray value 1 are found in the first row, first column, last row and last column of the path image, and the cut-in and cut-out openings are obtained according to the moving direction of the preset path.
3. The intelligent cutting method of woodworking machinery based on visual positioning according to claim 1, wherein the method for setting an initial low threshold to obtain an initial edge image, and adjusting the low threshold to obtain a plurality of edge images and edge lines in each edge image is as follows:
setting an initial low threshold and a lowest low threshold, reducing the low threshold in steps of 1, obtaining an edge image at each low threshold, and counting the edge lines in the edge images obtained at all low thresholds; obtaining a length threshold by applying OTSU thresholding to the edge-line lengths in each edge image, deleting edge lines whose length is smaller than or equal to the length threshold, and keeping edge lines whose length is greater than the length threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310442633.7A CN116168027B (en) | 2023-04-24 | 2023-04-24 | Intelligent woodworking machine cutting method based on visual positioning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116168027A CN116168027A (en) | 2023-05-26 |
CN116168027B true CN116168027B (en) | 2023-07-04 |
Family
ID=86416754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310442633.7A Active CN116168027B (en) | 2023-04-24 | 2023-04-24 | Intelligent woodworking machine cutting method based on visual positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116168027B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116485874B (en) * | 2023-06-25 | 2023-08-29 | 深圳市众翔奕精密科技有限公司 | Intelligent detection method and system for cutting intervals of die-cutting auxiliary materials |
CN116626029B (en) * | 2023-07-20 | 2023-09-22 | 津泰(天津)医疗器械有限公司 | Detection method for color difference of cobalt chloride test paper for diabetes |
CN116740059B (en) * | 2023-08-11 | 2023-10-20 | 济宁金康工贸股份有限公司 | Intelligent regulation and control method for door and window machining |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114638828A (en) * | 2022-05-18 | 2022-06-17 | 数聚(山东)医疗科技有限公司 | Radiological image intelligent segmentation method based on computer vision |
CN115760884A (en) * | 2023-01-06 | 2023-03-07 | 山东恩信特种车辆制造有限公司 | Semitrailer surface welding slag optimization segmentation method based on image processing |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942780B (en) * | 2014-03-27 | 2017-06-16 | 北京工业大学 | Based on the thalamus and its minor structure dividing method that improve fuzzy connectedness algorithm |
JP6846365B2 (en) * | 2018-01-18 | 2021-03-24 | Kddi株式会社 | Suitable methods and equipment for foreground and background separation |
CN108875740B (en) * | 2018-06-15 | 2021-06-08 | 浙江大学 | Machine vision cutting method applied to laser cutting machine |
CN108876697B (en) * | 2018-06-22 | 2022-02-25 | 南开大学 | Pixel-level image authentication, tampering detection and recovery method |
JP2023505663A (en) * | 2019-12-05 | 2023-02-10 | 嘉楠明芯(北京)科技有限公司 | Character segmentation method, device and computer readable storage medium |
CN115147448A (en) * | 2022-05-20 | 2022-10-04 | 南京理工大学 | Image enhancement and feature extraction method for automatic welding |
CN114862880B (en) * | 2022-07-06 | 2022-09-02 | 山东泰恒石材有限公司 | Cutting optimization method and system based on anisotropic stone |
CN115423864A (en) * | 2022-07-26 | 2022-12-02 | 北京工业大学 | Automatic positioning method for chip cutting path in wafer image |
CN115457004B (en) * | 2022-09-22 | 2023-05-26 | 山东华太新能源电池有限公司 | Intelligent detection method of zinc paste based on computer vision |
CN115890012A (en) * | 2022-09-23 | 2023-04-04 | 武汉帝尔激光科技股份有限公司 | Wafer cutting path generation and laser cutting method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||