CN117314940B - Laser cutting part contour rapid segmentation method based on artificial intelligence - Google Patents

Laser cutting part contour rapid segmentation method based on artificial intelligence

Info

Publication number
CN117314940B
CN117314940B (application CN202311617964.6A)
Authority
CN
China
Prior art keywords
image
block
reference block
gray level
main reference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311617964.6A
Other languages
Chinese (zh)
Other versions
CN117314940A (en)
Inventor
闫新华 (Yan Xinhua)
闫新兴 (Yan Xinxing)
付黎伟 (Fu Liwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nobot Intelligent Equipment Shandong Co ltd
Original Assignee
Nobot Intelligent Equipment Shandong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nobot Intelligent Equipment Shandong Co ltd filed Critical Nobot Intelligent Equipment Shandong Co ltd
Priority to CN202311617964.6A
Publication of CN117314940A
Application granted
Publication of CN117314940B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/168Segmentation; Edge detection involving transform domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of image data processing, in particular to an artificial-intelligence-based method for rapidly segmenting the contour of a laser-cut part, comprising the following steps: a gray level image of the part is obtained and divided into a number of image blocks with a quadtree algorithm; the enhanced image block of each image block under different gain parameters is then obtained, together with the spectrum amplitude value and the energy value of each pixel point in the enhanced image block; the multi-scale constraint of each enhanced image block is obtained from the spectrum amplitude difference of each enhanced image block and the partial derivatives, in the transverse and longitudinal directions, of the energy values of all pixel points in the enhanced image block, giving the optimal enhanced image block of each image block and thus the contour edge line of the part in the part's optimal enhanced image. By adaptively segmenting the image and adaptively selecting the gain parameter of each image block, the invention improves the enhancement effect of the image and thereby the accuracy of rapid segmentation of the laser-cut part contour.

Description

Laser cutting part contour rapid segmentation method based on artificial intelligence
Technical Field
The invention relates to the technical field of image data processing, in particular to a laser cutting part contour rapid segmentation method based on artificial intelligence.
Background
Artificial-intelligence-based rapid segmentation of laser-cut part contours is a technology combining computer vision, image enhancement, real-time processing and related fields; it aims to improve the accuracy of part-contour extraction and the processing efficiency by automatically and optimally processing the image data generated during laser cutting. This includes using real-time algorithms and hardware acceleration to adapt the method to an industrial environment while optimizing the inspection and cutting of the part profile. To enhance the image of the cut part and thereby increase its visibility, linear contrast enhancement is typically used.
The existing problem is as follows: because the distribution of part edge features is highly complex while the gain parameter in the traditional linear enhancement algorithm is fixed, many detail features can be lost when enhancing part images, which weakens the enhancement effect and thus degrades the part contour segmentation.
Disclosure of Invention
The invention provides a laser cutting part contour rapid segmentation method based on artificial intelligence, which aims to solve the existing problems.
The invention discloses an artificial intelligence-based rapid laser cutting part contour segmentation method, which adopts the following technical scheme:
the embodiment of the invention provides an artificial intelligence-based laser cutting part contour rapid segmentation method, which comprises the following steps of:
collecting a laser cutting part image, and carrying out graying treatment to obtain a part gray image; dividing the gray level image of the part into a plurality of image blocks by using a quadtree algorithm;
marking any image block as a target block; carrying out linear transformation on the target block according to different gain parameters in sequence to obtain an enhanced image block of the target block under different gain parameters; the enhanced image block of the target block under all gain parameters is recorded as a reference block; performing discrete Fourier transform on each reference block to obtain a spectrum amplitude value and an energy value of each pixel point in each reference block;
recording any one reference block as a main reference block; obtaining the spectrum amplitude difference of the main reference block according to the spectrum amplitude difference of the pixel points between the main reference block and the adjacent reference blocks; obtaining multi-scale constraint of the main reference block according to partial derivative differences of energy values of all pixel points in the main reference block in the transverse direction and the longitudinal direction and spectrum amplitude differences of the main reference block;
obtaining an optimal enhanced image block of the target block according to the multi-scale constraints of all the reference blocks; and obtaining the contour edge line of the part in the optimal enhanced image of the part according to the optimal enhanced image blocks of all the image blocks.
Further, the gray level image of the part is divided into a plurality of image blocks by using a quadtree algorithm, and the method comprises the following specific steps:
performing the first cross division by using a quadtree algorithm to obtain the four equal image blocks of the part gray level image;
obtaining a variation coefficient of each image block of the part gray level image according to the number of pixel points and gray level value difference in each image block of the part gray level image;
obtaining the homogeneity coefficient of each of the four equal image blocks of the part gray level image according to the number of pixel points in each image block, the gray value difference, the number of cross divisions, and the variance of the variation coefficients of the four image blocks;
among the four image blocks equally divided in the gray level image of the part, the image block with the homogeneity coefficient smaller than a preset segmentation threshold value is marked as a segmented image block;
performing second cross division by using a quadtree algorithm to obtain four image blocks equally divided by each divided image block; dividing each divided image block into four image blocks equally, and marking the four image blocks as new image blocks;
obtaining the homogeneous coefficients of four new image blocks equally divided by each divided image block according to the acquisition mode of the homogeneous coefficients of the four image blocks equally divided by the gray level image of the part;
recording the new image block with the homogeneity coefficient smaller than the preset segmentation threshold as the new segmented image block;
performing third cross division by using a quadtree algorithm to obtain four image blocks which are equally divided by each newly divided image block; and so on, dividing the gray level image of the part into a plurality of image blocks.
Further, the coefficient of variation of each image block of the part gray level image is obtained according to the difference of the number of pixel points and gray level values in each image block of the part gray level image, comprising the following specific steps:
and calculating the variance of gray values of all pixels in each image block of the part gray image bisection, dividing the variance of the gray values by the number of pixels in each image block of the part gray image bisection, and recording the variance as the variation coefficient of each image block of the part gray image bisection.
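As a sketch of this step, the variation coefficient (gray-value variance divided by pixel count) can be computed with NumPy; the function name is illustrative, not from the patent:

```python
import numpy as np

def variation_coefficient(block: np.ndarray) -> float:
    # Variance of all gray values in the block, divided by the number
    # of pixels, as the step above defines the variation coefficient.
    return float(np.var(block)) / block.size
```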
Further, the homogeneity coefficient of each of the four equal image blocks of the part gray level image is obtained according to the number of pixel points in each image block, the gray value difference, the number of cross divisions, and the variance of the variation coefficients of the four image blocks; the corresponding calculation formula is:

H_i = Norm( (n_i × V) / (k_i × σ_i²) )

where H_i is the homogeneity coefficient of the i-th image block of the part gray level image quartering, n_i is the number of pixel points in the i-th image block, σ_i² is the variance of the gray values of all pixel points in the i-th image block, k_i is the number of cross divisions corresponding to the i-th image block, V is the variance of the variation coefficients of the four image blocks of the part gray level image quartering, and Norm is a linear normalization function.
Further, the method sequentially carries out linear transformation on the target block according to different gain parameters to obtain the enhanced image block of the target block under the different gain parameters, and comprises the following specific steps:
from the slaveStarting, increment c each time until +.>Ending, obtaining a gain parameter sequence; wherein->For a preset minimum gain parameter, +.>C is a preset increment value, which is a preset maximum gain parameter;
recording any gain parameter in the gain parameter sequence as a target gain parameter;
and performing linear transformation on the target block by using an image linear enhancement algorithm with the gain parameter as a target gain parameter and the offset parameter as a preset offset parameter to obtain an enhanced image block of the target block under the target gain parameter.
Further, the method for obtaining the spectrum amplitude difference of the main reference block according to the spectrum amplitude difference of the pixel points between the main reference block and the adjacent reference block comprises the following specific steps:
according to the sequence of gain parameters in the gain parameter sequence, marking the next reference block corresponding to the main reference block as a sub-reference block;
and respectively calculating the average value of the spectrum amplitude values of all pixel points in the main reference block and the sub-reference block, and recording the normalized value of the difference of the average value as the spectrum amplitude difference of the main reference block.
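A minimal sketch of this step, assuming `np.fft.fft2` as the discrete Fourier transform; since the patent does not spell out the normalization of the mean difference, a simple `x/(x+1)` mapping into [0, 1) is used as a stand-in:

```python
import numpy as np

def spectrum_amplitude_difference(main_block: np.ndarray, sub_block: np.ndarray) -> float:
    # Mean spectrum amplitude of each reference block via the 2-D DFT.
    amp_main = np.abs(np.fft.fft2(main_block)).mean()
    amp_sub = np.abs(np.fft.fft2(sub_block)).mean()
    # Normalized value of the difference of the two means; the exact
    # normalization is not given in the text, so x / (x + 1) is assumed.
    diff = abs(amp_main - amp_sub)
    return diff / (diff + 1.0)
```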
Further, the multi-scale constraint of the main reference block is obtained according to the partial derivative difference of the energy values of all the pixel points in the main reference block in the transverse direction and the longitudinal direction and the spectrum amplitude difference of the main reference block, and the method comprises the following specific steps:
in the main reference block, a plane rectangular coordinate system is constructed by taking the lower left corner as an origin, taking the horizontal right as a transverse axis and taking the vertical upward as a longitudinal axis;
in a plane rectangular coordinate system, obtaining an abscissa value and an ordinate value of each pixel point in the main reference block;
and obtaining the multi-scale constraint of the main reference block according to the energy values, the horizontal coordinate values, the vertical coordinate values, the partial derivatives and the spectrum amplitude differences of the main reference block.
Further, the multi-scale constraint of the main reference block is obtained according to the energy values, abscissa values and ordinate values of all pixel points in the main reference block, the partial derivatives of the energy values, and the spectrum amplitude difference of the main reference block; the corresponding calculation formula is:

P = Norm( α × D + (1/m) × Σ_{j=1}^{m} ( |∂E_j/∂x_j| + |∂E_j/∂y_j| ) )

where P is the multi-scale constraint of the main reference block, m is the number of pixel points in the main reference block, α is a preset weight, D is the spectrum amplitude difference of the main reference block, E_j is the energy value of the j-th pixel point in the main reference block, x_j is the abscissa value of the j-th pixel point, y_j is the ordinate value of the j-th pixel point, ∂E_j/∂x_j and ∂E_j/∂y_j are the partial derivatives of the energy value in the transverse and longitudinal directions, |·| is the absolute value function, and Norm is a linear normalization function.
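This computation can be sketched as follows; `np.gradient` stands in for the transverse/longitudinal partial derivatives of the energy values, and the `x/(x+1)` normalization and default weight are assumptions, not values from the patent:

```python
import numpy as np

def multi_scale_constraint(energy: np.ndarray, spectrum_diff: float, alpha: float = 0.5) -> float:
    # Partial derivatives of the per-pixel energy values along the
    # longitudinal (row) and transverse (column) directions.
    d_dy, d_dx = np.gradient(energy.astype(float))
    grad_term = (np.abs(d_dx) + np.abs(d_dy)).mean()
    # Weighted spectrum-amplitude difference plus the mean absolute
    # partial-derivative term, mapped into [0, 1) as a stand-in for Norm(.).
    raw = alpha * spectrum_diff + grad_term
    return raw / (raw + 1.0)
```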
Further, the obtaining the optimal enhanced image block of the target block according to the multi-scale constraints of all the reference blocks comprises the following specific steps:
and counting the maximum value in the multi-scale constraints of all the reference blocks, and marking the reference block corresponding to the maximum value as the optimal enhanced image block of the target block.
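This selection step amounts to an argmax over the multi-scale constraints; a minimal sketch:

```python
def optimal_enhanced_block(reference_blocks, constraints):
    # The reference block whose multi-scale constraint is the maximum
    # is taken as the optimal enhanced image block of the target block.
    best = max(range(len(constraints)), key=constraints.__getitem__)
    return reference_blocks[best]
```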
Further, the step of obtaining the contour edge line of the part in the optimal enhanced image of the part according to the optimal enhanced image blocks of all the image blocks comprises the following specific steps:
the image formed by the optimal enhancement image blocks of all the image blocks is recorded as a part optimal enhancement image of the part gray level image;
and obtaining the contour edge line of the part in the optimal enhanced image of the part by using the deep neural network.
The technical scheme of the invention has the beneficial effects that:
in the embodiment of the invention, the gray level image of the part is obtained, the gray level image of the part is divided into a plurality of image blocks by using a quadtree algorithm, and the image blocks are adaptively segmented by calculating the homogeneity coefficient of the image blocks, so that the similarity of texture characteristics in each image block is ensured, the selection of subsequent gain parameters is convenient, and the image enhancement effect is improved. And obtaining the multi-scale constraint of each enhanced image block according to the spectrum amplitude difference of each enhanced image block and the partial derivative difference of the energy values of all pixel points in the enhanced image block in the transverse and longitudinal directions, so as to obtain the optimal enhanced image block of each image block. The invention improves the enhancement effect of the image by adaptively dividing the image and adaptively selecting the gain parameter for each image block, thereby improving the accuracy of rapidly dividing the contour of the laser cutting part.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of the method for rapidly dividing the contour of a laser cutting part based on artificial intelligence.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following description refers to the specific implementation, structure, characteristics and effects of the artificial intelligence-based laser cutting part contour rapid segmentation method according to the invention in combination with the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the artificial intelligence-based laser cutting part contour rapid segmentation method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of an artificial intelligence-based method for rapidly dividing a profile of a laser-cut part according to an embodiment of the present invention is shown, the method comprising the steps of:
step S001: collecting a laser cutting part image, and carrying out graying treatment to obtain a part gray image; the gray scale image of the part is divided into image blocks using a quadtree algorithm.
The purpose of this embodiment is to adaptively select the gain parameter of the linear transformation of each image block by means of the multi-scale constraint of the image block and the discrete Fourier transform of the image, so as to obtain a high-quality enhanced image and improve the accuracy of part contour segmentation.
A laser-cut part image is acquired with an industrial camera and converted to grayscale to obtain the part gray level image. Image graying is a known technique, and the specific method is not described here.
It is known that laser-cut part profiles generally exhibit fairly regular geometric shapes, such as rectangles or circles with holes in the surface, the exact shapes depending on the design requirements. The contour edge lines of the part are therefore highly regular, i.e. the part has a distinct geometric shape. Such contour edges correspond to higher-frequency spectral features: the more regular the edge, the larger the difference between its gray values and those of the surrounding pixels, and the larger the corresponding spectral scale of the image after the discrete Fourier transform. Therefore, to make the enhanced result show the edges of the required geometric shape more clearly, this embodiment analyzes the spectral scales of different areas under different linear-transformation gain parameters, so that the gain parameter can be adapted to the scale and the visibility of the geometric edges increased.
According to the distribution of the part contour edges in the part gray level image, different local areas of the image often need different gain parameters to guarantee the enhancement effect of each local area. The finer the division of the local areas, the more appropriate the gain parameters can be and the better the local enhancement effect; the choice of an optimal gain parameter for each area also becomes simpler, because if a divided area has uniform texture features or complexity, its division result is more reasonable, and a suitable gain parameter is more easily selected for it.
And performing first time of cross division by using a quadtree algorithm to obtain four image blocks of the gray level image of the part. The quadtree algorithm is a well-known technique, and a specific method is not described herein.
It should be noted that if the number of rows or columns of the part gray level image is odd, the rows and columns corresponding to half that number rounded down are taken for the image blocking. The subsequent equal-division operations are handled in the same way.
In the four image blocks of the part gray level image, calculating the variance of gray level values of all pixels in each image block, dividing the variance by the number of pixels in each image block, and recording the variance as a variation coefficient of each image block.
Therefore, the calculation formula of the homogeneity coefficient of each image block of the part gray level image quartering is:

H_i = Norm( (n_i × V) / (k_i × σ_i²) )

where H_i is the homogeneity coefficient of the i-th image block of the part gray level image quartering, n_i is the number of pixel points in the i-th image block, σ_i² is the variance of the gray values of all pixel points in the i-th image block, and k_i is the number of cross divisions corresponding to the i-th image block. It should be noted that the cross-division count corresponding to the current four image blocks is 1, that of the new image blocks in the subsequent analysis is 2, and that of the updated image blocks is 3. V is the variance of the variation coefficients of the four image blocks of the part gray level image quartering. Norm is a linear normalization function that maps data values into the interval [0, 1].
It should be noted that the homogeneity coefficient of an image block reflects the similarity of the texture features within it: the larger the coefficient, the more similar the texture features and the less the block needs to be divided again. The quantity n_i/(k_i × σ_i²), where n_i is the number of pixel points, k_i the number of cross divisions and σ_i² the gray-value variance of the i-th image block, is the inverse of the product of the current cross-division count and the variation coefficient of the i-th image block: the larger the variation coefficient, the less stable the distribution of pixel gray values within the block, i.e. the lower the similarity of the gray values, and hence the smaller the homogeneity coefficient. Likewise, the more times a block has been cross-divided, the less similar it is to the surrounding image blocks and the smaller its homogeneity coefficient; therefore, the larger n_i/(k_i × σ_i²) is, the larger the homogeneity coefficient of the i-th image block. V, the variance of the variation coefficients of the four image blocks, reflects the feature differences between the blocks: a larger V indicates larger differences, and since the purpose of image segmentation is to distinguish different texture features, the more reasonable the segmentation, the larger the homogeneity coefficient of each image block. The normalized value of the product of V and n_i/(k_i × σ_i²) therefore represents the homogeneity coefficient of the i-th image block of the part gray level image quartering; the smaller it is, the more the block needs to be cross-divided again.
The preset division threshold value in this embodiment is 0.8, which is described as an example, and other values may be set in other embodiments, which is not limited in this embodiment.
Among the four image blocks equally divided in the gray-scale image of the part, an image block having a homogeneity coefficient of less than 0.8 is referred to as a divided image block.
It should be noted that for image blocks with a homogeneity coefficient greater than or equal to 0.8, the texture features within the block are already similar, and the block does not need to be divided again; for the remaining blocks, whether to divide again is judged as follows.
And performing second cross division by using a quadtree algorithm to obtain four image blocks which are equally divided by each divided image block.
The four image blocks equally divided for each divided image block are denoted as new image blocks.
And obtaining the homogeneous coefficients of the four new image blocks equally divided by each divided image block according to the acquisition mode of the homogeneous coefficients of the four image blocks equally divided by the gray level image of the part.
Among the four new image blocks equally divided among the divided image blocks, a new image block having a homogeneity coefficient of less than 0.8 is noted as a new divided image block.
And performing third time of cross division by using a quadtree algorithm to obtain four image blocks which are equally divided by each new divided image block.
And so on, dividing the gray level image of the part into a plurality of image blocks.
What needs to be described is: the number of the branches of the quadtree algorithm is at most 10 in this embodiment, which is described as an example, and other values may be set in other embodiments, and this embodiment is not limited thereto. The texture features in each image block obtained at this time are similar, so that the acquisition of the subsequent gain coefficients is facilitated, and the similar texture features are respectively enhanced, so that the enhancement effect can be better improved.
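The adaptive quadtree splitting described above can be sketched as follows. The homogeneity score used here is a simplified stand-in for the patent's coefficient (it omits the cross-block variance V), while the threshold, depth limit and odd-size rounding follow the embodiment:

```python
import numpy as np

SPLIT_THRESHOLD = 0.8   # preset segmentation threshold from the embodiment
MAX_SPLITS = 10         # maximum number of cross divisions from the embodiment

def quadtree_split(img: np.ndarray, depth: int = 1, blocks: list = None) -> list:
    if blocks is None:
        blocks = []
    h, w = img.shape[0] // 2, img.shape[1] // 2   # odd sizes rounded down
    if h == 0 or w == 0:
        blocks.append(img)
        return blocks
    for quad in (img[:h, :w], img[:h, w:2*w], img[h:2*h, :w], img[h:2*h, w:2*w]):
        cv = np.var(quad) / quad.size             # variation coefficient
        homogeneity = 1.0 / (1.0 + depth * cv)    # simplified stand-in score
        if homogeneity < SPLIT_THRESHOLD and depth < MAX_SPLITS:
            quadtree_split(quad, depth + 1, blocks)   # split this quadrant again
        else:
            blocks.append(quad)
    return blocks
```

On a uniform image every quadrant scores 1.0 and the recursion stops after the first cross division; a high-variance block keeps splitting until the quadrants become homogeneous, a single pixel, or the depth limit is hit.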
Step S002: marking any image block as a target block; carrying out linear transformation on the target block according to different gain parameters in sequence to obtain an enhanced image block of the target block under different gain parameters; the enhanced image block of the target block under all gain parameters is recorded as a reference block; and performing discrete Fourier transform on each reference block to obtain the frequency spectrum amplitude value and the energy value of each pixel point in each reference block.
Having obtained the image blocking results above, in order to obtain the enhancement result corresponding to each image block, the optimized value of its gain parameter must be determined according to the multi-scale constraint of the discrete Fourier transform results of the different image blocks. That is, in the part gray level image, each image block has an independent optimal gain parameter corresponding to an independent multi-scale constraint of its discrete Fourier transform. The more suitable the gain parameter set within an image block, the larger the multi-scale constraint, i.e. the larger the spectral scale and the corresponding high-frequency components, which are the part contour edges. In this embodiment, the multiple scales can be understood as the amounts of spectral variation reflected by different enhancement results: when the gain parameter is selected more appropriately, the corresponding amount of spectral variation is larger, because this variation reflects the edge features amplified from the original image.
And (3) marking any one image block as a target block in a plurality of image blocks divided by the gray level image of the part.
It is known that the gain parameter and the offset parameter are the main parameters of linear image enhancement: the gain parameter controls the adjustment of image contrast (the larger the gain, the greater the contrast increase), and the offset parameter controls the adjustment of brightness (increasing the offset brightens the entire image). To keep the image brightness unchanged, this embodiment only increases the contrast: the preset minimum gain parameter a_min is 1, the preset maximum gain parameter a_max is 3, the preset increment value c is 0.1, and the preset offset parameter b is 0. These values are described as an example; other values may be set in other embodiments, which is not limited here.
Starting from a_min and increasing by the increment c each time until a_max is reached, a gain parameter sequence is obtained.
Any gain parameter in the gain parameter sequence is recorded as a target gain parameter.
And performing linear transformation on the target block by using an image linear enhancement algorithm with the gain parameter as a target gain parameter and the offset parameter as a preset offset parameter to obtain an enhanced image block of the target block under the target gain parameter.
What needs to be described is: the image linear enhancement algorithm is a known technology, and the specific process of performing linear transformation on the target block is as follows:where a is a target gain parameter, b is a preset offset parameter,gray value for j-th pixel in target block,>and (3) obtaining an enhanced image block of the target block under the target gain parameter by taking the gray value after linear transformation of the j-th pixel point in the target block and m as the number of the pixel points in the target block.
In the above manner, in the gain parameter sequence, the enhanced image block of the target block under each gain parameter is obtained.
The enhanced image block of the target block under all gain parameters is noted as a reference block.
And performing discrete Fourier transform on each reference block to obtain the frequency spectrum amplitude value and the energy value of each pixel point in each reference block.
What needs to be described is: discrete fourier transform is a well known technique, and specific methods are not described here. The discrete Fourier transform can directly obtain the spectrum amplitude of each pixel, and the energy value of each pixel is obtained by the following steps: and performing discrete Fourier transform on the image to obtain a frequency domain representation. The square of the modulus is calculated for the complex value of each pixel of the frequency domain representation, i.e. the real part and the imaginary part of the complex are squared and added separately to obtain the energy value of each pixel, which is known in the art.
Step S003: recording any one reference block as a main reference block; obtaining the spectrum amplitude difference of the main reference block according to the spectrum amplitude difference of the pixel points between the main reference block and the adjacent reference blocks; and obtaining the multi-scale constraint of the main reference block according to the partial derivative difference of the energy values of all the pixel points in the main reference block in the transverse and longitudinal directions and the spectrum amplitude difference of the main reference block.
Any one of the reference blocks is denoted as a main reference block.
In the main reference block, a plane rectangular coordinate system is constructed with the pixel at the lower left corner as the origin, the horizontal rightward direction as the horizontal axis, and the vertical upward direction as the vertical axis.
And in the plane rectangular coordinate system, obtaining the horizontal coordinate value and the vertical coordinate value of each pixel point in the main reference block.
According to the order of the gain parameters in the gain parameter sequence, the reference block immediately following the main reference block is denoted as the sub-reference block.
From this, the calculation formula of the multi-scale constraint P of the main reference block is as follows:
S = Norm(|μ₁ − μ₂|)
P = Norm( ω·S + (1 − ω) · (1/m) · Σ_{j=1}^{m} | |∂E_j/∂x_j| − |∂E_j/∂y_j| | )
wherein P is the multi-scale constraint of the main reference block; m is the number of pixels in the target block, which is also the number of pixels in the main reference block; ω is a preset weight; S is the spectrum amplitude difference of the main reference block; μ₁ is the average spectrum amplitude of all pixels in the main reference block; μ₂ is the average spectrum amplitude of all pixels in the sub-reference block; E_j is the energy value of the j-th pixel in the main reference block; x_j is the abscissa value and y_j the ordinate value of the j-th pixel in the main reference block; ∂E_j/∂x_j and ∂E_j/∂y_j are the partial derivatives of E_j with respect to x_j and y_j; |·| is the absolute value function. The partial derivative is a well-known operation. This embodiment sets ω to 0.3, which is described as an example; other values may be set in other embodiments, and this example is not limited thereto. Norm() is a linear normalization function that maps data values into the interval [0,1].
What needs to be described is:the larger the difference of the frequency spectrum amplitude between the enhanced image blocks under two adjacent gain parameters is, the better the enhancement effect under the current gain parameters is, because when the gain parameters are changed slightly, the larger the frequency spectrum amplitude of the enhanced image blocks is changed, and the ideal enhancement effect can be determined from the whole. In regular geometrical edges, the spectrum has a strong energy distribution in the vertical direction for edges perpendicular to the horizontal axis and in the horizontal direction for edges perpendicular to the vertical axis. Therefore, if the obtained result is close to the deviation of the energy value, the energy distribution in the transverse or longitudinal direction can be determined to be dense, and the edges of the general part structure are in a regular geometrical body in combination with scene characteristics, so that the characteristics of the original edge can be determined by calculating the deviation of the specific direction regardless of the specific direction, and the obtained result is close to the energy value. />And->Respectively representing the comparison of the deflection of the energy value of the jth pixel in the transverse or longitudinal direction, when the deflection is completely deflected in the transverse or longitudinal direction>And->The difference between them is the largest, thus when +.>The larger the energy distribution of the j-th pixel point is, the more the energy distribution is in the horizontal or vertical direction, namely, the scene feature is provided. />For the preset weight, for +.>Andand carrying out weighted summation to obtain multi-scale constraint P representing the main reference block, wherein the larger the P value is, the better the enhancement effect of the main reference block is.
In the above manner, the multi-scale constraint of each reference block is obtained.
What needs to be described is: if the main reference block is the last reference block, the corresponding sub-reference block does not exist, so that the multi-scale constraint of the last reference block is the multi-scale constraint of the penultimate reference block.
Step S004: obtaining an optimal enhanced image block of the target block according to the multi-scale constraints of all the reference blocks; and obtaining the contour edge line of the part in the optimal enhanced image of the part according to the optimal enhanced image blocks of all the image blocks.
And counting the maximum value in the multi-scale constraints of all the reference blocks, and marking the reference block corresponding to the maximum value as the optimal enhanced image block of the target block.
What needs to be described is: if a plurality of maximum values exist, taking a reference block corresponding to the forefront maximum value as the optimal enhanced image block of the target block according to the gain parameter sequence.
And obtaining the optimal enhanced image block of each image block divided by the gray level image of the part according to the mode.
An image formed by the optimal enhanced image blocks of all the image blocks divided from the part gray level image is denoted as the part optimal enhanced image of the part gray level image.
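Reassembling the per-block results into the part optimal enhanced image can be sketched as follows; the dictionary-of-blocks layout keyed by top-left coordinates is an assumption for illustration, since a quadtree division yields blocks of varying sizes:

```python
import numpy as np

def assemble_from_blocks(blocks):
    """Stitch the optimal enhanced blocks back into one image.

    blocks: dict mapping (row0, col0) top-left coordinates to 2-D arrays,
    covering the image without overlap (as a quadtree division does).
    """
    h = max(r + b.shape[0] for (r, c), b in blocks.items())
    w = max(c + b.shape[1] for (r, c), b in blocks.items())
    image = np.zeros((h, w), dtype=np.float64)
    for (r, c), b in blocks.items():
        image[r:r + b.shape[0], c:c + b.shape[1]] = b
    return image
```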
The embodiment of the invention adopts a deep neural network to identify and segment the part contour edge line in the part optimal enhanced image.
The relevant content of the deep neural network is as follows:
the deep neural network used in this embodiment is a DeepLabv3 neural network; the dataset used is the part optimal enhanced image dataset.
The pixels to be segmented are divided into 2 classes, and the labeling process for the training-set labels is as follows: in the single-channel semantic label, pixels at positions belonging to the background class are marked 0, and pixels belonging to the part contour edge line are marked 1.
The task of the network is classification, so the loss function used is a cross entropy loss function.
And obtaining part contour edge lines in the part optimal enhanced image through the deep neural network, thereby completing the rapid segmentation of the laser cut part contour.
This completes the method of the present invention.
In summary, in the embodiment of the present invention, the gray level image of the part is obtained, and the gray level image is divided into a plurality of image blocks by using a quadtree algorithm. And carrying out linear transformation on each image block to obtain an enhanced image block of each image block under different gain parameters, and obtaining the frequency spectrum amplitude value and the energy value of each pixel point in each enhanced image block by using discrete Fourier transformation. And obtaining the multi-scale constraint of each enhanced image block according to the difference of the frequency spectrum amplitude of the pixel points between the adjacent enhanced image blocks and the partial derivative difference of the energy values of all the pixel points in each enhanced image block in the transverse and longitudinal directions. And obtaining the optimal enhanced image block of each image block according to the multi-scale constraint of all the enhanced image blocks, thereby obtaining the contour edge line of the part in the optimal enhanced image of the part. According to the invention, through image self-adaptive segmentation and self-adaptive selection of gain parameters of each image block, the enhancement effect of the image is improved, so that the accuracy of rapid segmentation of the profile of the laser cutting part is improved.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. The laser cutting part contour rapid segmentation method based on artificial intelligence is characterized by comprising the following steps of:
collecting a laser cutting part image, and carrying out graying treatment to obtain a part gray image; dividing the gray level image of the part into a plurality of image blocks by using a quadtree algorithm;
marking any image block as a target block; carrying out linear transformation on the target block according to different gain parameters in sequence to obtain an enhanced image block of the target block under different gain parameters; the enhanced image block of the target block under all gain parameters is recorded as a reference block; performing discrete Fourier transform on each reference block to obtain a spectrum amplitude value and an energy value of each pixel point in each reference block;
recording any one reference block as a main reference block; obtaining the spectrum amplitude difference of the main reference block according to the spectrum amplitude difference of the pixel points between the main reference block and the adjacent reference blocks; obtaining multi-scale constraint of the main reference block according to partial derivative differences of energy values of all pixel points in the main reference block in the transverse direction and the longitudinal direction and spectrum amplitude differences of the main reference block;
obtaining an optimal enhanced image block of the target block according to the multi-scale constraints of all the reference blocks; obtaining part contour edge lines in the part optimal enhanced image according to the optimal enhanced image blocks of all the image blocks;
the multi-scale constraint of the main reference block is obtained according to the partial derivative difference of the energy values of all pixel points in the main reference block in the transverse direction and the longitudinal direction and the spectrum amplitude difference of the main reference block, and the method comprises the following specific steps:
in the main reference block, a plane rectangular coordinate system is constructed by taking the lower left corner as an origin, taking the horizontal right as a transverse axis and taking the vertical upward as a longitudinal axis;
in a plane rectangular coordinate system, obtaining an abscissa value and an ordinate value of each pixel point in the main reference block;
obtaining the multi-scale constraint of the main reference block according to the energy values of all pixel points in the main reference block, their abscissa and ordinate values, the partial derivatives of the energy values with respect to those coordinates, and the spectrum amplitude difference of the main reference block;
the specific calculation formula corresponding to the multi-scale constraint of the main reference block is:
P = Norm( ω·S + (1 − ω) · (1/m) · Σ_{j=1}^{m} | |∂E_j/∂x_j| − |∂E_j/∂y_j| | )
where P is the multi-scale constraint of the main reference block; m is the number of pixels in the main reference block; ω is a preset weight; S is the spectrum amplitude difference of the main reference block; E_j is the energy value of the j-th pixel in the main reference block; x_j is the abscissa value and y_j the ordinate value of the j-th pixel in the main reference block; ∂E_j/∂x_j and ∂E_j/∂y_j are the partial derivatives of E_j with respect to x_j and y_j; |·| is the absolute value function; Norm() is a linear normalization function.
2. The method for quickly dividing the contour of the laser cutting part based on artificial intelligence according to claim 1, wherein the step of dividing the gray level image of the part into a plurality of image blocks by using a quadtree algorithm comprises the following specific steps:
performing the first cross division by using a quadtree algorithm to obtain four image blocks equally dividing the part gray level image;
obtaining a variation coefficient of each image block of the part gray level image according to the number of pixel points and gray level value difference in each image block of the part gray level image;
obtaining the homogeneity coefficient of each image block of the part gray level image equal division according to the number of pixel points in each image block of the part gray level image equal division, the gray level value difference, the frequency of cross division and the variance of the variation coefficients of the four image blocks;
among the four image blocks equally divided in the gray level image of the part, the image block with the homogeneity coefficient smaller than a preset segmentation threshold value is marked as a segmented image block;
performing second cross division by using a quadtree algorithm to obtain four image blocks equally divided by each divided image block; dividing each divided image block into four image blocks equally, and marking the four image blocks as new image blocks;
obtaining the homogeneous coefficients of four new image blocks equally divided by each divided image block according to the acquisition mode of the homogeneous coefficients of the four image blocks equally divided by the gray level image of the part;
recording the new image block with the homogeneity coefficient smaller than the preset segmentation threshold as the new segmented image block;
performing third cross division by using a quadtree algorithm to obtain four image blocks which are equally divided by each newly divided image block; and so on, dividing the gray level image of the part into a plurality of image blocks.
3. The method for rapidly dividing the contour of the laser cutting part based on the artificial intelligence according to claim 2, wherein the obtaining the variation coefficient of each image block of the part gray level image equal division according to the pixel point number and the gray level value difference in each image block of the part gray level image equal division comprises the following specific steps:
and calculating the variance of gray values of all pixels in each image block of the part gray image bisection, dividing the variance of the gray values by the number of pixels in each image block of the part gray image bisection, and recording the variance as the variation coefficient of each image block of the part gray image bisection.
4. The method for rapidly dividing the contour of the laser cutting part based on the artificial intelligence according to claim 2, wherein the homogeneity coefficient of each image block equally divided from the part gray level image is obtained according to the number of pixels in each image block, the gray value difference, the number of cross divisions, and the variance of the variation coefficients of the four image blocks, with the specific calculation formula as follows:
wherein H_i is the homogeneity coefficient of the i-th image block equally divided from the part gray level image; N_i is the number of pixel points in the i-th image block; σ_i² is the variance of the gray values of all pixels in the i-th image block; n_i is the number of cross divisions corresponding to the i-th image block; V is the variance of the variation coefficients of the four image blocks equally divided from the part gray level image; Norm() is a linear normalization function.
5. The method for rapidly dividing the contour of the laser cutting part based on the artificial intelligence according to claim 1, wherein the method sequentially carries out linear transformation on the target block according to different gain parameters to obtain the enhanced image block of the target block under the different gain parameters, comprises the following specific steps:
starting from a_min, increasing by the increment c each time until a_max is reached, to obtain a gain parameter sequence; wherein a_min is a preset minimum gain parameter, a_max is a preset maximum gain parameter, and c is a preset increment value;
recording any gain parameter in the gain parameter sequence as a target gain parameter;
and performing linear transformation on the target block by using an image linear enhancement algorithm with the gain parameter as a target gain parameter and the offset parameter as a preset offset parameter to obtain an enhanced image block of the target block under the target gain parameter.
6. The method for rapidly dividing the contour of the laser cutting part based on the artificial intelligence according to claim 5, wherein the step of obtaining the difference of the frequency spectrum amplitude of the main reference block according to the difference of the frequency spectrum amplitude of the pixel points between the main reference block and the adjacent reference blocks comprises the following specific steps:
according to the sequence of gain parameters in the gain parameter sequence, marking the next reference block corresponding to the main reference block as a sub-reference block;
and respectively calculating the average value of the spectrum amplitude values of all pixel points in the main reference block and the sub-reference block, and recording the normalized value of the difference of the average value as the spectrum amplitude difference of the main reference block.
7. The method for rapidly dividing the contour of the laser cutting part based on the artificial intelligence according to claim 1, wherein the method for obtaining the optimal enhanced image block of the target block according to the multi-scale constraints of all the reference blocks comprises the following specific steps:
and counting the maximum value in the multi-scale constraints of all the reference blocks, and marking the reference block corresponding to the maximum value as the optimal enhanced image block of the target block.
8. The method for rapidly dividing the contour of the part by laser cutting based on artificial intelligence according to claim 1, wherein the step of obtaining the contour edge line of the part in the optimal enhanced image of the part according to the optimal enhanced image blocks of all the image blocks comprises the following specific steps:
the image formed by the optimal enhancement image blocks of all the image blocks is recorded as a part optimal enhancement image of the part gray level image;
and obtaining the contour edge line of the part in the optimal enhanced image of the part by using the deep neural network.
CN202311617964.6A 2023-11-30 2023-11-30 Laser cutting part contour rapid segmentation method based on artificial intelligence Active CN117314940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311617964.6A CN117314940B (en) 2023-11-30 2023-11-30 Laser cutting part contour rapid segmentation method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311617964.6A CN117314940B (en) 2023-11-30 2023-11-30 Laser cutting part contour rapid segmentation method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN117314940A CN117314940A (en) 2023-12-29
CN117314940B true CN117314940B (en) 2024-02-02

Family

ID=89281587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311617964.6A Active CN117314940B (en) 2023-11-30 2023-11-30 Laser cutting part contour rapid segmentation method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN117314940B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557561B (en) * 2024-01-11 2024-03-22 凌源日兴矿业有限公司 Underground roadway wall gap rapid detection method based on artificial intelligence

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106097379A (en) * 2016-07-22 2016-11-09 宁波大学 A kind of distorted image detection using adaptive threshold and localization method
WO2021068330A1 (en) * 2019-10-12 2021-04-15 平安科技(深圳)有限公司 Intelligent image segmentation and classification method and device and computer readable storage medium
CN117094916A (en) * 2023-10-19 2023-11-21 江苏新路德建设有限公司 Visual inspection method for municipal bridge support

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106097379A (en) * 2016-07-22 2016-11-09 宁波大学 A kind of distorted image detection using adaptive threshold and localization method
WO2021068330A1 (en) * 2019-10-12 2021-04-15 平安科技(深圳)有限公司 Intelligent image segmentation and classification method and device and computer readable storage medium
CN117094916A (en) * 2023-10-19 2023-11-21 江苏新路德建设有限公司 Visual inspection method for municipal bridge support

Non-Patent Citations (1)

Title
Laser holographic image processing method based on image segmentation and recombination; Zhou Hong; Li Lihua; Zeng Qiangyu; Laser Journal (Issue 09); full text *

Also Published As

Publication number Publication date
CN117314940A (en) 2023-12-29

Similar Documents

Publication Publication Date Title
CN107784661B (en) Transformer substation equipment infrared image classification and identification method based on region growing method
CN117314940B (en) Laser cutting part contour rapid segmentation method based on artificial intelligence
CN108154519A (en) Dividing method, device and the storage medium of eye fundus image medium vessels
CN104867130B (en) A kind of self-adapting division method based on crack image sub-district gray average
CN110428450B (en) Scale-adaptive target tracking method applied to mine tunnel mobile inspection image
CN109035274A (en) File and picture binary coding method based on background estimating Yu U-shaped convolutional neural networks
CN109726649B (en) Remote sensing image cloud detection method and system and electronic equipment
CN105303561A (en) Image preprocessing grayscale space division method
CN103020953A (en) Segmenting method of fingerprint image
Olugbara et al. Pixel intensity clustering algorithm for multilevel image segmentation
CN116188468B (en) HDMI cable transmission letter sorting intelligent control system
CN108510478B (en) Lung airway image segmentation method, terminal and storage medium
CN112508963A (en) SAR image segmentation method based on fuzzy C-means clustering
CN116385438A (en) Nuclear magnetic resonance tumor region extraction method
CN112070717A (en) Power transmission line icing thickness detection method based on image processing
CN112102217A (en) Method and system for quickly fusing visible light image and infrared image
CN104268845A (en) Self-adaptive double local reinforcement method of extreme-value temperature difference short wave infrared image
CN107564008A (en) Rapid SAR image segmentation method based on crucial pixel fuzzy clustering
CN111199228B (en) License plate positioning method and device
CN109829511B (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image
CN110349119B (en) Pavement disease detection method and device based on edge detection neural network
CN113223098B (en) Preprocessing optimization method for image color classification
CN105701807B (en) A kind of image partition method based on temporal voting strategy
CN110348452B (en) Image binarization processing method and system
CN103824279A (en) Image segmentation method based on organizational evolutionary cluster algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant