CN117798522A - Accurate positioning method for laser cutting head based on machine vision - Google Patents

Accurate positioning method for laser cutting head based on machine vision

Info

Publication number
CN117798522A
Authority
CN
China
Prior art keywords
cutting head
laser cutting
dimensional
stereoscopic
positioning method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410231454.3A
Other languages
Chinese (zh)
Other versions
CN117798522B (en)
Inventor
石中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ospri Intelligent Technology Co ltd
Original Assignee
Shenzhen Ospri Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ospri Intelligent Technology Co ltd filed Critical Shenzhen Ospri Intelligent Technology Co ltd
Priority to CN202410231454.3A priority Critical patent/CN117798522B/en
Priority claimed from CN202410231454.3A external-priority patent/CN117798522B/en
Publication of CN117798522A publication Critical patent/CN117798522A/en
Application granted granted Critical
Publication of CN117798522B publication Critical patent/CN117798522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention relates to the field of image data processing, and in particular to a machine vision-based method for accurately positioning a laser cutting head. The method comprises the following steps: capturing three-dimensional image data of the working surface with a depth camera and a stereoscopic imaging device, constructing a stereoscopic vision enhanced convolutional network, and generating a three-dimensional surface model; transmitting the three-dimensional surface model data to a dynamic adjustment strategy, strengthening the dynamic features of the stereoscopic image, and designing a feedback control model that converts the dynamic features extracted from the stereoscopic image into motion control instructions for the laser cutting head. The method addresses the problems of the prior art: limited three-dimensional modeling capability; insufficient positioning accuracy and adaptability; inefficient dynamic adjustment strategies; and shortcomings in cutting efficiency, quality, and robustness. In the prior art, precise positioning of the cutting head is difficult to maintain on irregular or complex working surfaces, an effective mechanism for processing stereoscopic vision information and adjusting dynamically is lacking, and the adjustment strategy responds slowly without effective real-time feedback.

Description

Accurate positioning method for laser cutting head based on machine vision
Technical Field
The invention relates to the field of image data processing, in particular to a machine vision-based accurate positioning method for a laser cutting head.
Background
Laser cutting is widely used as a precision machining technology in the field of industrial manufacturing. It uses a high-energy laser beam to cut and engrave workpieces and is applicable to a wide range of materials, including metal, plastic, wood, and cloth. As the manufacturing industry continuously pursues precision and efficiency, ever higher requirements are placed on the positioning accuracy and adaptability of laser cutting technology.
In conventional laser cutting techniques, the positioning of the cutting head depends largely on two-dimensional image processing, which is limited when handling complex, irregular, or dynamically changing work surfaces. Particularly in high-speed cutting, responses to dynamic changes are neither rapid nor accurate enough, reducing cutting quality and efficiency.
Chinese patent application No. CN201710796682.5 (publication date: 2017.11.21) discloses a follow-up scanning positioning device and method for positioning a laser cutting head, comprising a cutting head positioning system, a laser cutting head device, and a binocular vision position measuring system. A mounting frame is arranged at the end of the cutting head positioning system; the laser cutting head device is fixedly mounted on the end of the cutting head positioning system through the mounting frame; a connecting frame is arranged on the side of the mounting frame; and the binocular vision position measuring system is fixedly mounted on the side of the laser cutting head device through the connecting frame. That invention uses the binocular vision position measuring system to automatically obtain the position and posture, relative to the measuring system, of a system of workpiece space mark points representing the workpiece position; by superposing and resolving the two relative position-and-posture relationships, the machine tool control system can indirectly obtain the position and posture of the laser axis of the cutting head relative to the workpiece.
However, the above technology has at least the following problems: limited three-dimensional modeling capability; insufficient positioning accuracy and adaptability; inefficient dynamic adjustment strategies; and shortcomings in cutting efficiency, quality, and robustness. Precision is insufficient on irregular or complex working surfaces, making it difficult to maintain accurate positioning of the cutting head, and an effective mechanism for processing stereoscopic vision information and adjusting dynamically is lacking. The adjustment strategy is generally slow to respond and lacks effective real-time feedback, which reduces cutting efficiency and impairs the consistency and accuracy of cutting results.
Disclosure of Invention
The invention provides a machine vision-based method for accurately positioning a laser cutting head, which solves the problems of the prior art: limited three-dimensional modeling capability; insufficient positioning accuracy and adaptability; inefficient dynamic adjustment strategies; and shortcomings in cutting efficiency, quality, and robustness. In the prior art, precision is insufficient on irregular or complex working surfaces, making it difficult to maintain accurate positioning of the cutting head; an effective mechanism for processing stereoscopic vision information and adjusting dynamically is lacking; and the adjustment strategy is generally slow to respond, lacks effective real-time feedback, reduces cutting efficiency, and impairs the consistency and accuracy of cutting results. The invention achieves high-precision three-dimensional modeling and accurate positioning of the laser cutting head through a stereoscopic vision enhanced convolutional network and a dynamic adjustment strategy.
The invention discloses a machine vision-based accurate positioning method for a laser cutting head, which specifically comprises the following technical scheme:
the accurate positioning method of the laser cutting head based on machine vision comprises the following steps:
s1, capturing three-dimensional image data of a working surface, constructing a stereoscopic vision enhancement type convolution network, and generating a three-dimensional surface model;
S2, transmitting the three-dimensional surface model data to a dynamic adjustment strategy, strengthening the dynamic features of the stereoscopic image, and designing a feedback control model to adjust the position of the laser cutting head in real time.
Preferably, the S1 specifically includes:
capturing three-dimensional image data of a working surface by a depth camera and a stereoscopic imaging device, wherein the working surface is the surface of an object to be subjected to laser cutting; the stereoscopic vision enhancement type convolution network comprises an input layer, a convolution layer, a feature fusion layer and an output layer.
Preferably, the S1 further includes:
receiving three-dimensional image data through an input layer; the convolution layer is used for extracting depth information of the stereoscopic image by constructing a stereoscopic vision feature extraction layer; the three-dimensional surface model is output through the output layer for positioning the laser cutting head.
Preferably, the S1 further includes:
in the process of extracting the stereoscopic vision characteristics, each neuron of the convolution layer processes pixel values of corresponding positions in the left view and the right view of the stereoscopic image, so that parallax information is obtained, and characteristic response is obtained.
Preferably, the S1 further includes:
after obtaining the characteristic response, predicting a depth map; depth information is extracted and integrated from the feature response based on the feature response of each pixel and the average response of the pixel neighborhood.
Preferably, the S1 further includes:
the stereoscopic enhancement convolutional network is optimized with the objective of minimizing the difference between the predicted depth map and the true depth map.
Preferably, the S1 further includes:
the depth information comprises the depth of each pixel relative to the depth camera; using the intrinsic parameters of the depth camera, including the focal length and principal point coordinates, the depth information is converted into three-dimensional space coordinates, which are processed and optimized to finally obtain the three-dimensional surface model.
Preferably, the S2 specifically includes:
an enhancement formula based on time sequence difference is designed, and the dynamic characteristics of the stereoscopic image are enhanced by comparing pixel intensity differences between continuous frames.
Preferably, the S2 further includes:
in the process of realizing the feedback control model, dynamic characteristics extracted from the stereoscopic image are converted into motion control instructions of the laser cutting head.
The technical scheme of the invention has the following beneficial effects:
1. By capturing three-dimensional image data with a depth camera and a stereoscopic imaging device and processing it with a stereoscopic vision enhanced convolutional network, the method models the working surface in three dimensions, provides an accurate reference for the laser cutting head when machining complex or irregular surfaces, and greatly improves cutting precision. By combining the stereoscopic vision feature extraction layer with the feature fusion layer, the method strengthens its understanding of the spatial information of the working surface; in a dynamic or unstable cutting environment, this improves the accuracy of cutting head positioning and enhances its adaptability to changing conditions.
2. The time-series difference enhancement formula and the feedback control model effectively capture dynamic changes of the working surface and convert them into motion control instructions for the cutting head; the dynamic adjustment strategy ensures the accuracy and stability of the cutting head position in a rapidly changing cutting environment. Through an automated and precise control system, the method significantly improves the efficiency and quality of laser cutting, reduces material waste, and improves the quality of finished products, which is extremely important in the field of precision manufacturing.
Drawings
Fig. 1 is a flowchart of a method for accurately positioning a laser cutting head based on machine vision according to an embodiment of the present invention.
Detailed Description
In order to further illustrate the technical means and effects adopted by the present invention to achieve the preset purpose, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the accurate positioning method of the laser cutting head based on machine vision provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a method for accurately positioning a laser cutting head based on machine vision according to an embodiment of the invention is shown, the method comprises the following steps:
s1, capturing three-dimensional image data of a working surface, constructing a stereoscopic vision enhancement type convolution network, and generating a three-dimensional surface model;
three-dimensional image data of a working surface, which is an object surface to be subjected to laser cutting, is captured by a depth camera and a stereoscopic imaging device. And constructing a stereoscopic vision enhancement type convolution network, and realizing three-dimensional modeling of the working surface by combining a stereoscopic vision technology. The three-dimensional image data of the working surface is transmitted to a stereoscopic vision enhanced convolution network for processing after standardized processing, and a three-dimensional surface model is generated, so that accurate reference is provided for positioning of the laser cutting head.
Specifically, the input layer of the stereoscopic vision enhanced convolutional network receives the three-dimensional image data from the depth camera and the stereoscopic imaging device. The network comprises an input layer, a convolution layer, a feature fusion layer, and an output layer; its distinguishing feature is an adjusted convolution layer structure in which a stereoscopic vision feature extraction layer is added to a conventional convolutional network to extract depth information from the stereoscopic image. This strengthens the network's spatial understanding of the working surface when complex or irregular surfaces are machined. To fuse the stereoscopic vision features with the conventional features effectively, a feature fusion layer is designed; its fusion algorithm improves the accuracy and robustness of the depth information. The fusion algorithm is prior art and can be chosen according to the specific implementation, which is not limited here. Finally, the network outputs a three-dimensional surface model for accurate positioning of the laser cutting head.
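For illustration, a minimal Python (PyTorch) sketch of such a four-part network is given below. The layer widths, kernel sizes, and the way the stereo branch consumes the left/right view difference are assumptions chosen for demonstration, not the network dimensions claimed by the invention.

```python
import torch
import torch.nn as nn

class StereoVisionEnhancedCNN(nn.Module):
    """Sketch of the four-part network: input, convolution (with a
    stereo-vision feature extraction layer), feature fusion, output."""

    def __init__(self):
        super().__init__()
        # Stereo-vision feature extraction: operates on the left/right
        # view difference to expose disparity-related structure.
        self.stereo_feat = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Conventional feature branch on the concatenated views.
        self.conv_feat = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Feature fusion layer: merges stereo and conventional features.
        self.fusion = nn.Sequential(
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Output layer: one depth value per pixel.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        stereo = self.stereo_feat(left - right)            # disparity cue
        conv = self.conv_feat(torch.cat([left, right], dim=1))
        fused = self.fusion(torch.cat([stereo, conv], dim=1))
        return self.head(fused)                            # predicted depth map

# Usage: depth = StereoVisionEnhancedCNN()(left_view, right_view)
# with left_view / right_view shaped (batch, 1, H, W).
```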
In the stereoscopic feature extraction step, the convolution layer structure is designed to extract depth information from the stereoscopic image, providing richer spatial information and a better understanding of object shape and surface detail. Each neuron of the convolution layer processes the pixel values at corresponding positions in the left and right views of the stereoscopic image, thereby acquiring parallax information. Specifically, the entire image is traversed with a sliding window; within each window, the differences between the left and right views are computed and summed with weight parameters, and an activation function (e.g., ReLU) is then applied to introduce non-linearity. This process forms the core formula for feature extraction:
$$F(i,j)=\sigma\left(\sum_{m=-k}^{k}\sum_{n=-k}^{k} w(m,n)\,\big[I_L(i+m,\,j+n)-I_R(i+m,\,j+n)\big]\right)$$

wherein $F(i,j)$ represents the feature response at position $(i,j)$; $w(m,n)$ is the weight at kernel position $(m,n)$; $\sigma$ is the activation function; $m$ and $n$ are loop variables used to traverse the pixels around the convolution kernel center; $k$ is the size of the convolution kernel, representing the extent of the pixel neighborhood covered by the convolution operation; and $I_L(i+m,\,j+n)$ and $I_R(i+m,\,j+n)$ are the intensity values of the left and right views of the stereoscopic image, respectively, at pixel position $(i+m,\,j+n)$.
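A direct, if unoptimized, NumPy rendering of this feature-extraction formula might look as follows; the function name and the assumption that `w` is a (2k+1) x (2k+1) weight matrix are illustrative.

```python
import numpy as np

def stereo_feature_response(I_L, I_R, w, k):
    """Sketch of the feature-response formula: slide a (2k+1)x(2k+1)
    window over the left/right intensity difference, take the weighted
    sum with w, and apply ReLU as the activation function."""
    H, W = I_L.shape
    F = np.zeros((H, W))
    diff = I_L.astype(float) - I_R.astype(float)   # per-pixel L/R difference
    for i in range(k, H - k):
        for j in range(k, W - k):
            window = diff[i - k:i + k + 1, j - k:j + k + 1]
            F[i, j] = max(0.0, float(np.sum(w * window)))  # ReLU
    return F
```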
After the feature response is obtained, an accurate depth map is calculated. The depth information is calculated by comparing disparities of the left and right views. In order to extract and synthesize accurate depth information from the feature responses, the feature response of each pixel and the average response of its neighborhood are considered to improve the robustness of the depth estimation. Depth information is calculated by the following formula:
$$D(i,j)=\alpha\,F(i,j)+\beta\,\frac{1}{\lvert N(i,j)\rvert}\sum_{(p,q)\in N(i,j)} F(p,q)$$

wherein $D(i,j)$ is the depth information calculated at position $(i,j)$; $\alpha$ and $\beta$ are parameters for adjusting the depth scale; $F(i,j)$ is the value of the feature response map at position $(i,j)$; and $(p,q)$ are loop variables used to traverse the pixel neighborhood $N(i,j)$ of the feature response.
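The neighborhood-averaged depth formula can be sketched as follows; `alpha`, `beta`, and the neighborhood radius `r` are illustrative assumptions for the depth-scale parameters.

```python
import numpy as np

def depth_from_response(F, alpha=1.0, beta=0.5, r=1):
    """Sketch of the depth formula: each pixel's feature response plus
    the mean response of its (2r+1)x(2r+1) neighborhood, scaled by the
    illustrative parameters alpha and beta."""
    H, W = F.shape
    D = np.zeros_like(F)
    for i in range(H):
        for j in range(W):
            nb = F[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            D[i, j] = alpha * F[i, j] + beta * nb.mean()
    return D
```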
The network is optimized through a training process to achieve the most accurate depth estimation. The optimization objective is to minimize the difference between the predicted depth map and the true depth map, using the mean squared error as the loss function. This is realized with gradient descent and its variants, such as stochastic gradient descent (SGD) or the Adam optimizer; training iterates continuously, adjusting the weight parameters in the network until the loss function is minimized. The loss function is defined as follows:
$$L=\frac{1}{N}\sum_{i=1}^{N}\big(D_i-D_i^{*}\big)^2$$

wherein $L$ is the loss function, $N$ is the total number of pixels, $D_i$ is the predicted depth at pixel $i$, and $D_i^{*}$ is the corresponding value of the true depth map.
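A hypothetical training step minimizing this mean-squared-error loss with the Adam optimizer, reusing the `StereoVisionEnhancedCNN` sketch from above, could look like this; the learning rate is an illustrative assumption.

```python
import torch
import torch.nn as nn

model = StereoVisionEnhancedCNN()   # sketch network defined earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()              # L = (1/N) * sum_i (D_i - D_i*)^2

def train_step(left, right, true_depth):
    """One gradient-descent iteration toward the true depth map."""
    optimizer.zero_grad()
    pred_depth = model(left, right)          # predicted depth map
    loss = loss_fn(pred_depth, true_depth)   # mean squared error
    loss.backward()                          # back-propagate gradients
    optimizer.step()                         # adjust weight parameters
    return loss.item()
```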
The depth map $D$ contains the depth of each pixel relative to the depth camera. Using the camera's intrinsic parameters, such as the focal length and principal point coordinates, this depth information is converted into three-dimensional space coordinates. For each pixel $(u,v)$ in the depth map and its corresponding depth value $d$, the three-dimensional coordinates $(X,Y,Z)$ can be calculated by:

$$X=\frac{(u-c_x)\,d}{f},\qquad Y=\frac{(v-c_y)\,d}{f},\qquad Z=d$$

wherein $(c_x,c_y)$ are the principal point coordinates and $f$ is the focal length. The three-dimensional point cloud obtained from the depth information is then processed and optimized to improve the quality of the three-dimensional surface model, including noise reduction, point cloud registration, and mesh generation. The noise reduction process may apply various filtering techniques, such as statistical filtering or Gaussian filtering, to remove or reduce noise; point cloud registration may be achieved by the Iterative Closest Point (ICP) algorithm; mesh generation converts the point cloud data into a mesh model, with common methods including Poisson reconstruction and Delaunay triangulation. The optimized point cloud data is thereby converted into the three-dimensional surface model.
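The back-projection formula lends itself to a vectorized sketch; separate focal lengths `fx`/`fy` are assumed for generality and reduce to the single `f` above when equal. The downstream point cloud steps (statistical filtering, ICP registration, Poisson reconstruction) are available in open-source libraries such as Open3D.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Convert an H x W depth map into an N x 3 camera-frame point
    cloud via the pinhole model: X=(u-cx)d/fx, Y=(v-cy)d/fy, Z=d."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
```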
S2, transmitting the three-dimensional surface model data to a dynamic adjustment strategy, strengthening the dynamic features of the stereoscopic image, and designing a feedback control model to adjust the position of the laser cutting head in real time.
The three-dimensional surface model provides detailed geometric information for the initial positioning of the laser cutting head, and its data is transmitted to the dynamic adjustment strategy. Under this strategy, a real-time monitoring device (such as a camera) continuously tracks the state of the working surface; combining this real-time data with the previously generated three-dimensional surface model makes it possible to predict changes during the cutting process and adjust the position of the cutting head accordingly.
Specifically, to effectively capture the dynamic changes of the working surface, an enhancement formula based on time-series difference is designed, which strengthens the dynamic features of the image by comparing pixel intensity differences between consecutive frames. The formula is as follows:
$$E(x,y,t)=\lambda\sum_{(p,q)\in\Omega_t} G_1(p,q)\,\big[I(x+p,\,y+q,\,t)-I(x+p,\,y+q,\,t-\Delta t)\big]+(1-\lambda)\sum_{(r,s)\in\Omega_s} G_2(r,s)\,I(x+r,\,y+s,\,t)$$

wherein $E(x,y,t)$ is the enhanced dynamic feature at time $t$ and position $(x,y)$. The first part of the formula is the time-difference term, used to capture the time-dependent change of the image, where $\lambda$ is the coefficient balancing temporal and spatial features, $G_1$ is a Gaussian filter applied to the time-difference term to emphasize the features around the center pixel, and $(p,q)$ indexes the elements of the time weight matrix $\Omega_t$. The second part is the spatial feature term, used to extract the spatial details of the current frame, where $G_2$ is a Gaussian filter applied to the spatial feature term and $(r,s)$ indexes the elements of the spatial weight matrix $\Omega_s$. $I(x,y,t)$ and $I(x,y,t-\Delta t)$ are the pixel intensities at position $(x,y)$ at times $t$ and $t-\Delta t$, and $\Delta t$ is the time-difference interval used to compare image frames at different points in time.
The two Gaussian filters $G_1$ and $G_2$ are based on the spatial correlation characteristics of the image and emphasize image features on different spatial scales, thereby capturing detail changes of different sizes more effectively. They are determined by:

$$G_1(p,q)=\frac{1}{2\pi\sigma_1^{2}}\,e^{-\frac{p^{2}+q^{2}}{2\sigma_1^{2}}},\qquad G_2(r,s)=\frac{1}{2\pi\sigma_2^{2}}\,e^{-\frac{r^{2}+s^{2}}{2\sigma_2^{2}}}$$

wherein $\sigma_1$ and $\sigma_2$ are the standard deviations of the two Gaussian filter functions and determine the spatial extent of each filter.
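Taken together, the enhancement formula and the two Gaussian filters can be sketched as follows; the absolute temporal difference, the normalized kernels, the balance coefficient `lam`, and the filter parameters are illustrative assumptions. `frame_t` and `frame_prev` are float grayscale frames at times $t$ and $t-\Delta t$.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(radius, sigma):
    """(2*radius+1)^2 Gaussian weight matrix, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def enhance_dynamic_features(frame_t, frame_prev, lam=0.6,
                             sigma1=1.0, sigma2=2.0, radius=2):
    """Sketch of the time-series difference enhancement: a Gaussian-
    weighted temporal-difference term balanced against a Gaussian-
    weighted spatial term on the current frame by the coefficient lam."""
    G1 = gaussian_kernel(radius, sigma1)   # time weight matrix
    G2 = gaussian_kernel(radius, sigma2)   # spatial weight matrix
    # Absolute difference is used here so that both brightening and
    # darkening changes register as motion (an assumption).
    temporal = convolve(np.abs(frame_t - frame_prev), G1, mode='nearest')
    spatial = convolve(frame_t, G2, mode='nearest')
    return lam * temporal + (1.0 - lam) * spatial
```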
On the basis of the dynamic feature extraction, a feedback control model is designed to adjust the position of the laser cutting head in real time. The dynamic features extracted from the image are translated into motion control instructions for the cutting head so that it maintains accurate and stable operation in a rapidly changing cutting environment. The main formula of the control model is as follows:
$$P_{\mathrm{adj}}=P_{\mathrm{cur}}+\mu_1\Big(K_p\,e(t)+K_i\int_{0}^{t}e(\tau)\,d\tau+K_d\,\frac{de(t)}{dt}\Big)+\mu_2\,f\big(e,\dot{e}\big),\qquad e(t)=P_{\mathrm{target}}-P_{\mathrm{cur}}$$

wherein $P_{\mathrm{adj}}$ represents the adjusted cutting head position; $P_{\mathrm{cur}}$ is the current position; $e(t)$ is the position error; $K_p$, $K_i$, and $K_d$ are the PID gains; $f(e,\dot{e})$ is a fuzzy logic function that adjusts the control output based on the fuzzified values of the error $e$ and the error rate $\dot{e}$, enhancing adaptability to complex environments; $P_{\mathrm{target}}$ is the target position of the cutting head, determined by the cutting path planning or the workpiece geometry; and $\mu_1$ and $\mu_2$ are factors that trade off the effects of the position difference and the dynamic characteristics.
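A minimal sketch of such a PID-plus-fuzzy feedback loop is shown below; the gains, the weighting factors `mu1`/`mu2`, and the `tanh`-based stand-in for the fuzzy logic function are illustrative assumptions, not the patented control law.

```python
import math

class CuttingHeadController:
    """Sketch of the feedback control law: PID on the position error
    plus a fuzzy correction term f(e, de/dt), weighted by mu1/mu2."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, mu1=1.0, mu2=0.3, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mu1, self.mu2 = mu1, mu2
        self.dt = dt                     # control cycle period in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def fuzzy_term(self, e, e_rate):
        # Toy stand-in for the fuzzy logic function f(e, e_dot):
        # saturates the combined error signal so large disturbances
        # produce a bounded corrective output.
        return math.tanh(e + 0.5 * e_rate)

    def update(self, current_pos, target_pos):
        e = target_pos - current_pos                # position error e(t)
        self.integral += e * self.dt                # integral of e
        e_rate = (e - self.prev_error) / self.dt    # derivative of e
        self.prev_error = e
        pid = self.kp * e + self.ki * self.integral + self.kd * e_rate
        return current_pos + self.mu1 * pid + self.mu2 * self.fuzzy_term(e, e_rate)

# Usage: each control cycle, feed the measured position and the target
# from the cutting path; the return value is the adjusted head position.
```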
In summary, the accurate positioning method of the laser cutting head based on machine vision is completed.
The sequence of the embodiments of the invention is merely for description and does not represent the advantages or disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and the same or similar parts of each embodiment are referred to each other, and each embodiment mainly describes differences from other embodiments.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (9)

1. The accurate positioning method of the laser cutting head based on machine vision is characterized by comprising the following steps of:
s1, capturing three-dimensional image data of a working surface, constructing a stereoscopic vision enhancement type convolution network, and generating a three-dimensional surface model;
S2, transmitting the three-dimensional surface model data to a dynamic adjustment strategy, strengthening the dynamic features of the stereoscopic image, and designing a feedback control model to adjust the position of the laser cutting head in real time.
2. The machine vision-based laser cutting head precise positioning method according to claim 1, wherein S1 specifically comprises:
capturing three-dimensional image data of a working surface by a depth camera and a stereoscopic imaging device, wherein the working surface is the surface of an object to be subjected to laser cutting; the stereoscopic vision enhancement type convolution network comprises an input layer, a convolution layer, a feature fusion layer and an output layer.
3. The machine vision-based laser cutting head precise positioning method of claim 2, wherein S1 further comprises:
receiving three-dimensional image data through an input layer; the convolution layer is used for extracting depth information of the stereoscopic image by constructing a stereoscopic vision feature extraction layer; the three-dimensional surface model is output through the output layer for positioning the laser cutting head.
4. The machine vision based laser cutting head precise positioning method of claim 3, wherein S1 further comprises:
in the process of extracting the stereoscopic vision characteristics, each neuron of the convolution layer processes pixel values of corresponding positions in the left view and the right view of the stereoscopic image, so that parallax information is obtained, and characteristic response is obtained.
5. The machine vision-based laser cutting head precise positioning method of claim 4, wherein S1 further comprises:
after obtaining the characteristic response, predicting a depth map; depth information is extracted and integrated from the feature response based on the feature response of each pixel and the average response of the pixel neighborhood.
6. The machine vision-based laser cutting head precise positioning method of claim 5, wherein S1 further comprises:
the stereoscopic enhancement convolutional network is optimized with the objective of minimizing the difference between the predicted depth map and the true depth map.
7. The machine vision-based laser cutting head precise positioning method of claim 5, wherein S1 further comprises:
the depth information comprises the depth of each pixel relative to the depth camera; using the intrinsic parameters of the depth camera, including the focal length and principal point coordinates, the depth information is converted into three-dimensional space coordinates, which are processed and optimized to finally obtain the three-dimensional surface model.
8. The machine vision-based laser cutting head precise positioning method according to claim 1, wherein S2 specifically comprises:
an enhancement formula based on time sequence difference is designed, and the dynamic characteristics of the stereoscopic image are enhanced by comparing pixel intensity differences between continuous frames.
9. The machine vision-based laser cutting head precise positioning method of claim 1, wherein S2 further comprises:
in the process of realizing the feedback control model, dynamic characteristics extracted from the stereoscopic image are converted into motion control instructions of the laser cutting head.
CN202410231454.3A 2024-03-01 Accurate positioning method for laser cutting head based on machine vision Active CN117798522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410231454.3A CN117798522B (en) 2024-03-01 Accurate positioning method for laser cutting head based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410231454.3A CN117798522B (en) 2024-03-01 Accurate positioning method for laser cutting head based on machine vision

Publications (2)

Publication Number Publication Date
CN117798522A 2024-04-02
CN117798522B (en) 2024-05-17


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2998505A1 (en) * 2012-11-26 2014-05-30 Akeo Plus METHOD AND SYSTEM FOR MARKING A SURFACE BY LASER PROCESSING
CN104259669A (en) * 2014-09-11 2015-01-07 苏州菲镭泰克激光技术有限公司 Precise three-dimensional curved surface laser marking method
CN104551411A (en) * 2014-11-18 2015-04-29 南京大学 Calibration method of laser galvanometer processing system under guidance of binocular stereoscopic vision
CN111299761A (en) * 2020-02-28 2020-06-19 华南理工大学 Real-time attitude estimation method of welding seam tracking system
CN111299762A (en) * 2020-02-28 2020-06-19 华南理工大学 Laser real-time weld joint tracking method for separating strong noise interference
CN115709331A (en) * 2022-11-23 2023-02-24 山东大学 Welding robot full-autonomous visual guidance method and system based on target detection
DE102022203359A1 (en) * 2021-11-10 2023-05-11 Robert Bosch Gesellschaft mit beschränkter Haftung Monitoring device, in particular for a laser material processing system, laser material processing system, method and computer program
CN116475563A (en) * 2023-06-06 2023-07-25 厦门大学 Deep learning three-dimensional weld tracking method and device
CN117583751A (en) * 2023-12-29 2024-02-23 武汉威士登自动化控制技术有限公司 H-shaped steel laser cutting deformation compensation algorithm


Similar Documents

Publication Publication Date Title
CN111089569B (en) Large box body measuring method based on monocular vision
CN111673235B (en) Robot arc 3D printing layer height regulating and controlling method and system
CN113385486B (en) Automatic laser cleaning path generation system and method based on line structured light
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN113920060A (en) Autonomous operation method and device for welding robot, electronic device, and storage medium
CN111179321B (en) Point cloud registration method based on template matching
Hou et al. A teaching-free welding method based on laser visual sensing system in robotic GMAW
CN110425996A (en) Workpiece size measurement method based on binocular stereo vision
CN109483887B (en) Online detection method for contour accuracy of forming layer in selective laser melting process
CN112348864A (en) Three-dimensional point cloud automatic registration method for laser contour features of fusion line
CN111940891A (en) Focusing method, system, equipment and storage medium of fiber laser cutting head
Liu et al. Real-time 3D surface measurement in additive manufacturing using deep learning
Zhang et al. Development of an AR system achieving in situ machining simulation on a 3‐axis CNC machine
CN117798522B (en) Accurate positioning method for laser cutting head based on machine vision
Ben et al. Research on visual orientation guidance of industrial robot based on cad model under binocular vision
CN114750154A (en) Dynamic target identification, positioning and grabbing method for distribution network live working robot
CN117798522A (en) Accurate positioning method for laser cutting head based on machine vision
CN112387982B (en) Laser additive process power combined regulation and control method
CN114310883A (en) Mechanical arm autonomous assembling method based on multiple knowledge bases
CN116604212A (en) Robot weld joint identification method and system based on area array structured light
CN115770988A (en) Intelligent welding robot teaching method based on point cloud environment understanding
CN115601357A (en) Stamping part surface defect detection method based on small sample
Liu et al. Fractional‐order PID servo control based on decoupled visual model
CN111985308B (en) Stereoscopic vision space curve matching method of differential geometry
Li et al. In situ three-dimensional laser machining system integrating in situ measurement, reconstruction, parameterization, and texture mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant