CN117103266A - Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation - Google Patents

Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation

Info

Publication number
CN117103266A
Authority
CN
China
Prior art keywords
point cloud
semantic segmentation
cloud image
edge milling
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311145421.9A
Other languages
Chinese (zh)
Inventor
王偲佳
吴楠
曲星宇
张海宁
刘宁
叶玉玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shengong Technology Co ltd
Original Assignee
Beijing Shengong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shengong Technology Co ltd filed Critical Beijing Shengong Technology Co ltd
Priority to CN202311145421.9A priority Critical patent/CN117103266A/en
Publication of CN117103266A publication Critical patent/CN117103266A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B23 MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23C MILLING
    • B23C3/00 Milling particular work; Special milling operations; Machines therefor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0014 Image feed-back for automatic industrial control, e.g. robot with camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06 Recognition of objects for industrial automation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an aircraft skin feature recognition and edge milling path planning method based on semantic segmentation. The method acquires an original point cloud image with a line laser scanner and preprocesses it; performs semantic segmentation on the point cloud image with a PointNet model, obtains the reference piece point cloud image, and extracts features from it; calculates the point cloud attitude based on the time series; and plans the edge milling path based on the feature points. Experiments prove that, compared with traditional methods, the method places fewer limits on the sensor scanning angle. Because line point clouds are smaller in data volume and simpler in geometric profile than planar point clouds, the method has lower computation cost and lower software and hardware requirements, completes high-precision edge milling while saving cost, and its computation speed can meet the 12 ms communication interval of the KUKA robot's RSI; it therefore has broad application prospects in the field of edge milling path planning.

Description

Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation
Technical Field
The invention belongs to the technical field of aircraft skin edge milling, and particularly relates to an aircraft skin feature recognition and edge milling path planning method and system based on semantic segmentation.
Background
Common aircraft skin edge milling systems analyze and calculate from a mathematical model or actually collected data to generate a path with position and attitude information that guides the robot's machining. The invention patent "An automatic high-precision aircraft skin edge milling device and its edge milling method", filed by Northwestern Polytechnical University in 2015, provides a solution for point alignment, normal leveling, flexible clamping, and high-precision trajectory planning of the region to be milled, improving the dimensional and normal accuracy of skin edge milling. The invention patent "A robot trimming system", filed by Shenfei Civil Aircraft in 2019, uses a contour scanning device and a depth measuring device for real-time measurement and calculation, improving the positioning accuracy of the robot through real-time feedback.
However, in today's aircraft assembly industry, although the demand for automatic trimming within the skin frame is very strong, no mature product has been successfully applied either in China or abroad.
Disclosure of Invention
To meet the technical requirements of automatic trimming within a skin frame, and to overcome the defects of manual trimming (low precision, heavy workload, long processing time, low efficiency, and high operating hazard), the invention provides a novel aircraft skin feature recognition and edge milling path planning method based on semantic segmentation.
Interpretation of the terms
Aircraft skin feature recognition and edge milling path planning based on semantic segmentation: semantic segmentation is a deep learning method that can determine the category of each point in the aircraft skin point cloud data. It thereby accurately identifies the datum referenced for edge milling, calculates the positions of the feature points on the datum and the inclination angle of the datum plane, and completes the edge milling path planning from the calculated positions and angles.
The overall design of the method is as follows: point cloud data are acquired with a line laser sensor; the point cloud data are semantically segmented with a PointNet model; milling positions and angles are calculated from the segmented reference piece point cloud; and the edge milling path planning is then completed. Experiments prove that, compared with traditional methods, the method places fewer limits on the sensor scanning angle. In addition, line point cloud data is smaller in volume than a planar point cloud, so the computation cost is lower and the computation speed can meet the 12 ms communication interval of the KUKA robot's RSI, which facilitates online edge milling. Finally, since the geometric profile of line laser point cloud data is simpler than that of a planar point cloud, the software and hardware requirements are lower and cost is saved while high-precision edge milling is still achieved; the method therefore has broad application prospects in edge milling path planning and similar scenarios.
In general, the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation comprises the following processing steps:
1) Point cloud image preprocessing
A surface-feature point cloud image obtained by line laser scanning has high resolution, small data volume, and poor noise immunity; considering also the small feature scale and complex structure of the object to be measured and the high real-time requirements of the machining process, the method (system) must preprocess the original point cloud image to reduce errors in the subsequent feature extraction and path planning. Based on analysis of the point clouds obtained in simulated machining experiments, the method processes the acquired point cloud with threshold filtering and Gaussian filtering, and, together with the scanner's time-series mean filtering function at the hardware end, jointly optimizes the acquired point cloud, as in the sketch below.
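The following minimal Python sketch illustrates the two software-side filters of this step. It assumes each scan line arrives as an (N, 3) NumPy array of (x, y, z) points; the threshold band and the Gaussian kernel width are illustrative placeholders, not parameters taken from the invention.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_scan_line(points: np.ndarray,
                         z_min: float = -5.0, z_max: float = 50.0,
                         sigma: float = 1.5) -> np.ndarray:
    """Threshold-filter outliers by height, then Gaussian-smooth the profile."""
    # Threshold filtering: drop points whose z falls outside the plausible band.
    kept = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    # Gaussian filtering: smooth the height profile along the scan direction.
    smoothed = kept.copy()
    smoothed[:, 2] = gaussian_filter1d(kept[:, 2], sigma=sigma)
    return smoothed
```

The scanner's own time-series mean filtering runs at the hardware end and is not reproduced here.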
2) Point cloud image feature extraction
According to the machining process, the edge milling trajectory is mainly based on certain features of the reference piece, such as its chamfer edge. Therefore, within the acquired point cloud image the system must first extract and segment the machining object, distinguishing the skin from the reference piece and locating the features. The method uses a PointNet model to semantically segment the preprocessed point cloud image, obtains the reference piece point cloud image divided into different semantic segments, takes the boundary points between the chamfer segment and the reference upper-plane segment as coarse feature points, and refines them by fitting straight lines and solving for their intersection, giving accurate feature point positions, as in the sketch below.
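A minimal sketch of this coarse-to-fine feature point extraction, assuming the per-point labels from the semantic segmentation have already isolated the chamfer segment and the reference upper-plane segment as (N, 2) arrays in the scan plane; the helper names are hypothetical:

```python
import numpy as np

def fit_line_2d(pts: np.ndarray):
    """Least-squares fit y = a*x + b to an (N, 2) array; returns (a, b)."""
    a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return a, b

def refine_feature_point(chamfer_pts: np.ndarray, plane_pts: np.ndarray):
    """Intersect the fitted chamfer line with the fitted upper-plane line."""
    a1, b1 = fit_line_2d(chamfer_pts)
    a2, b2 = fit_line_2d(plane_pts)
    x = (b2 - b1) / (a1 - a2)      # assumes the two lines are not parallel
    return np.array([x, a1 * x + b1])
```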
3) Point cloud pose calculation based on time sequence
During trimming, the milling module must adjust its attitude according to the reference machining surface of the frame. The process information shows that the pitch angle of the reference surface changes, so the tool coordinate system established on the machining tool spindle must rotate with the pitch angle under system control. After the base coordinate system and the tool coordinate system are adjusted to a reasonable pose, the system adjusts the attitude about the main motion direction of the milling process as the axis. In the online trimming mode the machining process has high real-time requirements, so the system calculates the relative attitude based on the time series directly from the original images obtained by line laser scanning.
4) Path planning based on feature points
After the feature points are acquired, the system performs path planning on the basis of this series of feature points: it re-segments the path points and uses an interpolation method that exploits the trend of the feature points within each interpolation segment to calculate the machining path.
Specifically, the invention provides an aircraft skin feature recognition and edge milling path planning method based on semantic segmentation, which comprises the following steps:
s1, acquiring an original point cloud image by using a line laser scanner, and preprocessing the original point cloud image;
s2, performing semantic segmentation on the preprocessed point cloud image by using a PointNet model to obtain a reference point cloud image, and performing feature extraction on the point cloud image;
s3, calculating the point cloud attitude based on time sequence;
s4, planning a milling path based on the characteristic points;
s5, fitting and re-dividing the processing track.
Further, the preprocessing of the original point cloud image in the step S1 of the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation comprises the following steps:
s11, analyzing a point cloud result obtained by an experiment in a simulation processing process;
s12, preprocessing an original point cloud image by adopting a threshold filtering and Gaussian filtering mode according to the analysis result of the previous step;
s13, optimizing the original point cloud image by using a line laser scanner based on a time sequence approach interval mean value filtering method.
Further, in step S2 of the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation of the invention, performing semantic segmentation on the preprocessed point cloud image with the PointNet model, obtaining the reference piece point cloud image, and extracting features from the point cloud image comprises:
s21, performing deep learning training on the PointNet model in advance by utilizing point cloud data;
s22, judging the semantics of each discrete point in the preprocessed point cloud image by using a PointNet model, completing the semantic segmentation of the whole point cloud, obtaining a reference point cloud image and dividing the reference point cloud image into different semantic sections;
s23, acquiring boundary points of the chamfering section and the reference upper plane section in the semantic section as primary characteristic points;
s24, performing feature point fine extraction by adopting a mode of solving intersection points through straight line fitting, and obtaining accurate feature point positions.
Further, in the foregoing semantic segmentation-based aircraft skin feature recognition and edge milling path planning method step S21, the point cloud data is utilized to perform deep learning training on the PointNet model in advance, and training parameters of the PointNet model include: batch_size, epoch, learning_rate, decay_rate, npoint, step_size, lr_decay.
Further, in step S22 of the aforementioned aircraft skin feature recognition and edge milling path planning method based on semantic segmentation, the semantics of each discrete point in the preprocessed point cloud image are judged with the PointNet model to complete the semantic segmentation of the whole point cloud; the input of the PointNet model is the coordinates of the point cloud data, and the output of the model is m×n scores, corresponding respectively to the scores of the n input points with respect to the m categories; comprising:
estimating a 3×3 affine transformation matrix and multiplying the input point cloud by this matrix before feature extraction;
each of the n points carries a 1024-dimensional feature, and the maximum over the n points is taken to obtain a global feature;
the global feature is copied n times and concatenated with the features produced by the second-layer MLP, i.e., each point's local feature is concatenated with the global feature;
two further MLPs finally produce an output of dimension m×n, i.e., each point is classified into m classes, and a prediction score for each class is output.
Further, in the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation of the invention, the time-series point cloud attitude calculation in step S3 uses the point cloud features at adjacent moments and obtains the relative attitude of the tool coordinate system at the corresponding moment by comparing the attitude change between them, comprising:
S31, setting the motion direction of the tool coordinate system to coincide with the main motion direction of milling;
S32, rotating the tool coordinate system established on the machining tool spindle with the changing pitch angle of the reference machining surface of the reference piece, and adjusting the base coordinate system and the tool coordinate system to a reasonable pose;
S33, adjusting the attitude about the main motion direction of the milling process as the axis;
S34, in the online trimming mode, calculating the relative attitude based on the time series directly from the original point cloud image obtained by line laser scanning.
Further, the feature point based path planning in step S4 of the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation of the invention comprises: after the feature points are obtained, re-segmenting the path points on the basis of the feature points, and calculating the machining path by interpolation, using the trend of the feature points within each interpolation segment.
In another aspect, the invention also provides an aircraft skin feature recognition and edge milling path planning system based on semantic segmentation. Aimed at the aircraft skin frame structure, the system takes a KUKA industrial robot as its carrier, acquires skin trimming reference information with a line laser scanner, plans the machining path in real time according to the machining process, and guides the robot to complete the conformal motion task;
the system provides online and offline data processing modes;
the system consists of a software system and a hardware platform, where the software system is the data analysis and processing system on the upper computer, and the hardware platform comprises the upper computer, a 3D line laser scanner, a KUKA industrial robot, a milling module, a vision calibration plate, an end supporting mechanism, and the system assembly structural members.
The system operates according to the aircraft skin feature recognition and edge milling path planning method based on semantic segmentation described above.
In addition, the invention also relates to the application of the above aircraft skin feature recognition and edge milling path planning method and system based on semantic segmentation in the aircraft assembly industry.
In summary, the aircraft skin feature recognition and edge milling path planning method and system based on semantic segmentation have the following advantages:
1) Edge milling systems based on traditional vision algorithms have poor parameter adaptability and robustness, and different parameters must be matched to different working conditions; the semantic segmentation based method of the invention is directed at overcoming this defect.
2) Experiments prove that, compared with traditional methods, the method places fewer limits on the sensor scanning angle. In addition, line point cloud data is smaller in volume than a planar point cloud, so the computation cost is lower and the computation speed can meet the 12 ms communication interval of the KUKA robot's RSI, which facilitates online edge milling. Finally, since the geometric profile of line laser point cloud data is simpler than that of a planar point cloud, the software and hardware requirements are lower and cost is saved while high-precision edge milling is still achieved, giving the method broad application prospects in edge milling path planning and similar scenarios.
3) To ensure the continuity of the machining process and the real-time performance of system path planning, the system applies mean filtering to make the feature points consistent, reducing oscillation errors caused by line laser scanning, avoiding excessive path jitter, keeping parameters such as speed stable during machining, and improving the stability of the machining process.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the following drawings show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic illustration of the skin and reference piece in a workpiece according to an embodiment of the invention.
Fig. 2 is a block diagram of the software system and hardware platform of the robot intelligent trimming system.
Fig. 3 is a schematic diagram of the PointNet model structure.
Fig. 4 is a schematic diagram of single-angle experimental point cloud data annotation in an embodiment of the invention.
Fig. 5 is a schematic diagram of multi-angle mixed experimental point cloud data annotation in an embodiment of the invention.
Fig. 6 is a schematic diagram of model training test results in an embodiment of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the technical solutions of the invention are described clearly and completely below with reference to specific embodiments and the corresponding drawings. The described embodiments are evidently only some, not all, embodiments of the invention; the invention may be implemented or applied through other specific embodiments, and various modifications or changes may be made to the details of this description from different viewpoints and applications without departing from the spirit of the invention.
Meanwhile, it should be understood that the scope of the invention is not limited to the specific embodiments below, and that the terminology used in the examples serves only to describe particular embodiments and is not intended to limit the scope of the invention.
Examples: aircraft skin feature recognition and edge milling path planning method based on semantic segmentation
This embodiment mainly addresses the conformal edge milling scenario, in which the workpiece is divided into a skin part and a reference part. As shown in fig. 1, the blue portion is the skin and the red portion is the reference for conformal edge milling. Since the skin carries a reserved machining allowance, it may protrude beyond the reference, and the protruding allowance differs from position to position. Conformal edge milling of an aircraft skin requires that the skin be milled strictly to the contour of the reference piece at the corresponding position, so that the milled skin is flush with the reference.
The hardware platform is the basis for stably carrying the robot-end vision sensor and the cutting tool and for carrying out visual information acquisition and processing planning. As shown in fig. 2, it mainly comprises an upper computer, a KUKA robot, a line laser scanner, a set of cutting tools, an end supporting mechanism, and the structural members of each device.
In deep learning research on point cloud data, few networks process scene point cloud data directly. PointNet, a point cloud classification/segmentation deep learning framework proposed by Stanford University in 2016, takes raw point cloud data directly as input, preserves the spatial characteristics of the point cloud to the greatest extent, achieved good results in its tests, and laid the foundation for later deep learning network structures that process point cloud data.
The PointNet model, whose structure is shown in FIG. 3, completes the semantic segmentation of the whole point cloud by judging the semantics of each discrete point in the point cloud data. The input of the PointNet model is point cloud data (point sets), each point basically described by its coordinates (x, y, z) and optionally extended with descriptive information such as color (RGB) and normal vector; the output of the model is m×n scores, corresponding respectively to the scores of the n points with respect to the m categories.
The PointNet network uses multi-layer perceptrons (MLPs) directly instead of convolutions. A conventional convolution framework requires very regular input for weight sharing and optimization, but a point cloud is unordered and generally must first be converted into a voxel grid or put through a series of normalization steps; applying convolution would therefore introduce too much artificial processing and could corrupt the data. Max pooling is chosen in full consideration of the disorder, symmetry, and spatial-transformation invariance of the point cloud.
Since the point cloud labels do not change under spatial transformation, a 3×3 affine transformation matrix is estimated, and the input point cloud is multiplied by this estimated matrix (i.e., an affine transformation is applied to the input point cloud) before MLP feature extraction.
Each of the n input points carries a 1024-dimensional feature; the maximum over the n points (along the point-count dimension) is then taken to obtain a global feature. For the segmentation task, the global feature is copied n times and concatenated with the features from the earlier second-layer MLP, i.e., each point's local feature is concatenated with the global feature; two further MLPs then yield an output of dimension n×m, i.e., each point is classified into m classes, and a prediction score for each class is output, as sketched below.
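A condensed PyTorch sketch of the segmentation pipeline just described: shared per-point MLPs, max pooling to a 1024-dimensional global feature, concatenation of local and global features, and a per-point m-way score. The input transform (the estimated 3×3 affine matrix) is assumed to have been applied already, and the layer widths follow the published PointNet paper rather than the patent's trained model:

```python
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    def __init__(self, m_classes: int):
        super().__init__()
        # Shared per-point MLPs, implemented as 1x1 convolutions over points.
        self.mlp1 = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(),
                                  nn.Conv1d(64, 64, 1), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Conv1d(64, 128, 1), nn.ReLU(),
                                  nn.Conv1d(128, 1024, 1), nn.ReLU())
        self.head = nn.Sequential(nn.Conv1d(1024 + 64, 512, 1), nn.ReLU(),
                                  nn.Conv1d(512, 128, 1), nn.ReLU(),
                                  nn.Conv1d(128, m_classes, 1))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, n) coordinates, already multiplied by the 3x3 transform.
        local = self.mlp1(xyz)                       # (B, 64, n) local features
        feat = self.mlp2(local)                      # (B, 1024, n)
        glob = feat.max(dim=2, keepdim=True)[0]      # (B, 1024, 1) max pooling
        fused = torch.cat([local, glob.expand(-1, -1, xyz.shape[2])], dim=1)
        return self.head(fused)                      # (B, m, n) per-point scores
```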
The data acquisition angles of the single-angle experiments are 50°, 55°, and 60°; the mixed-angle experiments include acquisition angles of 40°, 45°, 50°, 55°, and 60°. The single-angle experiment trains the model with 55° point cloud data only and then tests the semantic segmentation accuracy at 50° and 60°, to examine the transferability of the model (its ability to segment point clouds at a new acquisition angle). The multi-angle experiment trains and tests on data from several acquisition angles mixed together, to examine the universality of the model (whether a single trained model suits point cloud semantic segmentation tasks at multiple acquisition angles).
In addition, in actual machining a certain milling allowance is reserved before the first pass, so the height difference between the workpiece and the reference piece in the scanned point cloud is larger in the first pass than in subsequent passes. With each milling pass, the upper surface of the workpiece drops toward the reference piece. Since the reserved allowance of the incoming material is not fixed, the shape features of the point cloud data change, which can affect the semantic segmentation accuracy of the deep learning model; a supplementary experiment therefore explores the model's ability to handle this situation.
During the experiments, the robot end moves with the milling cutter and the line laser profile sensor, acquiring and processing point cloud data of the workpiece to be milled in real time; the point clouds of all regions are framed by manually drawn polygons in the CloudCompare software to assign semantic category labels.
When 55° point cloud data are selected for labeling and training, the selected data are clipped segments (excluding the initial segments accumulated along the Y axis before the laser scanner has built up the profile, and the segments with heavy jitter and incomplete structure just before the end of the scan). As shown in fig. 4, the labeling has 6 semantic categories (label = {0,1,2,3,4,5}): the left side elevation of the workpiece (blue, category 0), the workpiece upper surface (dark green, category 1), the right side elevation of the workpiece (light green, category 2), the left side elevation of the reference piece (yellow, category 3), the chamfer of the reference piece (orange, category 4), and the upper surface of the reference piece (red, category 5).
In addition, the comprehensive labeling and model training over acquisition angles of 40°-60° uses full-segment point cloud data, to match the practical situation of laser scanning as closely as possible and to test the model's adaptability to abnormal situations (incomplete scanned structure, heavy jitter, large workpiece flaws, and the like). As shown in fig. 5, this labeling has 4 semantic categories (label = {0,1,2,3}): the workpiece upper surface (green, category 1), the chamfer surface of the reference piece (yellow, category 2), the upper surface of the reference piece (red, category 3), and other surfaces (surfaces irrelevant to the tool pose of the deburring system; blue, category 0).
Following the data set partition ratio common in deep learning (training set : validation set : test set = 8:1:1), the data set is partitioned and training batches are generated before model training.
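As a sketch, the 8:1:1 split can be reproduced with a shuffled index partition; the fixed seed is an assumption added for reproducibility:

```python
import numpy as np

def split_dataset(n_samples: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train, n_val = int(0.8 * n_samples), int(0.1 * n_samples)
    return (idx[:n_train],                  # training set (8 parts)
            idx[n_train:n_train + n_val],   # validation set (1 part)
            idx[n_train + n_val:])          # test set (1 part)
```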
The training parameters of the model are shown in the following table:
table 1 training parameters of the model
The detection results, shown in fig. 6, indicate that the deep learning point cloud semantic segmentation algorithm is suitable for the trimming system, with an accuracy of 97-98% (Table 2 below). The accuracy rule of the experiment counts a point as wrong whenever the model's semantic label differs from the manual annotation, so blurred region boundaries or inaccurate manual annotation can lower the reported accuracy; after the accuracy rule is adjusted and a small-range boundary-point blur module is added, the measured accuracy of the algorithm can be further improved.
Table 2 Accuracy statistics
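Under the stated rule, the accuracy for one point cloud reduces to the fraction of points whose predicted label matches the manual annotation, as in this small sketch:

```python
import numpy as np

def per_point_accuracy(pred: np.ndarray, label: np.ndarray) -> float:
    """pred, label: (N,) integer semantic labels for the same point cloud."""
    return float(np.mean(pred == label))
```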
Besides semantic segmentation accuracy, running speed is also a focus of attention. Because model training can be completed and the model saved before formal machining, or even during software development, the user does not need to retrain; the experiment therefore counts only the time spent in the model-use stage (semantic segmentation of unlabeled data).
Comparing the experimental results, training on 55° data and applying the model to data from other acquisition angles, or training on mixed-angle data and applying the model to multi-angle data, both show high accuracy in the experiments. The more similar the training set data and the test set data, the higher the semantic segmentation accuracy (the same acquisition angle presents the same form of point cloud data, and closer acquisition angles present more similar data forms). The data form shows mainly in the completeness of the side-elevation point clouds and of the point clouds at the gap between the workpiece and the reference piece: as the acquisition angle approaches the horizontal, the point cloud of one side elevation becomes more complete while the completeness of the other side elevation decreases.
When the whole point cloud of the machined part is raised by 1 mm along the Z axis, the semantic segmentation accuracy of the 40°-60° model on 40°-60° data drops by about 2-17 percentage points compared with the unshifted data. In addition, after the height difference between the workpiece and the reference piece grows from 0.4 mm to 0.8 mm, the segmentation accuracy also drops, because the data characteristics change after the raising operation and the gap to the training set data widens. This again matches the rule that higher similarity between test set and training set data gives higher semantic segmentation accuracy.
Therefore, to improve the model's suitability for more working conditions and acquisition angles, richer training set data are needed, or the actual working conditions and acquisition angles should be deliberately restricted to the range in which the model segments well; improving the model structure can also raise the semantic segmentation accuracy.
Besides semantic segmentation accuracy, the running speed of the model is also important. Compared with the training speed, the test-time speed matters more in practical application, since the data processing speed determines the machining speed. The running speeds under different acceleration methods, as tested, are shown in Table 3 below.
Table 3 Running speed test results

Acceleration method | Model | Single-line semantic segmentation speed
None | 55° training model | about 9.5 ms per line
Improved data reading | 40°-60° training model | about 2.5 ms per line
Improved data reading + multithreading | 40°-60° training model | about 0.91 ms per line
Overall, the method comprises the following steps:
and firstly, acquiring an original point cloud image by using a line laser scanner, and preprocessing the original point cloud image.
And secondly, performing semantic segmentation on the preprocessed point cloud image by using a PointNet model to obtain a reference point cloud image, and performing feature extraction on the point cloud image.
And (III) calculating the point cloud gesture based on time sequence.
And (IV) planning a milling path based on the characteristic points.
The measurement standard used by the method consists of a standard frame (reference piece) and the skin; the standard frame and its feature structures are the reference basis for conformal machining during trimming, so the acquired point cloud must undergo region segmentation and structure segmentation.
After the original point cloud image is filtered, the skin and frame structures are obtained, and within the frame structure the different feature points, and the feature structures they divide (chamfers, planes, and so on), can be clearly distinguished. Owing to the point cloud imaging characteristics, the image consists mainly of discrete point cloud data, and a single sample from the line laser scanner is an ordered point cloud line; information analysis can therefore be performed on determined points in the point cloud to judge their meaning and complete the segmentation of the point cloud structure.
In the method, the PointNet model performs semantic segmentation on the preprocessed point cloud image, obtaining the reference piece point cloud image, from which features are extracted. The PointNet model is first trained by deep learning on point cloud data; it then judges the semantics of each discrete point in the preprocessed point cloud image, completing the semantic segmentation of the whole point cloud and yielding the reference piece point cloud image divided into different semantic segments. The boundary points between the chamfer segment and the reference upper-plane segment are taken as coarse feature points, and on this basis the feature points are refined by fitting straight lines and solving for their intersection, giving accurate feature point positions.
During actual machining, the system must adjust the attitude according to the inclination of the reference frame in order to control the tool coordinate system on the milling module and thereby drive the cutter adjustment, achieving accurate conformal machining. The traditional pose solution calculates the inclination angle in the vision coordinate system from the line point cloud plane structure acquired by the line laser scanner, converts it into the tool coordinate system and the robot base coordinate system, solves the spatial pose of the base coordinate system, makes the machining spindle of the tool coordinate system coincide with the tool base coordinate system, and performs the tool coordinate system solution and the corresponding inverse solution of the arm pose at every moment. This process, however, involves many coordinate transformations, long transformation times, and limited solution accuracy. Therefore, while guaranteeing the machining process, the system combines the actual machining procedure and the control method to provide a comparatively efficient, high-precision relative pose calculation method.
The system sets the motion direction of the tool coordinate system to coincide with the base coordinate system and the main motion direction of actual machining, reducing the attitude degrees of freedom to be adjusted as far as possible and restricting them to pitch angle adjustment. Exploiting the consistency and continuity of the machined skin and the reference frame structure during machining, the system then converts the complex attitude change into a feature-based continuous change process, i.e., it replaces absolute attitude solution at isolated moments with relative attitude solution over continuous time.
The time-series point cloud relative attitude calculation mainly uses the point cloud features at adjacent moments and obtains the relative attitude of the tool coordinate system at the corresponding moment by comparing their attitude change. At adjacent measurement moments, the vision measurement coordinate system changes pose in the base coordinate system along with the current arm-end tool coordinate system; thus, although the measured reference stays essentially unchanged thanks to structural consistency and continuity, its plane attitude changes noticeably in the vision measurement plane. Because of the machining process, the orientation of the milling tool is determined by the plane outside the chamfer, so the features corresponding to that plane structure directly determine the direction of the tool spindle. After feature point extraction and feature structure segmentation, the system fits a straight line to the feature, obtains the point-direction form of the line on which the feature lies, and derives the unit plane vector of that direction in the vision coordinate system. Likewise, the system extracts the feature structure unit vector at the preceding moment from the preceding point cloud image. The relation is given by the following formulas:
Let $\vec{a}_{i-1}$ denote the feature unit vector at time $T_{i-1}$ and $\vec{a}_i$ the feature unit vector at time $T_i$; the attitude transformation of the tool coordinate system from $T_{i-1}$ to $T_i$ can then be calculated from the attitude transformation angle of the arm end. The angle follows from the dot product of the two vectors:

$\theta_i = \arccos\left(\dfrac{\vec{a}_{i-1} \cdot \vec{a}_i}{\|\vec{a}_{i-1}\| \, \|\vec{a}_i\|}\right)$

and the rotation direction can be determined from the cross product of the vectors, taking the sign of its component along the main motion axis $\vec{e}$:

$\mathrm{sgn}\left((\vec{a}_{i-1} \times \vec{a}_i) \cdot \vec{e}\right)$
after the characteristic points based on the point cloud images are extracted and optimized, the characteristic points can describe the path more accurately, but still have a certain degree of fluctuation. Therefore, the system optimizes and fits the characteristic points for generating the path so as to ensure the accuracy and smoothness of the final path point in the online and offline modes.
Based on the original path points, the system adopts a point cloud smoothing filter: the path points are iteratively processed with a mean filter to obtain a point cloud path with stronger correlation and consistency, as in the following formula:

$p_i' = \dfrac{1}{2n+1} \sum_{j=i-n}^{i+n} p_j$

The system centers on a target point and averages it together with the points within range $n$ of its neighborhood; $n = 3$ is set initially.
This largely optimizes the path points and describes the machining path more accurately. However, because the system adopts interval processing during sampling and point cloud processing to increase adaptability, and uses redundant periods to avoid response delays caused by unstable processing time, the path points cannot be used directly to guide the machining path, and the system must re-divide them. With the interval period known, the system processes the path points by interpolation; assuming linear interpolation over each interval, the re-divided points are given by

$P_i = P_0 + \dfrac{i}{8}\,(P_8 - P_0)$, where $i = 0, 1, 2, \dots, 7, 8$.
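Under the linear-interpolation assumption above, the re-division of one interval into eight sub-intervals is a short sketch:

```python
import numpy as np

def redivide(p_start: np.ndarray, p_end: np.ndarray, segments: int = 8):
    """Return the 9 points P_i = P_start + (i/segments) * (P_end - P_start)."""
    t = np.linspace(0.0, 1.0, segments + 1)           # i / 8 for i = 0..8
    return p_start[None, :] + t[:, None] * (p_end - p_start)[None, :]
```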
In the offline processing mode, the system additionally performs spatial high-order curve fitting on these path points: a least squares method is used, and the final curve parameters are obtained by solving an overdetermined system of equations. For a polynomial curve of order N (here N = 5, giving six coefficients),

$y(x) = k_0 + k_1 x + k_2 x^2 + \cdots + k_5 x^5$

the fit can be assembled into the system of equations:

$Y = XK;$

where Y is an [M×1] matrix composed of the parameter points, X is an [M×6] matrix composed of the parameter points, M is the number of coordinate points participating in the fit, and K is the [6×1] coefficient matrix to be solved. The least squares solution of the overdetermined system gives:

$K = (X^{T}X)^{-1}X^{T}Y;$

and the coefficient matrix K is then obtained by this matrix operation.
By this method the system acquires the high-order fitted curve for the group of feature points; the obtained path is divided proportionally by quantitative interception, re-dividing the path points and providing accurate machining path guidance for the system, as sketched below.
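A sketch of the offline fit, building the [M×6] design matrix for a fifth-order polynomial and solving the overdetermined system by least squares (numerically equivalent to the normal-equation solution above; the path parameter s is assumed to be given):

```python
import numpy as np

def fit_curve(s: np.ndarray, y: np.ndarray) -> np.ndarray:
    """s: (M,) path parameter; y: (M,) coordinate values; returns K (6,)."""
    X = np.vander(s, N=6, increasing=True)      # columns 1, s, s^2, ..., s^5
    K, *_ = np.linalg.lstsq(X, y, rcond=None)   # solves min ||X K - y||^2
    return K
```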
The invention is not limited to the preferred embodiments described above; any equivalent modifications and variations made in light of the invention by persons skilled in the art, without departing from its scope, are intended to be encompassed by the appended claims.

Claims (8)

1. An aircraft skin feature recognition and edge milling path planning method based on semantic segmentation is characterized by comprising the following steps:
s1, acquiring an original point cloud image by using a line laser scanner, and preprocessing the original point cloud image;
s2, performing semantic segmentation on the preprocessed point cloud image by using a PointNet model to obtain a reference point cloud image, and performing feature extraction on the point cloud image;
s3, calculating the point cloud attitude based on time sequence;
s4, planning a milling path based on the feature points.
2. The method for identifying aircraft skin features and planning edge milling paths based on semantic segmentation according to claim 1, wherein the preprocessing of the original point cloud image in step S1 comprises:
s11, analyzing a point cloud result obtained by an experiment in a simulation processing process;
s12, preprocessing an original point cloud image by adopting a threshold filtering and Gaussian filtering mode according to the analysis result of the previous step;
s13, optimizing the original point cloud image by using a line laser scanner based on a time sequence approach interval mean value filtering method.
3. The aircraft skin feature recognition and edge milling path planning method based on semantic segmentation according to claim 1, wherein the performing semantic segmentation on the preprocessed point cloud image using the PointNet model in step S2 to obtain a reference point cloud image, and performing feature extraction on the point cloud image comprises:
s21, performing deep learning training on the PointNet model in advance by utilizing point cloud data;
s22, judging the semantics of each discrete point in the preprocessed point cloud image by using a PointNet model, completing the semantic segmentation of the whole point cloud, obtaining a reference point cloud image and dividing the reference point cloud image into different semantic sections;
s23, acquiring boundary points of the chamfering section and the reference upper plane section in the semantic section as primary characteristic points;
s24, performing feature point fine extraction by adopting a mode of solving intersection points through straight line fitting, and obtaining accurate feature point positions.
4. The method for identifying aircraft skin features and planning edge milling paths based on semantic segmentation according to claim 3, wherein the training parameters of the PointNet model in step S21 include: batch_size, epoch, learning_rate, decay_rate, npoint, step_size, lr_decay.
5. The method for aircraft skin feature recognition and edge milling path planning based on semantic segmentation according to claim 4, wherein in step S22 the semantics of each discrete point in the preprocessed point cloud image are judged with the PointNet model to complete the semantic segmentation of the whole point cloud; the input of the PointNet model is the coordinates of the point cloud data, and the output of the model is m×n scores, corresponding respectively to the scores of the n input points with respect to the m categories; comprising:
estimating a 3×3 affine transformation matrix and multiplying the input point cloud by this matrix before feature extraction;
each of the n points carries a 1024-dimensional feature, and the maximum over the n points is taken to obtain a global feature;
the global feature is copied n times and concatenated with the features produced by the second-layer MLP, i.e., each point's local feature is concatenated with the global feature;
two further MLPs finally produce an output of dimension m×n, i.e., each point is classified into m classes, and a prediction score for each class is output.
6. The method for aircraft skin feature recognition and edge milling path planning based on semantic segmentation according to claim 1, wherein the time-series point cloud attitude calculation in step S3 uses the point cloud features at adjacent moments and obtains the relative attitude of the tool coordinate system at the corresponding moment by comparing the attitude change between them, comprising:
S31, setting the motion direction of the tool coordinate system to coincide with the main motion direction of milling;
S32, rotating the tool coordinate system established on the machining tool spindle with the changing pitch angle of the reference machining surface of the reference piece, and adjusting the base coordinate system and the tool coordinate system to a reasonable pose;
S33, adjusting the attitude about the main motion direction of the milling process as the axis;
S34, in the online trimming mode, calculating the relative attitude based on the time series directly from the original point cloud image obtained by line laser scanning.
7. The method for identifying aircraft skin features and planning edge milling paths based on semantic segmentation according to claim 1, wherein the path planning based on feature points in step S4 comprises: after the characteristic points are obtained, path point segmentation is carried out again on the basis of the characteristic points, an interpolation method is adopted, and the processing path is obtained through calculation by utilizing the change trend of each characteristic point in the interpolation section.
8. The aircraft skin feature recognition and edge milling path planning system based on semantic segmentation is characterized in that the system aims at a skin frame structure of an aircraft, KUKA industrial robots are used as carriers, line laser scanners are used for acquiring skin trimming reference information, processing paths are planned in real time according to processing technology, and the robots are guided to complete shape following movement tasks;
the system provides an online or offline data processing mode;
the system consists of a software system and a hardware platform, wherein the software system is a data analysis and processing system positioned on an upper computer; the hardware platform comprises an upper computer, a 3D line laser scanner, a KUKA industrial robot, a milling module, a visual calibration plate, a tail end supporting mechanism and a system assembly structural member.
CN202311145421.9A 2023-09-06 2023-09-06 Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation Pending CN117103266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311145421.9A CN117103266A (en) 2023-09-06 2023-09-06 Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311145421.9A CN117103266A (en) 2023-09-06 2023-09-06 Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation

Publications (1)

Publication Number Publication Date
CN117103266A true CN117103266A (en) 2023-11-24

Family

ID=88796298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311145421.9A Pending CN117103266A (en) 2023-09-06 2023-09-06 Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation

Country Status (1)

Country Link
CN (1) CN117103266A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974719A (en) * 2024-03-28 2024-05-03 深圳新联胜光电科技有限公司 Processing tracking and detecting method, system and medium for optical lens


Similar Documents

Publication Publication Date Title
CN110434671B (en) Cast member surface machining track calibration method based on characteristic measurement
CN109623656B (en) Mobile double-robot cooperative polishing device and method based on thickness online detection
US12061078B2 (en) On-machine inspection and compensation method employing point clouds and applied to complex surface processing
CN109900706B (en) Weld joint based on deep learning and weld joint defect detection method
CN110227876A (en) Robot welding autonomous path planning method based on 3D point cloud data
CN103678754B (en) Information processor and information processing method
CN112614098B (en) Blank positioning and machining allowance analysis method based on augmented reality
CN117103266A (en) Aircraft skin feature recognition and edge milling path planning method based on semantic segmentation
CN105868498A (en) Scanning line point cloud based skin boundary feature reconstruction method
CN111531407B (en) Workpiece attitude rapid measurement method based on image processing
CN112697058A (en) Machine vision-based large-size plate assembly gap on-line measurement system and method
CN110103071B (en) Digital locating machining method for deformed complex part
CN112729112B (en) Engine cylinder bore diameter and hole site detection method based on robot vision
CN114353690B (en) On-line detection device and detection method for roundness of large aluminum alloy annular forging
CN111536872A (en) Two-dimensional plane distance measuring device and method based on vision and mark point identification device
CN115464669B (en) Intelligent optical perception processing system based on intelligent welding robot and welding method
CN114283139A (en) Weld joint detection and segmentation method and device based on area array structured light 3D vision
CN117433430A (en) System and method for detecting size of steel plate cutting part
CN118081767A (en) Automatic programming system and method for post-processing machining of casting robot
CN118386236A (en) Teaching-free robot autonomous welding polishing method based on combination of line laser scanning and stereoscopic vision
Wang et al. Towards region-based robotic machining system from perspective of intelligent manufacturing: A technology framework with case study
CN116909208B (en) Shell processing path optimization method and system based on artificial intelligence
CN117111545A (en) Skin milling path real-time planning method based on line laser
CN111376272A (en) Robot measurement path planning method for three-dimensional scanning process of shell structure
CN110021027B (en) Edge cutting point calculation method based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination