CN116091401A - Spacecraft assembly part identification positioning method based on target detection and composite target code


Info

Publication number
CN116091401A
Authority
CN
China
Prior art keywords
target
target code
assembly
information
camera
Prior art date
Legal status
Pending
Application number
CN202211489693.6A
Other languages
Chinese (zh)
Inventor
武子科
潘攀
赵海绮
刘杰强
邢景仪
艾婷
贾冬宇
张显
Current Assignee
Beijing Dongfang Measurement and Test Institute
Original Assignee
Beijing Dongfang Measurement and Test Institute
Priority date
Filing date
Publication date
Application filed by Beijing Dongfang Measurement and Test Institute filed Critical Beijing Dongfang Measurement and Test Institute
Priority to CN202211489693.6A priority Critical patent/CN116091401A/en
Publication of CN116091401A publication Critical patent/CN116091401A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to a spacecraft assembly part identification positioning method based on target detection and composite target codes, which comprises the following steps: calibrating a camera by using a Zhang Zhengyou camera calibration method, and correcting the captured image; identifying and coarsely positioning a target assembly in the image by utilizing an improved YOLOv3 target detection algorithm; moving a camera to align a visual center to a center of a target code of the target assembly; identifying the target code, acquiring corresponding assembly part information, and checking the identification result of a target detection algorithm; and if the identification result is correct, fine positioning the target assembly part according to the target code by utilizing a composite target code technology. According to the method, the identification and the positioning of the assembly parts of the spacecraft are realized by introducing a target detection algorithm and a composite target code technology, so that the accuracy and the stability of the assembly of the spacecraft are improved.

Description

Spacecraft assembly part identification positioning method based on target detection and composite target code
Technical Field
The invention relates to the technical field of target detection, in particular to a spacecraft assembly part identification positioning method based on target detection and composite target code technology.
Background
With the further advancement of Industry 4.0, the metrology and test work for spacecraft is developing toward intelligence and automation. Because spacecraft accessories are produced in many varieties and in small batches, their assembly processes often involve long processing routes and mixed-line production of multiple types, so the development and application of automated assembly platforms is particularly important.
Automated assembly platforms have already been applied in a number of fields such as complete automobile assembly, special-shaped die assembly and automatic bulb assembly. Related research directions include assembly and disassembly assisting systems based on virtual reality, high-precision automatic assembly systems based on visual and force information, dual-arm cooperative robot automatic assembly systems, and flexible automatic assembly systems suited to small-batch, multi-type assembly. Related research on spacecraft accessory assembly includes identification and positioning of rivets grasped by robots based on binocular cameras, automatic measurement of assembly precision in spacecraft assembly, integration and test, positioning mechanism design in spacecraft assembly, and shaft-hole part assembly systems.
The aerospace industry in China still mainly relies on traditional means to complete spacecraft assembly work; the accuracy and stability achieved are relatively poor and international standards are difficult to meet. An automated assembly platform can effectively solve this problem, but few such applications have truly been realized, and in particular there is no intelligent system suitable for a variety of assembly tasks. Among the key technologies of the automated assembly platform, apart from system calibration and two-dimensional code encoding and decoding, which are mature and standardized, the other technologies still have large room for development, and their applicability in such a system has not yet been verified.
In the spacecraft fitting assembly process, the fittings must be identified and positioned precisely. The precise positioning methods currently used at home and abroad mainly comprise laser methods, image processing methods and two-dimensional code positioning methods. Laser methods may cause unexpected damage to spacecraft accessories, so image processing and two-dimensional code positioning are more applicable. However, although image processing can identify spacecraft accessories well, it is difficult to obtain sufficiently accurate positioning data from it, and the existing two-dimensional code positioning method depends on the spacecraft being held in a fixed position and is troublesome to operate.
Disclosure of Invention
Aiming at the defects of existing spacecraft assembly, the invention aims to provide a spacecraft assembly part identification positioning method based on target detection and composite target codes, so as to improve the accuracy and stability of spacecraft assembly part identification and positioning.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
the invention provides a spacecraft assembly part identification positioning method based on target detection and composite target codes, which comprises the following steps:
calibrating a camera by using a Zhang Zhengyou camera calibration method, and correcting the captured image;
identifying and coarsely positioning a target assembly in the image by utilizing an improved YOLOv3 target detection algorithm;
moving a camera to align a visual center to a center of a target code of the target assembly;
identifying the target code, acquiring corresponding assembly part information, and checking the identification result of a target detection algorithm;
and if the identification result is correct, fine positioning the target assembly part according to the target code by utilizing a composite target code technology.
According to one aspect of the invention, calibrating a camera using a Zhang Zhengyou camera calibration method, correcting an acquired image, comprises:
printing and fixing a plane calibration plate formed by checkerboards;
the plane calibration plates are placed at different positions, and the images of different poses are acquired by using cameras to shoot;
and detecting the characteristics of the plane calibration plate to obtain the internal and external parameters and distortion parameters of the camera, and carrying out optimization estimation on the imaging precision.
In accordance with one aspect of the present invention, identifying and coarsely locating a target assembly in an image using a modified YOLOv3 target detection algorithm includes:
constructing a data set of assembly part images, and dividing the data set into a training set and a testing set;
constructing an improved YOLOv3 target detection network;
training the improved YOLOv3 target detection network by using a training set, and iterating until a loss function is no longer reduced to obtain a weight file;
and testing the images in the test set by using the weight file, identifying a target assembly, and roughly estimating the pose of the target assembly under a standard coordinate system.
According to one aspect of the invention, constructing a dataset of fitting images includes:
acquiring assembly part images containing different workpiece types and different shooting distances;
preprocessing the assembly part image through data format conversion and equal proportion scaling to form a basic data set;
expanding image data of the images in the basic data set through image rotation operation, and carrying out data enhancement by combining an adjacent interpolation method to obtain an enhanced data set;
and marking the images in the enhancement data set, and recording the starting point coordinates, the length, the width and the classification information of the images to obtain the marked enhancement data set.
According to one aspect of the invention, an improved YOLOv3 target detection network is constructed, comprising: the GIOU is used instead of the intersection over union (IOU) as the indicator of the target detection effect, and the Mish activation function is used instead of the ReLU activation function.
According to one aspect of the invention, moving the camera to center the vision on the center of the target code of the target assembly includes:
acquiring visual information around the mechanical arm by using a visual sensor, and determining pose information;
the force protection sensor is used for generating feedback information when the mechanical arm collides;
performing obstacle detection, target recognition and pose analysis according to pose information and feedback information, and making a decision;
and controlling the mechanical arm to move the camera to a specified position according to the decision information, so that the visual center is aligned to the center of the target code of the target assembly.
According to one aspect of the invention, identifying the target code, obtaining corresponding fitting information, and verifying the identification result of target detection includes:
identifying information in a target code image captured by a camera by utilizing a composite target code technology;
inquiring corresponding assembly part information in a database according to index information of the information storage layer of the target code;
and comparing the assembly part information with the identification result of the target detection algorithm, and re-detecting the target assembly part if the identification result is wrong.
According to one aspect of the invention, fine positioning of the target assembly according to the target code using a composite target code technique includes:
calculating the distance between the camera and the center point of the target code, and establishing a target code coordinate system according to the target code image captured by the camera;
extracting corner points in the target code image by using a Harris corner point detection algorithm;
calculating the rotation angle of the target code image in space according to the corner points;
determining the accurate position relation between assembly part information and the target assembly part according to assembly part information corresponding to the target code, and determining the accurate positions of the angular points and the rotation angles under a target code coordinate system;
and calculating the accurate position of the corner point under a standard coordinate system according to the rotation angle.
According to one aspect of the invention, extracting corner points in the object code image using a Harris corner point detection algorithm comprises:
calculating the gradient of the pixel point of the target code image in the horizontal direction and the vertical direction and the product of the gradient and the gradient, and generating a matrix:
performing Gaussian filtering on the target code image to obtain a new matrix;
calculating an interest value of each corresponding pixel point on the original target code image;
selecting a pixel point corresponding to a local maximum interest value;
setting a threshold value and selecting corner points (x_{i,j}, y_{i,j}, 0), wherein i is the transverse i-th corner point and j is the longitudinal j-th corner point.
According to one aspect of the invention, the accurate position of the corner point under the standard coordinate system is calculated according to the rotation angle, and a specific calculation formula is as follows:
[Equation image BDA0003962968710000041 of the original publication]
wherein the coordinates of the precise position are (x'_{i,j}, y'_{i,j}, z'_{i,j}); the coordinates of the corner point are (x_{i,j}, y_{i,j}, 0), i being the transverse i-th corner point and j the longitudinal j-th corner point; alpha represents the rotation angle of the longitudinally adjacent corner points of the target code image along the x-axis direction, beta represents the rotation angle of the transversely adjacent corner points of the target code image along the y-axis direction, and theta represents the average value of the rotation angles of the adjacent corner points in all rows and columns of the target code image along the z-axis direction.
Compared with the prior art, the invention has the following beneficial effects:
according to the scheme of the invention, aiming at the defects existing in the assembly of the existing spacecraft, the identification and the positioning of the assembly parts of the spacecraft are realized by introducing the target detection algorithm and the novel composite target code, and the accuracy and the stability of the assembly of the spacecraft are improved. The assembly part identification and rough positioning are realized by using the improved YOLOv3 target detection algorithm, and the algorithm has high detection speed and good precision. Meanwhile, the novel composite target code technology is combined with a target detection algorithm, so that the assembly part can be identified and precisely positioned.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 schematically illustrates a flowchart of an implementation of a method for identifying and locating a spacecraft assembly based on target detection and composite target codes according to an embodiment of the invention;
FIG. 2 schematically illustrates a composite target code provided by an embodiment of the present invention;
FIG. 3 schematically shows a flowchart of an implementation of a camera calibration method according to an embodiment of the present invention;
FIG. 4 schematically shows a flow chart for implementing the data set construction provided by the embodiment of the invention;
FIG. 5 schematically illustrates an implementation flow chart of an alignment process between a camera and a target code center provided by an embodiment of the present invention;
FIG. 6 is a schematic flow chart of the process of identifying object codes and detecting the identification result of the object according to the embodiment of the present invention;
FIG. 7 schematically illustrates a flowchart for implementing a process for fine positioning of a fitting based on a composite object code technique provided by an embodiment of the present invention;
FIG. 8 schematically illustrates a rotation of a target code along an x-axis direction according to an embodiment of the present invention;
FIG. 9 schematically illustrates a rotation of a target code along a y-axis direction according to an embodiment of the present invention;
fig. 10 schematically illustrates rotation of a target code along the z-axis direction according to an embodiment of the present invention.
Detailed Description
The description of the embodiments of this specification should be read in conjunction with the accompanying drawings, which are to be regarded as part of the complete written description. In the drawings, the shapes or thicknesses of the embodiments may be exaggerated or indicated simply for convenience. Furthermore, parts of the structures in the drawings are described separately; it should be noted that elements not shown or described in the drawings are of a form known to those of ordinary skill in the art.
Any references to directions and orientations in the description of the embodiments herein are for convenience only and should not be construed as limiting the scope of the invention in any way. The following description of the preferred embodiments will refer to combinations of features, which may be present alone or in combination, and the invention is not particularly limited to the preferred embodiments. The scope of the invention is defined by the claims.
According to the conception of the invention, this embodiment discloses a spacecraft assembly part identification positioning method based on target detection and composite target codes. By jointly introducing a target detection algorithm and a novel composite target code technology, identification and positioning of spacecraft assembly parts are realized, and the accuracy and stability of spacecraft assembly are improved. The method comprises: calibrating the camera with the Zhang Zhengyou camera calibration method; performing target detection with the improved YOLOv3 target detection algorithm to identify the target assembly part and roughly estimate its pose in the standard coordinate system from its pose in the image; moving the camera so that the visual center is aligned with the center of the target code; identifying the information in the target code, querying the assembly part information corresponding to the target code in the database, and checking the target detection and identification result; and, after the target detection and identification result is confirmed to be correct, finely positioning the target assembly part according to the target code. As shown in fig. 1, the method specifically includes the following steps:
step S101, calibrating a camera by using a Zhang Zhengyou camera calibration method, and correcting the captured image.
In this embodiment, a camera is required to obtain images. Owing to factors such as the three-dimensional spatial relationship between the photographed object and the camera, lens distortion and camera quality, the imaging system deviates from the ideal pinhole model and the acquired images contain geometric distortion; in particular, when a wide-angle lens is used, radial distortion seriously degrades image quality and makes image processing difficult. It is therefore necessary to correct the geometrically distorted images, that is, to calibrate the camera. The Zhang Zhengyou camera calibration method is used to calibrate the camera parameters. The Zhang Zhengyou method is based on a single-plane checkerboard; it overcomes the need for a high-precision calibration object in traditional calibration methods, achieves higher precision than self-calibration methods, and is convenient to operate.
The camera imaging principle can be described by the following formula:
x=K[R|t]X
wherein x is the coordinates in the camera image, X is the real-world coordinates, K is the intrinsic parameter matrix, and [R|t] is the extrinsic parameter matrix, so the following formula can be written:
K = [ f  s  x_0 ;  0  alpha*f  y_0 ;  0  0  1 ]
wherein f is the focal length, s is the distortion (skew) parameter, (x_0, y_0) are the coordinates of the center point, and alpha is the proportional parameter. The camera calibration process is the process of solving the intrinsic and extrinsic parameter matrices.
In one embodiment, as shown in fig. 3, in step S101, the calibration of the camera by using the Zhang Zhengyou camera calibration method, the specific implementation process of correcting the captured image includes: the planar calibration plate formed by the checkerboard is printed and fixed, for example, the intersection number of the planar calibration plate is 10×10, and the planar calibration plate can be fixed on an object which is not easy to deform. The planar calibration plate is placed at different positions and photographed using a camera to capture images of different poses, for example 12 images of different poses. And detecting the characteristics of the plane calibration plate to obtain the internal and external parameters and distortion parameters of the camera, and optimizing and estimating the imaging precision to improve the estimation precision.
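For illustration only, the calibration and correction procedure above can be prototyped with OpenCV's implementation of Zhang's method; the sketch below is not part of the original disclosure, and the board layout, file paths and variable names in it are assumptions.

    import glob
    import cv2
    import numpy as np

    pattern = (10, 10)   # inner-corner grid, matching the 10x10-intersection example (assumed layout)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)   # planar board, z = 0

    obj_points, img_points, gray = [], [], None
    for path in glob.glob("calib/*.png"):   # e.g. 12 images of the fixed plate in different poses
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K, distortion coefficients, and extrinsics [R|t] for every view
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Correct a newly captured image with the estimated parameters
    undistorted = cv2.undistort(cv2.imread("capture.png"), K, dist)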
Step S102, the improved YOLOv3 target detection algorithm is utilized to identify and coarsely position the target assembly in the image.
In one embodiment, as shown in fig. 4, the specific implementation process of identifying and coarsely locating the target assembly in the image by using the modified YOLOv3 target detection algorithm in step S102 includes: constructing a data set of assembly part images, and dividing the data set into a training set and a testing set; constructing an improved YOLOv3 target detection network; training the improved YOLOv3 target detection network by using a training set, and iterating until a loss function is no longer reduced to obtain a weight file; and testing the images in the test set by using the weight file, identifying a target assembly, and roughly estimating the pose of the target assembly under a standard coordinate system.
The data set used in this embodiment is built on the spacecraft accessory automated assembly system; the data set must be constructed before target detection with YOLOv3 can be performed.
Specifically, the specific implementation process of constructing the data set of assembly part images in the above steps includes: acquiring assembly part images covering different workpiece types and different shooting distances. For example, 45 original spacecraft assembly part images are acquired, covering 5 workpieces and 3 shooting distances, i.e. 15 categories with three images per category. The original images can only be used after preprocessing, so the assembly part images are preprocessed by data format conversion and equal-proportion scaling to form a basic data set; each preprocessed image is represented as a 416×416×3 matrix adapted to the input of the YOLO network. Because spacecraft assembly parts are characterized by large size, many types, small batch quantities and susceptibility to collision damage, the images in the basic data set may suffer from occlusion, incomplete display and similar problems; performing data enhancement on the images in the basic data set can therefore strengthen the recognition capability of the network and improve its stability. The image data of the basic data set are expanded by image rotation, and data enhancement is performed in combination with nearest-neighbour interpolation to obtain an enhanced data set. For example, each picture in the basic data set is rotated five times by random angles, expanding the original picture into six pictures, while nearest-neighbour interpolation is used to fill the holes produced by rotation, so that an enhanced data set containing 270 pictures is obtained. The images in the enhanced data set are then labeled, and the starting point coordinates, length, width and classification information of each image are recorded to obtain the labeled enhanced data set, which is divided into a sample training set and a test set. In order to determine the prior frame sizes of the algorithm, the sizes of the existing annotation frames are clustered with the k-means method, giving 9 prior frame sizes at three resolutions. The prior frame sizes at 32-fold downsampling are 323×340, 283×259 and 200×237; the prior frame sizes at 16-fold downsampling are 249×175, 166×181 and 149×140; the prior frame sizes at 8-fold downsampling are 117×116, 89×94 and 57×58.
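The rotation-based augmentation described in this paragraph could be sketched as follows; this is not part of the original disclosure, and the angle range, paths and function names are assumptions.

    import random
    import cv2

    def augment_by_rotation(image, n_rotations=5):
        """Return the original image plus n_rotations randomly rotated copies."""
        h, w = image.shape[:2]
        out = [image]
        for _ in range(n_rotations):
            angle = random.uniform(0.0, 360.0)           # random rotation angle (assumed range)
            m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
            # nearest-neighbour interpolation resamples the rotated pixels without creating new grey levels
            out.append(cv2.warpAffine(image, m, (w, h), flags=cv2.INTER_NEAREST))
        return out

    # 45 base images x 6 = 270 images in the enhanced data set
    # enhanced = [img for base in base_images for img in augment_by_rotation(base)]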
Specifically, in the implementation of constructing the improved YOLOv3 target detection network in the above steps, the GIOU is used to replace the intersection over union (IOU) as the index for measuring the target detection effect, and the Mish activation function is used to replace the ReLU activation function.
Illustratively, the input is an RGB image with a resolution of 416×416, and the outputs are the results of three search grids of 13×13, 26×26 and 52×52. Each search grid cell includes the search results of 3 sample boxes of different sizes, each consisting of 4 bounding-box coordinates, the confidence of the frame and 5 object class scores (corresponding to 4 assembly part classes and an empty class), so each grid cell contains a total of 3×(4+1+5)=30 output values. The confidence is expressed by the following formula:
Confidence = Pr(object) × IOU_pred^truth
wherein Pr(object) indicates whether a target exists and takes the value 1 or 0, and IOU_pred^truth is the intersection over union (IOU), which expresses the overlap ratio between the true area and the prediction frame; the larger the value, the better the detection effect. It is calculated as follows:
IOU = area(pred ∩ truth) / area(pred ∪ truth)
here, area(pred) represents the detection frame area and area(truth) represents the true value area. The IOU has the advantages of non-negativity, independence of scale and the like; however, it can only be evaluated when the bounding box to be solved overlaps the real frame, it cannot measure the separation of non-overlapping boxes, and it cannot judge the relation between the bounding box and the real frame, for example whether their directions are consistent; in particular, when the target object and the detection frame are at an angle to each other, detection and identification cannot be carried out. Thus, GIOU is used instead of IOU; the formula for GIOU is as follows:
GIOU = IOU - (A_C - U) / A_C
wherein A_C is the area of the minimum closure region of the predicted frame and the real frame, i.e. the area of the smallest frame containing both the predicted frame and the real frame, and U represents the area of the union of the predicted frame and the real frame.
GIOU has good properties and operability, and the main characteristics are as follows:
(1) Similar to the IOU, the GIOU can be used to compute a loss function, which can be expressed as:
L_GIOU = 1 - GIOU
(2) GIOU is insensitive to feature maps of different scales;
(3) The value range of GIOU is [-1, 1]. When the two frames are infinitely far apart, the value of GIOU tends to -1; when the two rectangles coincide exactly, the value of GIOU is 1, at which point GIOU = IOU.
(4) Unlike the IOU focusing on only the overlapping area, the GIOU focuses on not only the overlapping area but also other non-overlapping areas, and can better reflect the overlapping ratio of the two areas.
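As a concrete illustration of the IOU and GIOU formulas above (not taken from the patent text), the following sketch computes the IOU, GIOU and GIOU loss for two axis-aligned boxes given as (x1, y1, x2, y2); the box values are arbitrary examples.

    def iou_giou(pred, truth):
        """IOU and GIOU for two axis-aligned boxes given as (x1, y1, x2, y2)."""
        px1, py1, px2, py2 = pred
        tx1, ty1, tx2, ty2 = truth
        # intersection and union areas
        inter_w = max(0.0, min(px2, tx2) - max(px1, tx1))
        inter_h = max(0.0, min(py2, ty2) - max(py1, ty1))
        inter = inter_w * inter_h
        union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
        iou = inter / union
        # A_C: area of the smallest frame enclosing both the predicted and the real frame
        a_c = (max(px2, tx2) - min(px1, tx1)) * (max(py2, ty2) - min(py1, ty1))
        giou = iou - (a_c - union) / a_c
        return iou, giou

    iou, giou = iou_giou((10, 10, 50, 50), (30, 30, 70, 70))
    loss_giou = 1.0 - giou    # GIOU loss, property (1) above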
The main purpose of the activation function is to introduce non-linearity into the input signal; it is responsible for mapping the input of a neuron to its output. The Mish activation function is used instead of the ReLU activation function; its formula is as follows:
f(x) = x · tanh(ln(1 + e^x))
Compared with the ReLU activation function, the positive part of the Mish activation function can reach infinity, which avoids saturation caused by capping, while small negative values are preserved and give better gradient flow. Such a smooth activation function can express deeper information and propagate it better, thereby improving accuracy.
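A direct, illustrative transcription of the Mish formula above; the ReLU comparison is shown only for reference and is not part of the original disclosure.

    import math

    def mish(x):
        # f(x) = x * tanh(ln(1 + e^x)); log1p(exp(x)) is the softplus term
        return x * math.tanh(math.log1p(math.exp(x)))

    def relu(x):
        return max(0.0, x)

    # negative inputs keep a small, smooth response under Mish instead of being zeroed as with ReLU
    print(mish(-1.0), relu(-1.0))    # approximately -0.303 and 0.0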
Step S103, moving the camera to align the visual center with the center of the target code of the target assembly.
In one embodiment, as shown in fig. 5, the step S103 of moving the camera to center the vision on the center of the target code of the target assembly includes: acquiring visual information around the mechanical arm by using a visual sensor, and determining pose information; the force protection sensor is used for generating feedback information when the mechanical arm collides; performing obstacle detection, target recognition and pose analysis according to pose information and feedback information, and making a decision; and controlling the mechanical arm to move the camera to a specified position according to the decision information, so that the visual center is aligned to the center of the target code of the target assembly.
Specifically, after the fitting pose is obtained based on the YOLOv3 target detection algorithm, a sensor is used for collecting peripheral information, the sensor part consists of a visual sensor and a force protection sensor, the visual sensor can collect visual information of the periphery of the machine and is used for pose determination and the like, and the force protection sensor can generate feedback information when the mechanical arm collides, so that the loss caused by collision is avoided. The sensor collects the environmental information and then transmits the environmental information to the upper computer, and the analysis decision system uses the data collected by the sensor to perform operations such as obstacle detection, target identification, pose analysis and the like on the upper computer, so as to make relevant decisions and command the next operation of the robot. In the process, the communication system is responsible for constructing a communication interface among the robot, the sensor and the upper computer, uploading surrounding information collected by the vision sensor and the force protection sensor to the upper computer, and then transmitting control information decided by the upper computer to the control system. The control system is mainly used for controlling the mechanical arm, namely, the mechanical arm is required to reach a designated position, and after the decision information of the upper computer is received, the mechanical arm is controlled to move to a corresponding position, so that the camera is ensured to be aligned to the target code center of the target assembly.
Step S104, the target codes are identified, corresponding assembly part information is obtained, and the identification result of the target detection algorithm is checked.
In one embodiment, as shown in fig. 6, the step S104 of identifying the target code, obtaining corresponding assembly information, and verifying the implementation of the identification result of the target detection algorithm includes: identifying information in a target code image captured by a camera by utilizing a composite target code technology; inquiring corresponding assembly part information in a database according to index information of the information storage layer of the target code; and comparing the assembly part information with the identification result of the target detection algorithm, and re-detecting the target assembly part if the identification result is wrong.
As shown in fig. 2, in this embodiment the target code adopts a novel composite target code technology, the specific contents of which include: (1) Auxiliary positioning layer: the bottommost layer is a checkerboard calibration pattern that can be used as a pose calibration plate to assist the pose recognition of the assembly part; (2) Information storage layer: the middle layer is a classical two-dimensional code recording the assembly part information and the accurate positional relationship between the target code and the assembly part; (3) Personnel interaction layer: the uppermost layer is an identification code recording the equipment name, number and responsible person of the assembly part, which facilitates identification and safety management by operators; (4) Center mark point: the center carries a center mark point used for fine positioning.
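Purely as an illustration of how the three layers and the center mark point might be represented in software, the following sketch uses assumed field names that are not part of the patent.

    from dataclasses import dataclass

    @dataclass
    class CompositeTargetCode:
        # (1) auxiliary positioning layer: checkerboard used as a pose calibration plate
        corner_rows: int                 # m, number of longitudinal corner points
        corner_cols: int                 # n, number of transverse corner points
        corner_spacing_mm: float         # actual distance between adjacent corner points
        # (2) information storage layer: classical two-dimensional code holding a database index
        index_code: str                  # e.g. "DF-<fitting class number>-<fitting number>"
        # (3) personnel interaction layer: human-readable identification code
        equipment_name: str
        equipment_number: str
        responsible_person: str
        # (4) center mark point used for fine positioning, in target-code coordinates
        center_mark: tuple = (0.0, 0.0)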
The database in the steps records information such as the index number, the name, the current assembly progress, the historical assembly record, the complete assembly process and the like of the assembly part. After identifying the information contained in the target code, the assembly corresponding to the target code information is queried in the database according to the index information recorded in the information storage layer, so as to determine the accurate position relation between the assembly information and the assembly. In order to facilitate management of information, only an administrator can enter and modify device information in the database. The database system not only can provide data inquiry, but also can simultaneously carry out quality control, knowledge management and data tracking on the assembly. Because the recording capability of a single target code is limited, the target code is only used as an index, and the system only needs to search the assembly corresponding to the information contained in the target code after identifying the information contained in the target code. The fitting number is "DF-fitting class number-fitting number".
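A minimal sketch of the index-based database lookup described above, using a hypothetical SQLite schema; the table name, column names and example index number are assumptions.

    import sqlite3

    def query_fitting(db_path, index_code):
        """Look up the assembly part recorded under the index carried by the target code."""
        conn = sqlite3.connect(db_path)
        try:
            row = conn.execute(
                "SELECT name, assembly_progress, assembly_process "
                "FROM fittings WHERE index_code = ?",
                (index_code,),
            ).fetchone()
        finally:
            conn.close()
        return row    # None if no fitting matches the decoded target code

    # info = query_fitting("assembly.db", "DF-03-0017")   # hypothetical index number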
And step S105, if the identification result is correct, fine positioning the target assembly part according to the target code by utilizing a compound target code technology.
In one embodiment, as shown in fig. 7, the implementation process of fine positioning the target assembly according to the target code in step S105 by using the composite target code technology includes: calculating the distance between the camera and the center point of the target code, and establishing a target code coordinate system according to the target code image captured by the camera; extracting corner points in the target code image by using a Harris corner point detection algorithm; calculating the rotation angle of the target code image in space according to the corner points; determining the accurate position relation between assembly part information and the target assembly part according to assembly part information corresponding to the target code, and determining the accurate positions of the angular points and the rotation angles under a target code coordinate system; and calculating the accurate position of the corner point under a standard coordinate system according to the rotation angle.
Specifically, the specific implementation process of calculating the distance h between the camera and the center point of the target code and establishing the target code coordinate system according to the target code image captured by the camera in the above steps includes:
origin of coordinates: taking a central marking point of the target code image as an origin of a coordinate system;
the x-axis: taking a horizontal axis in the target code image as an x-axis of a coordinate system, and taking the right direction as a positive direction;
y axis: taking a vertical axis in the target code image as a y axis of a coordinate system, and taking the upward direction as a positive direction;
the z axis: the direction facing the camera is the positive direction by taking the axis which passes through the center point of the target code image and is perpendicular to the x axis and the y axis as the z axis of the coordinate system.
For example, (x_{i,j}, y_{i,j}, z_{i,j}) represents the coordinates of a target code image point in this coordinate system; (x'_{i,j}, y'_{i,j}, z'_{i,j}) represents the coordinates of the actual position point of the target code in this coordinate system; and distance represents the actual distance between two adjacent corner points of the target code.
Specifically, the specific implementation process of extracting the corner in the target code image by using the Harris corner detection algorithm in the above steps includes:
(1) Calculating the gradient of the pixel point of the target code image in the horizontal direction and the vertical direction and the product of the gradient and the gradient, and generating a matrix M:
M = [ I_x^2   I_xy
      I_xy    I_y^2 ]
wherein I_x and I_y are the gradients of the image in the horizontal and vertical directions, respectively (I_x = ∂I/∂x, I_y = ∂I/∂y), and I_xy = I_x · I_y.
(2) Performing Gaussian filtering on the target code image to obtain a new matrix M; the discrete two-dimensional zero-mean gaussian function is:
Gauss(x, y) = (1 / (2πσ^2)) · exp( -(x^2 + y^2) / (2σ^2) )
(3) Calculating an interest value of each pixel point corresponding to the original target code image, namely an R value:
R = det(M) - k · (trace(M))^2
where k is an empirical constant; this embodiment takes k = 0.04.
(4) Selecting pixel points corresponding to the maximum interest value in the local range;
(5) Setting a threshold value and selecting corner points (x_{i,j}, y_{i,j}, 0), wherein i is the transverse i-th corner point and j is the longitudinal j-th corner point. Because the image is a two-dimensional plane, the z-axis coordinates of all points on the image are 0. Meanwhile, because the number of corner points in each row of the auxiliary positioning layer is not the same, for a uniform description the number of transverse corner points is denoted n and the number of longitudinal corner points is denoted m, where m and n are not fixed values but change according to the actual number of corner points.
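Steps (1) to (5) above can be prototyped as in the following sketch, which is not part of the original disclosure; the window size, sigma, k and threshold values are assumptions.

    import cv2
    import numpy as np

    def harris_corners(gray, k=0.04, sigma=1.0, thresh_ratio=0.01):
        gray = np.float32(gray)
        # (1) horizontal / vertical gradients and their products
        ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
        # (2) Gaussian filtering of the matrix entries
        sxx = cv2.GaussianBlur(ixx, (5, 5), sigma)
        syy = cv2.GaussianBlur(iyy, (5, 5), sigma)
        sxy = cv2.GaussianBlur(ixy, (5, 5), sigma)
        # (3) interest value R = det(M) - k * trace(M)^2 for every pixel
        r = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
        # (4) keep local maxima of R, then (5) apply a threshold
        local_max = (r == cv2.dilate(r, np.ones((3, 3), np.uint8)))
        corners = np.argwhere(local_max & (r > thresh_ratio * r.max()))
        return corners[:, ::-1]    # (x, y) pixel coordinates of the detected corner points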
Specifically, as shown in fig. 8 to 10, the specific implementation process of calculating the rotation angle of the target code image in space according to the corner point in the above steps includes:
as shown in fig. 8, the rotation angle α in the x-axis direction is calculated using two longitudinally adjacent corner points, and the calculation formula is as follows:
[Equation image BDA0003962968710000126 of the original publication]
in order to make the calculation of the angle more accurate, the average value of the distances between the adjacent angular points and the distance between the first angular point and the last angular point are used as the calculation basis of the alpha angle of a certain row, then the average value of the alpha angles of all rows is taken as the actual alpha value, and the final calculation formula is as follows:
[Equation image BDA0003962968710000131 of the original publication]
as shown in fig. 9, the rotation angle β in the y-axis direction is calculated using two corner points adjacent in the lateral direction, and the calculation formula is as follows:
[Equation image BDA0003962968710000132 of the original publication]
in order to make the calculation of the angle more accurate, the average value of the distances between the adjacent angular points and the distance between the first angular point and the last angular point are used as the calculation basis of a certain column of beta angles, then the average value of all columns of beta angles is taken as the actual beta value, and the final calculation formula is as follows:
[Equation image BDA0003962968710000133 of the original publication]
as shown in fig. 10, the rotation angle θ in the z-axis direction is calculated using two corner points adjacent in the lateral direction, and the calculation formula is as follows:
[Equation image BDA0003962968710000134 of the original publication]
simultaneously, two longitudinally adjacent angular points are used for calculating a rotation angle theta along the z-axis direction, and the calculation formula is as follows:
[Equation image BDA0003962968710000135 of the original publication]
in order to make the calculation of the angle more accurate, the average value of the distances between the longitudinally adjacent angular points and the distance between the first angular point and the last angular point are used as the calculation basis of the angle theta of a certain row, the average value of the distances between the transversely adjacent angular points and the distance between the first angular point and the last angular point are used as the calculation basis of the angle theta of a certain column, and then the average value of the angles of all rows and columns is used as the actual value of the angle theta, and the final calculation formula is as follows:
[Equation image BDA0003962968710000141 of the original publication]
since the spatial movement of the object includes six degrees of freedom movements, i.e., three movements of movement along the x-axis, movement along the y-axis, and movement along the z-axis, and three rotational movements of rotation along the x-axis, rotation along the y-axis, and rotation along the z-axis, since the camera has been aligned with the center point of the object code, there is no movement in the x-axis direction and the y-axis direction of the object code image, while the movement in the z-axis direction is mainly used to calculate the center point distance h of the camera from the object code, and therefore only the rotational angles of the object code image in the x-axis, y-axis, and z-axis directions need to be calculated.
Specifically, the specific calculation process for calculating the accurate position of the corner point under the standard coordinate system according to the rotation angle in the above steps includes the following formula:
[Equation image BDA0003962968710000142 of the original publication]
wherein the coordinates of the precise position are (x'_{i,j}, y'_{i,j}, z'_{i,j}); the coordinates of the corner point are (x_{i,j}, y_{i,j}, 0), i being the transverse i-th corner point and j the longitudinal j-th corner point; alpha represents the rotation angle of the longitudinally adjacent corner points of the target code image along the x-axis direction, beta represents the rotation angle of the transversely adjacent corner points of the target code image along the y-axis direction, and theta represents the average value of the rotation angles of the adjacent corner points in all rows and columns of the target code image along the z-axis direction.
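Because the exact transformation formula is given only as an equation image in the original publication, the following sketch merely assumes the standard composition of rotations about the x-, y- and z-axes applied to the corner point (x, y, 0); it is illustrative only and may differ from the patented formula.

    import numpy as np

    def corner_to_standard(x, y, alpha, beta, theta):
        # assumed composition Rz(theta) @ Ry(beta) @ Rx(alpha); not taken from the patent
        rx = np.array([[1, 0, 0],
                       [0, np.cos(alpha), -np.sin(alpha)],
                       [0, np.sin(alpha),  np.cos(alpha)]])
        ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                       [0, 1, 0],
                       [-np.sin(beta), 0, np.cos(beta)]])
        rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
        return rz @ ry @ rx @ np.array([x, y, 0.0])    # (x'_{i,j}, y'_{i,j}, z'_{i,j})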
The sequence numbers of the steps related to the method of the present invention do not mean the sequence of the execution sequence of the method, and the execution sequence of the steps should be determined by the functions and the internal logic, and should not limit the implementation process of the embodiment of the present invention in any way.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather to enable any modification, equivalent replacement, improvement or the like to be made within the spirit and principles of the invention.

Claims (10)

1. A spacecraft assembly part identification positioning method based on target detection and composite target codes comprises the following steps:
calibrating a camera by using a Zhang Zhengyou camera calibration method, and correcting the captured image;
identifying and coarsely positioning a target assembly in the image by utilizing an improved YOLOv3 target detection algorithm;
moving a camera to align a visual center to a center of a target code of the target assembly;
identifying the target code, acquiring corresponding assembly part information, and checking the identification result of a target detection algorithm;
and if the identification result is correct, fine positioning the target assembly part according to the target code by utilizing a composite target code technology.
2. The method of claim 1, wherein calibrating the camera using a Zhang Zhengyou camera calibration method, correcting the acquired image, comprises:
printing and fixing a plane calibration plate formed by checkerboards;
the plane calibration plates are placed at different positions, and the images of different poses are acquired by using cameras to shoot;
and detecting the characteristics of the plane calibration plate to obtain the internal and external parameters and distortion parameters of the camera, and carrying out optimization estimation on the imaging precision.
3. The method of claim 1, wherein identifying and coarsely locating the target assembly in the image using the modified YOLOv3 target detection algorithm comprises:
constructing a data set of assembly part images, and dividing the data set into a training set and a testing set;
constructing an improved YOLOv3 target detection network;
training the improved YOLOv3 target detection network by using a training set, and iterating until a loss function is no longer reduced to obtain a weight file;
and testing the images in the test set by using the weight file, identifying a target assembly, and roughly estimating the pose of the target assembly under a standard coordinate system.
4. A method according to claim 3, wherein constructing a dataset of fitting images comprises:
acquiring assembly part images containing different workpiece types and different shooting distances;
preprocessing the assembly part image through data format conversion and equal proportion scaling to form a basic data set;
expanding image data of the images in the basic data set through image rotation operation, and carrying out data enhancement by combining an adjacent interpolation method to obtain an enhanced data set;
and marking the images in the enhancement data set, and recording the starting point coordinates, the length, the width and the classification information of the images to obtain the marked enhancement data set.
5. A method according to claim 3, wherein constructing an improved YOLOv3 target detection network comprises: the GIOU is used instead of the intersection over union (IOU) as the indicator of the target detection effect, and the Mish activation function is used instead of the ReLU activation function.
6. The method of claim 1, wherein moving the camera to center the visual center to the center of the target code of the target assembly comprises:
acquiring visual information around the mechanical arm by using a visual sensor, and determining pose information;
the force protection sensor is used for generating feedback information when the mechanical arm collides;
performing obstacle detection, target recognition and pose analysis according to pose information and feedback information, and making a decision;
and controlling the mechanical arm to move the camera to a specified position according to the decision information, so that the visual center is aligned to the center of the target code of the target assembly.
7. The method of claim 1, wherein identifying the target code, obtaining corresponding fitting information, and verifying an identification result of target detection, comprises:
identifying information in a target code image captured by a camera by utilizing a composite target code technology;
inquiring corresponding assembly part information in a database according to index information of the information storage layer of the target code;
and comparing the assembly part information with the identification result of the target detection algorithm, and re-detecting the target assembly part if the identification result is wrong.
8. The method of claim 1, wherein fine positioning the target assembly according to the target code using a composite target code technique comprises:
calculating the distance between the camera and the center point of the target code, and establishing a target code coordinate system according to the target code image captured by the camera;
extracting corner points in the target code image by using a Harris corner point detection algorithm;
calculating the rotation angle of the target code image in space according to the corner points;
determining the accurate position relation between assembly part information and the target assembly part according to assembly part information corresponding to the target code, and determining the accurate positions of the angular points and the rotation angles under a target code coordinate system;
and calculating the accurate position of the corner point under a standard coordinate system according to the rotation angle.
9. The method of claim 8, wherein extracting corner points in the object code image using Harris corner detection algorithm comprises:
calculating the gradient of the pixel point of the target code image in the horizontal direction and the vertical direction and the product of the gradient and the gradient, and generating a matrix:
performing Gaussian filtering on the target code image to obtain a new matrix;
calculating an interest value of each corresponding pixel point on the original target code image;
selecting a pixel point corresponding to a local maximum interest value;
setting a threshold value and selecting corner points (x_{i,j}, y_{i,j}, 0), wherein i is the transverse i-th corner point and j is the longitudinal j-th corner point.
10. The method according to claim 8, wherein the precise position of the corner point in the standard coordinate system is calculated according to the rotation angle, and a specific calculation formula is as follows:
[Equation image FDA0003962968700000031 of the original publication]
wherein the coordinates of the precise position are (x'_{i,j}, y'_{i,j}, z'_{i,j}); the coordinates of the corner point are (x_{i,j}, y_{i,j}, 0), i being the transverse i-th corner point and j the longitudinal j-th corner point; alpha represents the rotation angle of the longitudinally adjacent corner points of the target code image along the x-axis direction, beta represents the rotation angle of the transversely adjacent corner points of the target code image along the y-axis direction, and theta represents the average value of the rotation angles of the adjacent corner points in all rows and columns of the target code image along the z-axis direction.
CN202211489693.6A 2022-11-25 2022-11-25 Spacecraft assembly part identification positioning method based on target detection and composite target code Pending CN116091401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489693.6A CN116091401A (en) 2022-11-25 2022-11-25 Spacecraft assembly part identification positioning method based on target detection and composite target code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211489693.6A CN116091401A (en) 2022-11-25 2022-11-25 Spacecraft assembly part identification positioning method based on target detection and composite target code

Publications (1)

Publication Number Publication Date
CN116091401A true CN116091401A (en) 2023-05-09

Family

ID=86199963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211489693.6A Pending CN116091401A (en) 2022-11-25 2022-11-25 Spacecraft assembly part identification positioning method based on target detection and composite target code

Country Status (1)

Country Link
CN (1) CN116091401A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117161719A (en) * 2023-11-03 2023-12-05 佛山科学技术学院 Visual and tactile fusion type pre-assembled part gesture recognition method and system
CN117161719B (en) * 2023-11-03 2024-01-19 佛山科学技术学院 Visual and tactile fusion type pre-assembled part gesture recognition method and system

Similar Documents

Publication Publication Date Title
CN110689579B (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
EP1434169A2 (en) Calibration apparatus, calibration method, program for calibration, and calibration jig
CN111241988B (en) Method for detecting and identifying moving target in large scene by combining positioning information
CN106651942A (en) Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN116091401A (en) Spacecraft assembly part identification positioning method based on target detection and composite target code
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN109472778B (en) Appearance detection method for towering structure based on unmanned aerial vehicle
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
Wohlfeil et al. Automatic camera system calibration with a chessboard enabling full image coverage
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
Ma et al. Precision pose measurement of an object with flange based on shadow distribution
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
JPH06243236A (en) Setting device for coordinate system correction parameter of visual recognizer
CN110853103A (en) Data set manufacturing method for deep learning attitude estimation
CN113592962B (en) Batch silicon wafer identification recognition method based on machine vision
CN112365600B (en) Three-dimensional object detection method
CN115358529A (en) Construction safety assessment method based on computer vision and fuzzy reasoning
CN111189396B (en) Displacement detection method of incremental absolute grating ruler based on neural network
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance
CN112991372A (en) 2D-3D camera external parameter calibration method based on polygon matching
CN113256726A (en) Online calibration and inspection method for sensing system of mobile device and mobile device
Wang et al. A binocular vision method for precise hole recognition in satellite assembly systems
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
CN113392823B (en) Oil level meter reading method based on deep network regression

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination