CN115797099A - Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin - Google Patents

Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin

Info

Publication number
CN115797099A
Authority
CN
China
Prior art keywords
assembly
pose
model
pose estimation
information
Prior art date
Legal status: Pending
Application number
CN202211529490.5A
Other languages
Chinese (zh)
Inventor
杨军
薛政杰
连旭升
张海龙
周振秋
何强
陈儒琛
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202211529490.5A
Publication of CN115797099A

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An augmented reality auxiliary assembly method for the visual field blind area of an aircraft equipment cabin comprises self-positioning of a depth camera based on point cloud registration, assembly object pose estimation based on deep learning, assembly object pose estimation based on the edge region, self-adaptive switching of the pose estimation methods, and auxiliary assembly guidance visualization. The invention uses the self-adaptive switching method to fuse the deep-learning-based pose estimation method with a traditional image-processing pose estimation method so as to balance the robustness, real-time performance and accuracy of pose estimation. Meanwhile, based on the pose estimation result, the visual field blind area in the assembly process can be visualized with augmented reality technology, the assembly result can be checked, and the user can quickly locate the assembly object and complete the assembly task, which improves the consistency, real-time performance and sense of immersion of aircraft assembly operation guidance, and further improves the efficiency and quality of manual assembly.

Description

Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin
Technical Field
The invention relates to the technical field of airplane assembly, in particular to an augmented reality auxiliary assembly method for a visual field blind area of an airplane equipment cabin.
Background
Advanced aviation equipment such as aircraft is characterized by complex product structures, large part counts and numerous coordination relationships, so the assembly tasks are difficult, the operations are complex, and the operation cycle is long. Because of the flexibility and adaptability required by the assembly process, a great deal of assembly work is still dominated by manual assembly. These characteristics of aircraft assembly require operators to be familiar with and memorize complicated operation instructions and to browse a large number of process instruction sheets and production operation documents; the resulting process guidance is weak, time-consuming and labor-intensive, and drawing-reading errors occur easily, which seriously restricts assembly efficiency and quality. The problem is especially prominent in assembly operations in the visual field blind area of the aircraft equipment cabin.
To help assembly operators of advanced aviation equipment improve assembly efficiency and quality, methods such as three-dimensional assembly instructions, guidance information visualization and augmented reality technology are gradually being applied at aircraft assembly sites, providing operators with intuitive and interactive descriptions of the operation process. For example, Chinese patent CN202110337423.2 identifies parts through the outline information of the part model and marks them with an envelope body to guide assembly; its drawback is that parts located only through position and outline information cannot be positioned accurately in a slightly complex real scene, which degrades the virtual-real fusion effect, and judging the assembly state only from position and outline information cannot be used in the visual field blind area of an aircraft equipment cabin. To realize the augmented reality visualization effect of virtual-real fusion, the pose information of the assembly object is needed so that virtual information can be overlaid on the real object. For example, Chinese patent CN202110892450.6, entitled "An assembly guidance method and system based on HoloLens depth data", obtains the part pose by combining point cloud data with an improved VoteNet network and then refining it with ICP (iterative closest point) registration to guide assembly; its drawbacks are that the VoteNet-based method runs slowly and the pose update is unstable. Therefore, an augmented reality auxiliary assembly method for the visual field blind area of the aircraft equipment cabin is needed to solve the problems that, in this scenario, the pose estimation of the assembly object is inaccurate and not real-time, and automatic detection of the assembly result and guidance of the assembly are difficult.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an augmented reality auxiliary assembly method for the visual field blind area of an aircraft equipment cabin, which uses a self-adaptive switching method to fuse a deep-learning-based pose estimation method with a traditional image-processing pose estimation method so as to balance the robustness, real-time performance and accuracy of pose estimation; meanwhile, based on the pose estimation result, the visual field blind area in the assembly process can be visualized with augmented reality technology, the assembly result can be checked, and the user can quickly locate the assembly object and complete the assembly task, thereby improving the consistency, real-time performance and sense of immersion of aircraft assembly operation guidance and further improving the efficiency and quality of manual assembly.
In order to achieve the purpose, the invention adopts the technical scheme that:
an augmented reality auxiliary assembly method for a visual field blind area of an aircraft equipment cabin comprises the following steps:
step 1), self-positioning of a depth camera based on point cloud registration: firstly, installing a depth camera in a designated area in an assembly scene; secondly, registering the point cloud data of the scene model with the point cloud data collected by the depth camera; finally, converting the pose information of the depth camera in the assembly scene according to the point cloud registration result;
step 2), estimating the pose of an assembly object based on deep learning (the assembly object comprises avionics equipment, a pipeline joint and a bolt): firstly, collecting an assembly object pose data set, and enriching the data set by a data enhancement method of color space enhancement and six-degree-of-freedom transformation enhancement; secondly, building a deep learning network, and using a model pre-trained by a public data set as an initial model; thirdly, training and testing the initial model in the pose data set of the assembly object, and fine-tuning to obtain a migration model; finally, outputting the pose of the assembly object to the collected RGB image through a deep learning network by using a migration model in an online mode;
step 3), estimating the pose of the assembly object based on the edge region: firstly, rendering a space viewpoint model of the assembly object CAD model, and obtaining its edge contour points and normal vectors; secondly, computing online, in the acquired RGBD image, the segmentation probabilities that the corresponding line segments along the normal-vector directions of the projected edge contour points belong to the foreground or the background, thereby completing the uncertainty modeling; finally, optimizing the uncertainty model, and calculating the contour and assembly object pose that best fit the image segmentation;
step 4), self-adaptive switching of the pose estimation method: performing online pose estimation on the assembly object by using the method in the step 2) to obtain an initial pose, and performing subsequent pose estimation by using the method in the step 3) based on the initial pose in a loop iteration manner; judging the pose estimation stability of the current assembly object by using an evaluation function, and calling the method in the step 2) once when the calculation value of the evaluation function is greater than a set threshold value so as to correct and stabilize the pose estimation result;
step 5), auxiliary assembly guide visualization: firstly, constructing an assembly process database to realize the access of assembly information including an assembly part three-dimensional model, an assembly tool, an assembly step, a process index and assembly guide information; secondly, according to the pose information of the depth camera in the assembly scene obtained in the step 1), combining the pose estimation result of the assembly object obtained in the step 4), obtaining the pose information of the assembly object in the assembly scene, and rendering assembly information such as a three-dimensional model of the assembly part on display equipment; and finally, judging whether the assembling step is finished or not according to the pose information of the assembling object, and further guiding the next assembling step until the assembling task is finished.
The invention has the beneficial effects that:
(1) The visual field blind area in the assembly process is visualized by utilizing the augmented reality technology, so that a user can quickly position an assembly object and complete an assembly task;
(2) The scheme provides a self-adaptive switching method that switches between the edge-region-based and the deep-learning-based assembly object pose estimation methods, balancing the robustness, real-time performance and accuracy of assembly object positioning and improving the universality and portability of the method;
(3) The invention can automatically detect the assembly result, and uses various auxiliary assembly guide visualization methods to ensure the accuracy and diversity of the assembly guide information and improve the assembly efficiency and quality.
Drawings
FIG. 1 is an overall flow chart of the method of the present invention.
FIG. 2 is a flowchart of an edge region-based pose estimation method for an assembly object according to the present invention.
FIG. 3 is an input/output diagram of the pose estimation method of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the following embodiments and the accompanying drawings.
As shown in fig. 1, an augmented reality auxiliary assembly method for a visual field blind area of an aircraft equipment cabin comprises the following steps:
step 1), self-positioning of a depth camera based on point cloud registration: firstly, installing a depth camera in a designated area in an assembly scene; secondly, registering the point cloud data of the scene model with the point cloud data collected by the depth camera; finally, converting the pose information of the depth camera in the assembly scene according to the point cloud registration result; the method specifically comprises the following steps:
step 1.1), the depth camera is installed in a designated area of the assembly scene so that it captures RGBD images covering as much of the operator's visual blind area as possible, and the center position and orientation of this area are taken as the initial pose of the depth camera; the point cloud data of the scene model and the point cloud data acquired by the depth camera are coarsely registered with a PCA algorithm, and fine registration is then completed with the GICP algorithm; the pose matrix M_C of the depth camera in the assembly scene is

M_C = M_f · M_C'        (1)

where M_f is the transformation matrix obtained from the fine registration and M_C' is the initial pose matrix of the depth camera;
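As an illustration of this self-positioning step, the sketch below performs the PCA coarse alignment and a fine registration with Open3D; point-to-plane ICP is used as a stand-in where a GICP implementation is not available, and the file names, correspondence distance and initial placement are placeholder assumptions rather than values taken from the patent.

```python
# Sketch of step 1.1) (assumptions: Open3D is available; "scene_model.ply" and
# "camera_scan.ply" are placeholder file names; point-to-plane ICP stands in for
# GICP; the 0.05 m correspondence distance is illustrative).
import numpy as np
import open3d as o3d

def pca_frame(points):
    """Centroid and principal axes (as columns) of a point set."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt.T

def coarse_align_pca(source, target):
    """Coarse registration: map the source principal frame onto the target frame.
    Note: PCA axis signs are ambiguous; several sign hypotheses may be tested."""
    cs, As = pca_frame(np.asarray(source.points))
    ct, At = pca_frame(np.asarray(target.points))
    if np.linalg.det(At @ As.T) < 0:       # keep a proper rotation (det = +1)
        As[:, 2] *= -1.0
    R = At @ As.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = ct - R @ cs
    return T

scene_model = o3d.io.read_point_cloud("scene_model.ply")   # scene model point cloud
camera_scan = o3d.io.read_point_cloud("camera_scan.ply")   # depth-camera point cloud
scene_model.estimate_normals()                             # needed for point-to-plane ICP

T_coarse = coarse_align_pca(camera_scan, scene_model)      # PCA coarse registration
fine = o3d.pipelines.registration.registration_icp(        # fine registration
    camera_scan, scene_model, 0.05, T_coarse,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

M_C = fine.transformation    # camera pose in the scene; in the notation above this
                             # corresponds to M_f composed with the initial placement M_C'
```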
step 2), estimating the pose of an assembly object based on deep learning (the assembly object comprises avionics equipment, a pipeline joint and a bolt): firstly, collecting an assembly object pose data set, and enriching the data set by a data enhancement method of color space enhancement and six-degree-of-freedom transformation enhancement; secondly, building a deep learning network, and using a model pre-trained by a public data set as an initial model; thirdly, training and testing the initial model in the pose data set of the assembly object, and fine-tuning to obtain a migration model; finally, outputting the pose of the assembly object to the collected RGB image through a deep learning network by using a migration model in an online mode; the method specifically comprises the following steps:
step 2.1), an assembly object pose data set for the aircraft equipment cabin scene is constructed with the open-source tool ObjectDatasetTools: RGBD images of the assembly objects under different angles, distances and illumination conditions are collected and labeled, the labels comprising the assembly object name and the pose information; the pose of the assembly object in the first RGBD frame is registered and labeled manually, while the poses of the assembly object in the other frames are computed from the pose of the two-dimensional code placed beside the assembly object, so that the pose labels are calculated automatically; on this basis, the assembly object pose data set is expanded with color-space and six-degree-of-freedom transformation data augmentation;
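The following sketch illustrates the two augmentation families named above under some simplifying assumptions: poses are stored as camera-frame rotation/translation pairs, the six-degree-of-freedom augmentation is realized in its in-plane form (rotation of the image about the principal point with a matching pose-label update, camera axes x right, y down, z forward, fx approximately equal to fy), and all ranges and function names are illustrative rather than taken from the patent.

```python
# Sketch of the two augmentation families (assumptions: RGB images are uint8
# numpy arrays; poses are camera-frame (R, t) pairs; cx, cy is the principal point).
import numpy as np
import cv2

def augment_color(rgb, rng):
    """Color-space augmentation: random brightness and contrast jitter."""
    gain, bias = rng.uniform(0.8, 1.2), rng.uniform(-20, 20)   # illustrative ranges
    return np.clip(rgb.astype(np.float32) * gain + bias, 0, 255).astype(np.uint8)

def augment_inplane(rgb, R, t, cx, cy, rng, max_deg=30.0):
    """Six-DoF augmentation in its in-plane form: rotate the image about the
    principal point by phi and update the pose label consistently with
    R' = Rz(phi) @ R and t' = Rz(phi) @ t."""
    phi_deg = rng.uniform(-max_deg, max_deg)
    phi = np.deg2rad(phi_deg)
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    # With y pointing down in pixel coordinates, a scene rotation Rz(+phi)
    # corresponds to cv2.getRotationMatrix2D with angle -phi_deg.
    M = cv2.getRotationMatrix2D((cx, cy), -phi_deg, 1.0)
    rgb_rot = cv2.warpAffine(rgb, M, (rgb.shape[1], rgb.shape[0]))
    return rgb_rot, Rz @ R, Rz @ t

rng = np.random.default_rng(0)
# usage: rgb_aug, R_aug, t_aug = augment_inplane(augment_color(rgb, rng), R, t, cx, cy, rng)
```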
step 2.2), an EfficientPose deep learning network is built, an initial model is pre-trained on the basis of a Linemod public data set, training and testing are carried out on the initial model in an assembly object pose data set, fine adjustment of each weight parameter of the initial model is completed, and a migration model is obtained; the migration model is used for outputting predicted assembly object pose information to the online collected RGB images in the EfficientPose deep learning network;
step 3), pose estimation of the assembly object based on the edge region, as shown in fig. 2: firstly, a space viewpoint model of the assembly object CAD model is rendered, and its edge contour points and normal vectors are obtained; secondly, in the online-acquired RGBD image, the segmentation probabilities that the corresponding line segments along the normal-vector directions of the projected edge contour points belong to the foreground or the background are computed, completing the uncertainty modeling; finally, the uncertainty model is optimized, and the contour and assembly object pose that best fit the image segmentation are calculated; the method specifically comprises the following steps:
step 3.1), to obtain a dense and uniform space viewpoint model, each triangular face of a regular icosahedron is divided into four sub-triangles; after 4 iterations each face yields 256 regular triangles, 5120 in total; the sampling viewpoints of the virtual camera are located at the triangle vertices, at a distance of 1 m from the center of the icosahedron (the distance depends on the size of the assembly object), with the optical axis pointing at the center; the center of the assembly object CAD model is set as the center of the icosahedron, the space viewpoint model of the CAD model is rendered at the 2562 sampling viewpoints, and 200 edge contour points of the CAD model and their normal vectors are collected in the space viewpoint model;
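A minimal sketch of this viewpoint sampling is given below; it assumes only numpy and reproduces the counts stated above (5120 triangles and 2562 vertices after four subdivisions).

```python
# Sketch of the step 3.1) viewpoint sampling: subdivide a regular icosahedron
# four times (20 * 4^4 = 5120 triangles, 2562 vertices) and place a virtual
# camera at every vertex, at the chosen radius, looking at the model center.
import numpy as np

def icosahedron():
    """Vertices (unit length) and faces of a regular icosahedron."""
    p = (1.0 + np.sqrt(5.0)) / 2.0
    v = np.array([[-1, p, 0], [1, p, 0], [-1, -p, 0], [1, -p, 0],
                  [0, -1, p], [0, 1, p], [0, -1, -p], [0, 1, -p],
                  [p, 0, -1], [p, 0, 1], [-p, 0, -1], [-p, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = np.array([[0, 11, 5], [0, 5, 1], [0, 1, 7], [0, 7, 10], [0, 10, 11],
                  [1, 5, 9], [5, 11, 4], [11, 10, 2], [10, 7, 6], [7, 1, 8],
                  [3, 9, 4], [3, 4, 2], [3, 2, 6], [3, 6, 8], [3, 8, 9],
                  [4, 9, 5], [2, 4, 11], [6, 2, 10], [8, 6, 7], [9, 8, 1]])
    return v, f

def subdivide(v, f):
    """Split every triangle into four, reusing shared edge midpoints."""
    verts = [tuple(p) for p in v]
    cache, faces = {}, []
    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            m = (np.asarray(verts[a]) + np.asarray(verts[b])) / 2.0
            m /= np.linalg.norm(m)               # project back onto the unit sphere
            cache[key] = len(verts)
            verts.append(tuple(m))
        return cache[key]
    for a, b, c in f:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        faces += [[a, ab, ca], [b, bc, ab], [c, ca, bc], [ab, bc, ca]]
    return np.array(verts), np.array(faces)

v, f = icosahedron()
for _ in range(4):
    v, f = subdivide(v, f)
print(len(v), len(f))          # 2562 vertices, 5120 triangles

radius = 1.0                   # 1 m here; depends on the size of the assembly object
viewpoints = radius * v        # camera centers; each optical axis points to -viewpoint
```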
step 3.2), the silhouette obtained by projecting the CAD model under the previous frame's pose estimate onto the imaging plane is selected as a mask; each CAD-model edge contour point sampled on the mask contour is taken as a midpoint, and a line segment, called the corresponding line, is formed by extending the same distance to both sides along the normal-vector direction (the distance is 5 to 30 and decreases as the number of iterations increases); corresponding lines whose edge contour point has a CAD-model depth value differing by more than 3 cm from the actual depth value at that point (the minimum depth value of the 8 pixels around the edge contour point in the RGBD image) are removed, that position being considered occluded; sampling points whose actual depth value inside the mask deviates from the CAD-model depth value by more than 5 cm in absolute value are also removed, those positions being considered occluded or background; the corresponding lines are projected onto the imaging plane, and the probability that each pixel on a projected corresponding line belongs to the foreground or the background is calculated: based on Bayesian theory and the mask contour of the previous frame, the posterior probability that each pixel lies inside the current-frame mask contour and the posterior probability that its depth information fits the current-frame mask are computed; the joint probability of the two posteriors is maximized with a Newton method and Tikhonov regularization, and the predicted value θ̂ of the pose change vector θ is obtained by iterative optimization and used to update the pose estimation result:

θ̂ = (H + diag(λ_r·I_3, λ_t·I_3))⁻¹ · g        (2)

where g is the gradient vector and H the 6×6 Hessian matrix, each consisting of two addends, one accumulated over the corresponding lines and one over the sampling points; λ_r and λ_t are the rotation and translation regularization parameters, corresponding to prior probabilities and controlling the confidence in the previous frame's pose estimate, thereby stabilizing the optimization and preventing it from drifting in the wrong direction (λ_r = 1000, λ_t = 20000); I_3 is the identity matrix. The quantities entering the two addends are: n, the number of corresponding lines remaining after removal; d_i, the distance from the edge contour point on the i-th corresponding line to the midpoint of that corresponding line in the previous frame; l_i, the i-th corresponding line, and L_i, its domain; the i-th CAD-model point expressed in the camera coordinate system; n', the number of sampling points remaining after removal; the CAD-model point, its normal vector and the nearest depth-camera sampling point expressed in the model coordinate system; d_zi, the depth value of the nearest sampling point; and σ_d, the standard deviation of the depth information (the farther away a sampling point is, the worse its quality, so σ_d controls the weight of the sampling points; σ_d = 20).

The associated derivatives are computed from: μ_i and σ_i, the mean and standard deviation of the normal distribution obeyed by the posterior probability distribution of the corresponding line; the rotation matrix from the camera coordinate system to the model coordinate system; the antisymmetric matrix of the CAD-model point in the model coordinate system, e.g. for a point X = [x y z]^T,

[X]_× = [[0, −z, y], [z, 0, −x], [−y, x, 0]];

|n_xi| and |n_yi|, the lengths of the projected corresponding line along the horizontal and vertical axes of the image coordinate system; and the camera focal-length parameters f_x and f_y;
step 3.3), the pose change vector predicted value θ̂ obtained after the iterative optimization is used to update the pose estimation result: the transformation matrix T_CM from the model coordinate system to the camera coordinate system is composed with the rigid transformation whose rotation part and translation part are built from the rotational component θ̂_r and the translational component θ̂_t of θ̂;
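The sketch below shows how the regularized update of steps 3.2) and 3.3) can be assembled once g and H have been accumulated; the assumptions that the first three components of θ are rotational and the last three translational, and that the increment left-multiplies the previous model-to-camera transform, are illustrative choices rather than details fixed by the text.

```python
# Sketch of one regularized Newton pose update (assumptions: g and H have been
# accumulated from the corresponding lines and the depth sampling points; the
# first three components of theta are rotational, the last three translational;
# the increment left-multiplies the previous model-to-camera transform).
import numpy as np
import cv2

def update_pose(T_cm, g, H, lambda_r=1000.0, lambda_t=20000.0):
    """One Tikhonov-regularized Newton step on the 6-DoF pose change vector."""
    D = np.diag([lambda_r] * 3 + [lambda_t] * 3)   # diag(lambda_r*I3, lambda_t*I3)
    theta = np.linalg.solve(H + D, g)              # theta_hat = (H + D)^-1 g
    R_delta, _ = cv2.Rodrigues(theta[:3])          # rotation matrix of the rotational part
    T_delta = np.eye(4)
    T_delta[:3, :3] = R_delta
    T_delta[:3, 3] = theta[3:]                     # translational part
    return T_delta @ T_cm                          # updated model-to-camera transform
```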
step 4), self-adaptive switching of the pose estimation method: performing online pose estimation on the assembly object by using the method in the step 2) to obtain an initial pose, and performing subsequent pose estimation by using the method in the step 3) based on the initial pose in a loop iteration manner; judging the pose estimation stability of the current assembly object by using an evaluation function, and calling the method in the step 2) once when the calculation value of the evaluation function is greater than a set threshold value so as to correct and stabilize the pose estimation result; the method specifically comprises the following steps:
step 4.1), in the starting stage, carrying out online pose estimation on the assembly object in the scene of the aircraft equipment cabin according to the method in the step 2) to obtain an initial pose, taking the initial pose as a first frame, and circularly iterating according to the method in the step 3) to finish pose estimation of the assembly object of the subsequent image frame;
step 4.2), the Frobenius norm (F norm) ‖H‖_F of the Hessian matrix H from step 3.2), combined with a noise parameter α_n, is used as the evaluation function F;
the pose estimation result is evaluated in each loop iteration; when F is larger than a set threshold (the threshold for the avionics equipment is 4, and α_n is set to 2.4, 2.8 and 2.9 for the avionics devices A, B and C ordered from small to large), the reliability of the current pose estimation result is considered low and the method of step 2) is invoked once, correcting the erroneous pose estimation result and improving the robustness of the pose estimation;
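A compact sketch of this switching logic follows; deep_pose() and edge_refine() are hypothetical wrappers for the step 2) network inference and for one step 3) edge-region iteration, and the way ‖H‖_F and α_n are combined inside evaluation_F is a simple placeholder for the method's own evaluation formula.

```python
# Sketch of the adaptive switching of step 4 (deep_pose(), edge_refine() and all
# numeric values are hypothetical placeholders; edge_refine() is assumed to
# return the refined pose together with the Hessian H of step 3.2)).
import numpy as np

def evaluation_F(H, alpha_n):
    """Evaluation value built from ||H||_F and the noise parameter alpha_n;
    a simple sum is used here as a placeholder for the method's own combination."""
    return np.linalg.norm(H, ord="fro") + alpha_n

def track(frames, deep_pose, edge_refine, alpha_n=2.4, threshold=4.0):
    """Deep-learning initialization, then edge-region tracking with one-off
    corrections whenever the evaluation function exceeds the threshold."""
    pose = deep_pose(frames[0])                  # step 2): initial pose
    for frame in frames[1:]:
        pose, H = edge_refine(frame, pose)       # step 3): fast frame-to-frame update
        if evaluation_F(H, alpha_n) > threshold: # tracking judged unreliable
            pose = deep_pose(frame)              # step 2) called once to correct
    return pose
```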
step 5), auxiliary assembly guide visualization: firstly, constructing an assembly process database to realize the access of assembly information including an assembly part three-dimensional model, an assembly tool, an assembly step, a process index and assembly guide information; secondly, according to the position and pose information of the depth camera in the assembly scene obtained in the step 1), combining the position and pose estimation result of the assembly object obtained in the step 4), obtaining the position and pose information of the assembly object in the assembly scene, and rendering assembly information such as three-dimensional models of assembly parts on display equipment; finally, judging whether the assembling step is finished or not according to the pose information of the assembling object, and further guiding the next assembling until the assembling task is finished; the method specifically comprises the following steps:
step 5.1), constructing an assembly process database, and realizing access of assembly information including an assembly part three-dimensional model, an assembly tool, assembly steps, process indexes and assembly guide information;
step 5.2), the pose of the depth camera in the aircraft equipment cabin scene is located with the method of step 1), and the pose of the user's camera-equipped display device (AR glasses such as HoloLens, or a flat-panel device such as an iPad) is located in the scene based on a two-dimensional code and a SLAM algorithm; combined with the pose estimation result of the assembly object in the camera coordinate system computed in step 4), the pose of the assembly object in the display-device coordinate system can be obtained, and the assembly information is displayed at the corresponding position, specifically: (1) the assembly objects involved in the assembly step being executed are rendered semi-transparently at their corresponding poses and distinguished by color, with the mating features (holes, shafts, interfaces and joints) highlighted in orange; (2) a three-dimensional assembly simulation animation (covering the assembly process, tools and actions) is played at the corresponding position of the assembly object, assisted by guidance cues such as arrows, curves, circles, exclamation marks and voice prompts; (3) process indexes that are convenient to consult (such as torque, number of winding turns and measurement data) are displayed near the assembly object, while other process indexes and the textual description of the assembly step are displayed in the upper right corner of the device view;
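A minimal sketch of the coordinate chain used above, assuming every pose is a 4×4 homogeneous matrix; the argument names are illustrative.

```python
# Sketch of the step 5.2) coordinate chain (assumption: all poses are 4x4
# homogeneous matrices; the argument names are illustrative).
import numpy as np

def object_in_display(M_scene_display, M_C, T_camera_object):
    """Pose of the assembly object in the display-device coordinate system.
    M_scene_display: display-device pose in the scene (two-dimensional code + SLAM);
    M_C: depth-camera pose in the scene from step 1);
    T_camera_object: object pose in the depth-camera frame from step 4)."""
    return np.linalg.inv(M_scene_display) @ M_C @ T_camera_object
```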
step 5.3), in each assembly step, whether assembly is finished and whether the assembly result is correct are judged from the pose estimation result of the assembly object, and the next assembly step is guided automatically, specifically: (1) if the pose estimation result of the assembly object changes by no more than 2 cm / 20 degrees within one second, the assembly object is considered static, i.e. the assembly step is finished; (2) if the pose estimation result deviates from the correct assembly pose of the assembly object in the scene by no more than 3 cm / 10 degrees, the assembly result is considered correct; (3) if assembly is not finished, guidance continues; if assembly is finished but the result is wrong, the wrongly assembled part and the contour of the assembly object are highlighted in red and the correct assembly position is indicated in yellow while guidance continues; if the assembly step is finished and the result is correct, the next assembly step is entered automatically until the assembly task is completed, and an assembly process log is recorded and archived as evidence.
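The thresholds above translate directly into a small check, sketched here under the assumption that poses are 4×4 matrices with translations in meters; the helper names are illustrative.

```python
# Sketch of the step 5.3) checks (assumption: poses are 4x4 matrices with
# translations in meters; helper names are illustrative).
import numpy as np

def rotation_angle_deg(Ra, Rb):
    """Angle in degrees of the relative rotation between two rotation matrices."""
    cos_a = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def pose_delta(Ta, Tb):
    """Translation (m) and rotation (deg) difference between two poses."""
    return (np.linalg.norm(Ta[:3, 3] - Tb[:3, 3]),
            rotation_angle_deg(Ta[:3, :3], Tb[:3, :3]))

def step_finished(pose_now, pose_one_second_ago):
    """Static (step finished) if the pose moved less than 2 cm and 20 degrees."""
    dt, dr = pose_delta(pose_now, pose_one_second_ago)
    return dt < 0.02 and dr < 20.0

def result_correct(pose_now, pose_nominal):
    """Correct if within 3 cm and 10 degrees of the nominal assembly pose."""
    dt, dr = pose_delta(pose_now, pose_nominal)
    return dt < 0.03 and dr < 10.0
```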
In summary, the invention discloses an augmented reality auxiliary assembly method for a visual field blind area of an aircraft equipment cabin, and firstly provides an assembly object pose estimation method based on an edge area to solve the problems of low speed and low precision of the pose estimation method based on deep learning; meanwhile, a self-adaptive switching method is provided, and self-adaptive switching can be performed when necessary by combining an assembly object pose estimation method based on an edge region and a deep learning, so that the robustness, the real-time performance and the accuracy of positioning of an assembly object are balanced; and the pose estimation result of the assembly object is used for augmented reality auxiliary assembly and assembly result detection, so that the visual assembly and guide assembly of the assembly object in the visual field blind area of the assembly operation of the aircraft equipment cabin are realized, the assembly result is automatically detected, and the assembly efficiency and quality are improved.
Fig. 3 is an input/output diagram of the pose estimation method. It shows the assembly object pose estimation result computed from an input image by the EfficientPose deep learning network, which is used to initialize or correct the pose estimate; the pose estimation result of the next frame is then computed through the iterative optimization of the edge-region-based assembly object pose estimation method (sampling points and corresponding lines shown after 1, 3 and 6 iterations).
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above-described embodiments are not intended to limit the present invention, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. An augmented reality auxiliary assembly method for a visual field blind area of an aircraft equipment cabin is characterized by comprising the following steps:
step 1), self-positioning of a depth camera based on point cloud registration: firstly, installing a depth camera in a designated area in an assembly scene; secondly, registering the point cloud data of the scene model with the point cloud data collected by the depth camera; finally, converting the pose information of the depth camera in the assembly scene according to the point cloud registration result;
step 2), estimating the pose of an assembly object based on deep learning, wherein the assembly object comprises avionics equipment, a pipeline joint and a bolt: firstly, collecting an assembly object pose data set, and enriching the data set by a data enhancement method of color space enhancement and six-degree-of-freedom transformation enhancement; secondly, building a deep learning network, and using a model pre-trained by a public data set as an initial model; thirdly, training and testing the initial model in the pose data set of the assembly object, and fine-tuning to obtain a migration model; finally, outputting the pose of the assembly object to the collected RGB image through a deep learning network by using a migration model in an online mode;
step 3), estimating the pose of the assembly object based on the edge region: firstly, rendering a space viewpoint model of the assembly object CAD model, and obtaining its edge contour points and normal vectors; secondly, computing online, in the acquired RGBD image, the segmentation probabilities that the corresponding line segments along the normal-vector directions of the projected edge contour points belong to the foreground or the background, thereby completing the uncertainty modeling; finally, optimizing the uncertainty model, and calculating the contour and assembly object pose that best fit the image segmentation;
step 4), self-adaptive switching of the pose estimation method: performing online pose estimation on the assembly object by using the method in the step 2) to obtain an initial pose, and performing subsequent pose estimation by using the method in the step 3) based on the initial pose in a loop iteration manner; judging the pose estimation stability of the current assembly object by using an evaluation function, and calling the method in the step 2) once when the calculation value of the evaluation function is greater than a set threshold value so as to correct and stabilize the pose estimation result;
step 5), auxiliary assembly guide visualization: firstly, constructing an assembly process database to realize the access of assembly information including an assembly part three-dimensional model, an assembly tool, an assembly step, a process index and assembly guide information; secondly, according to the pose information of the depth camera in the assembly scene obtained in the step 1), combining the pose estimation result of the assembly object obtained in the step 4), obtaining the pose information of the assembly object in the assembly scene, and rendering the assembly information of the three-dimensional model of the assembled part on a display device; and finally, judging whether the assembling step is finished or not according to the pose information of the assembling object, and further guiding the next assembling step until the assembling task is finished.
2. The method according to claim 1, wherein step 1) is specifically: the depth camera is installed in a designated area of the assembly scene so as to acquire RGBD images covering as much of the operator's visual blind area as possible; the center position and orientation of this area are taken as the initial pose of the depth camera; the point cloud data of the scene model and the point cloud data acquired by the depth camera are coarsely registered with a PCA (principal component analysis) algorithm, and fine registration is then completed with a GICP (generalized iterative closest point) algorithm; the pose matrix M_C of the depth camera in the assembly scene is

M_C = M_f · M_C'        (1)

where M_f is the transformation matrix obtained from the fine registration and M_C' is the initial pose matrix of the depth camera.
3. The method according to claim 2, wherein step 2) is specifically:
step 2.1), an assembly object pose data set for the aircraft equipment cabin scene is constructed with the open-source tool ObjectDatasetTools: RGBD images of the assembly objects under different angles, distances and illumination conditions are collected and labeled, the labels comprising the assembly object name and the pose information, wherein the pose of the assembly object in the first RGBD frame is registered and labeled manually, and the poses of the assembly object in the other frames are computed from the two-dimensional code beside the assembly object, realizing automatic calculation and labeling of the pose information; on this basis, the assembly object pose data set is expanded with color-space and six-degree-of-freedom transformation data augmentation;
step 2.2), an EfficientPose deep learning network is built, an initial model is pre-trained on the basis of a Linemod public data set, training and testing are carried out on the initial model in an assembly object pose data set, fine adjustment of each weight parameter of the initial model is completed, and a migration model is obtained; the migration model is used for outputting predicted assembly object pose information to the online collected RGB images in the EfficientPose deep learning network.
4. The method according to claim 3, wherein step 3) is specifically:
step 3.1), to obtain a dense and uniform space viewpoint model, each triangular face of a regular icosahedron is divided into four sub-triangles; after 4 iterations each face yields 256 regular triangles, 5120 in total; the sampling viewpoints of the virtual camera are located at the triangle vertices, at a distance of 1 m from the center of the icosahedron, with the optical axis pointing at the center; the center of the assembly object CAD model is set as the center of the icosahedron, the space viewpoint model of the CAD model is rendered at the 2562 sampling viewpoints, and 200 edge contour points of the CAD model and their normal vectors are collected in the space viewpoint model;
step 3.2), the silhouette obtained by projecting the CAD model under the previous frame's pose estimate onto the imaging plane is selected as a mask; each CAD-model edge contour point sampled on the mask contour is taken as a midpoint and extended by the same distance to both sides along the normal-vector direction, the distance being 5 to 30 and decreasing as the number of iterations increases, the resulting line segment being the corresponding line; corresponding lines whose edge contour point has a CAD-model depth value differing by more than 3 cm from the actual depth value at that point are removed, the position being considered occluded, the actual depth value being the minimum depth value of the 8 pixels around the edge contour point in the RGBD image; meanwhile, sampling points whose actual depth value inside the mask deviates from the CAD-model depth value by more than 5 cm in absolute value are removed, those positions being considered occluded or background; the corresponding lines are projected onto the imaging plane, and the probability that each pixel on a projected corresponding line belongs to the foreground or the background is calculated: based on Bayesian theory and the mask contour of the previous frame, the posterior probability that each pixel lies inside the current-frame mask contour and the posterior probability that its depth information fits the current-frame mask are computed, the joint probability of the two posteriors is maximized with a Newton method and Tikhonov regularization, and the predicted value θ̂ of the pose change vector θ is obtained by iterative optimization and used to update the pose estimation result:

θ̂ = (H + diag(λ_r·I_3, λ_t·I_3))⁻¹ · g        (2)

where g is the gradient vector and H the 6×6 Hessian matrix, each consisting of two addends, one accumulated over the corresponding lines and one over the sampling points; λ_r and λ_t are the rotation and translation regularization parameters, corresponding to prior probabilities and controlling the confidence in the previous frame's pose estimate, thereby stabilizing the optimization and preventing it from drifting in the wrong direction, with λ_r = 1000 and λ_t = 20000; I_3 is the identity matrix; the quantities entering the two addends are: n, the number of corresponding lines remaining after removal; d_i, the distance from the edge contour point on the i-th corresponding line to the midpoint of that corresponding line in the previous frame; l_i, the i-th corresponding line, and L_i, its domain; the i-th CAD-model point expressed in the camera coordinate system; n', the number of sampling points remaining after removal; the CAD-model point, its normal vector and the nearest depth-camera sampling point expressed in the model coordinate system; d_zi, the depth value of the nearest sampling point; and σ_d, the standard deviation of the depth information, the farther a sampling point is the worse its quality, so σ_d controls the weight of the sampling points, with σ_d = 20;

wherein the associated derivatives are computed from: μ_i and σ_i, the mean and standard deviation of the normal distribution obeyed by the posterior probability distribution of the corresponding line; the rotation matrix from the camera coordinate system to the model coordinate system; the antisymmetric matrix of the CAD-model point in the model coordinate system, e.g. for a point X = [x y z]^T,

[X]_× = [[0, −z, y], [z, 0, −x], [−y, x, 0]];

|n_xi| and |n_yi|, the lengths of the projected corresponding line along the horizontal and vertical axes of the image coordinate system; and the camera focal-length parameters f_x and f_y;
step 3.3), the pose change vector predicted value θ̂ obtained after the iterative optimization is used to update the pose estimation result: the transformation matrix T_CM from the model coordinate system to the camera coordinate system is composed with the rigid transformation whose rotation part and translation part are built from the rotational component θ̂_r and the translational component θ̂_t of θ̂.
5. The method according to claim 4, wherein step 4) is specifically:
step 4.1), in the starting stage, carrying out online pose estimation on the assembly object in the scene of the aircraft equipment cabin according to the method in the step 2) to obtain an initial pose, taking the initial pose as a first frame, and circularly iterating according to the method in the step 3) to finish pose estimation of the assembly object of the subsequent image frame;
step 4.2), the Frobenius norm (F norm) ‖H‖_F of the Hessian matrix H of step 3.2), combined with a noise parameter α_n, is used as the evaluation function F;
the pose estimation result is evaluated in each loop iteration, and when F is larger than the set threshold, the threshold for the avionics equipment being 4 and α_n being set to 2.4, 2.8 and 2.9 for the avionics devices A, B and C ordered from small to large, the reliability of the current pose estimation result is considered low and the method of step 2) is invoked once, thereby correcting the erroneous pose estimation result and improving the robustness of the pose estimation.
6. The method according to claim 5, wherein step 5) is specifically:
step 5.1), constructing an assembly process database, and realizing access of assembly information including an assembly part three-dimensional model, an assembly tool, assembly steps, process indexes and assembly guide information;
step 5.2), the pose of the depth camera in the aircraft equipment cabin scene is located with the method of step 1), the pose of the user's camera-equipped display device is located in the scene based on a two-dimensional code and a SLAM algorithm, and, combined with the pose estimation result of the assembly object in the camera coordinate system computed in step 4), the pose of the assembly object in the display-device coordinate system is obtained, so that the assembly information is displayed at the corresponding position, specifically: (1) the assembly objects involved in the assembly step are rendered semi-transparently at their corresponding poses and distinguished by color, with the mating features, i.e. holes, shafts, interfaces and joints, highlighted in orange; (2) a three-dimensional assembly simulation animation, comprising the assembly process, tools and actions, is played at the corresponding position of the assembly object, assisted by guidance cues of arrows, curves, circles, exclamation marks and voice; (3) process indexes that are convenient to consult, including torque, number of winding turns and measurement data, are displayed near the assembly object, and other process indexes and the textual description of the assembly step are displayed in the upper right corner of the device view;
step 5.3), in each assembly step, whether assembly is finished and whether the assembly result is correct are judged from the pose estimation result of the assembly object, and the next assembly step is guided automatically, specifically: (1) if the pose estimation result of the assembly object changes by no more than 2 cm / 20 degrees within one second, the assembly object is considered static, i.e. the assembly step is finished; (2) if the pose estimation result of the assembly object deviates from the correct assembly pose of the assembly object in the scene by no more than 3 cm / 10 degrees, the assembly result is considered correct; (3) if assembly is not finished, guidance continues; if assembly is finished but the result is wrong, the wrongly assembled part and the contour of the assembly object are highlighted in red and the correct assembly position is indicated in yellow while guidance continues; if the assembly step is finished and the result is correct, the next assembly step is entered automatically until the assembly task is completed, and an assembly process log is recorded and archived as evidence.
CN202211529490.5A 2022-11-30 2022-11-30 Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin Pending CN115797099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211529490.5A CN115797099A (en) 2022-11-30 2022-11-30 Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211529490.5A CN115797099A (en) 2022-11-30 2022-11-30 Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin

Publications (1)

Publication Number Publication Date
CN115797099A true CN115797099A (en) 2023-03-14

Family

ID=85444482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211529490.5A Pending CN115797099A (en) 2022-11-30 2022-11-30 Augmented reality auxiliary assembly method for visual field blind area of aircraft equipment cabin

Country Status (1)

Country Link
CN (1) CN115797099A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116737483A (en) * 2023-08-11 2023-09-12 成都飞机工业(集团)有限责任公司 Assembly test interaction method, device, equipment and storage medium
CN116737483B (en) * 2023-08-11 2023-12-08 成都飞机工业(集团)有限责任公司 Assembly test interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination