CN115239636A - Assembly detection method based on augmented reality technology - Google Patents

Assembly detection method based on augmented reality technology

Info

Publication number
CN115239636A
Authority
CN
China
Prior art keywords
point cloud
assembly
model
animation
overlapped
Prior art date
Legal status
Pending
Application number
CN202210738572.4A
Other languages
Chinese (zh)
Inventor
欧林林
喻志祥
赵嘉楠
禹鑫燚
魏岩
周利波
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202210738572.4A priority Critical patent/CN115239636A/en
Publication of CN115239636A publication Critical patent/CN115239636A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/006 Mixed reality
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An assembly detection method based on augmented reality technology comprises the following steps: 1) perform 3D modeling of the physical-world assembly object to obtain a 3D model and its source point cloud, and attach an assembly animation through the Unity3D engine; 2) acquire physical-world depth information in the HoloLens2 Research Mode to obtain a three-dimensional point cloud, and preprocess the target point cloud; 3) register the source point cloud with the target point cloud to obtain a pose transformation matrix, align the virtual model with the physical model through the transformation matrix, and set a spatial anchor; 4) build a UI based on Unity3D to control the virtual-model assembly animation and prompt detailed assembly information; 5) search for overlapped points with an octree structure, calculate the point cloud overlap degree, and calculate the relative error from the number of overlapped points so as to evaluate the assembly completion degree.

Description

Assembly detection method based on augmented reality technology
Technical Field
The invention relates to the technical field of digital twinning, in particular to an assembly detection method based on an augmented reality technology.
Background
Augmented reality is a technology that seamlessly integrates real-world and virtual-world information: physical information (visual, sound, taste, touch and so on) that would otherwise be difficult to experience within a certain time and space of the real world is simulated by computer and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, giving a sensory experience beyond reality. The HoloLens2 released by Microsoft is a comfortable-to-wear mixed reality device; applied to factory modernization, it helps adapt to the rapid changes of Industry 4.0, build innovative, efficient and sustainable production environments, and raise productivity to a new level in industrial automation and staff training.
Assembly is the process of fitting parts together according to specified technical requirements and then debugging and inspecting them to form a qualified product. Because of the diversity of products and the complexity of the assembly process, steps are easily missed and parts are easily left improperly installed, which reduces product safety and enterprise benefit. In traditional assembly training, a technician with rich experience guides and trains new staff, but because human attention is limited and guiding the assembly process is complex, this rarely achieves a good effect, and new staff gain experience only through repeated trial and error.
To address these problems, augmented reality assembly technology has been developing continuously. An assembly guidance method and system based on HoloLens depth data has been proposed (Assembly guidance method and system based on HoloLens depth data [P]. Chinese patent CN113706689A, 2021.11.26). Yang Kang studied key technologies and applications of augmented reality for complex product assembly (Key technology research and application of augmented reality for complex product assembly [D]. Jiangsu: Nanjing University of Aeronautics and Astronautics, 2019); in that work the physical model is tracked and three-dimensionally registered with a Kinect and the HoloLens is calibrated for assembly, but data must be transmitted between the Kinect and the HoloLens, which affects real-time performance. The present method adopts an assembly detection approach based on augmented reality technology, gives real-time feedback on the assembly effect, and lets the HoloLens2 interact directly with the PC side, reducing time consumption and improving real-time performance.
Disclosure of Invention
The invention aims to solve the technical problems in the prior art and provides an assembly detection method based on an augmented reality technology.
The method acquires data with a HoloLens2 device and processes the data on a PC side; the assembly model is built in SolidWorks, the assembly animation is created in the Unity3D engine and attached to the model, assembly interaction between the virtual world and the real world is realized through Unity3D, and the assembly effect is detected through the point cloud overlap degree. The assembly detection method based on augmented reality technology disclosed by the invention mainly comprises five parts: model preparation, data acquisition, point cloud processing, virtual-real fusion and effect detection.
In order to solve the technical problem, the invention provides an assembly detection method based on augmented reality technology, which comprises the following steps:
Step one, performing 3D modeling of the physical-world assembly object to obtain a 3D model and its source point cloud, and attaching an assembly animation through the Unity3D engine.
Step two, acquiring physical-world depth information through the HoloLens2 Research Mode to obtain a three-dimensional point cloud, and preprocessing the target point cloud.
Step three, performing point cloud registration between the source point cloud and the target point cloud to obtain a pose transformation matrix, aligning the virtual model with the physical model through the transformation matrix, and setting a spatial anchor.
Step four, building a UI based on Unity3D to control the virtual-model assembly animation and prompt detailed assembly information.
Step five, searching for overlapped points with an octree structure, calculating the point cloud overlap degree, and calculating the relative error from the number of overlapped points, thereby evaluating the assembly completion degree.
Wherein, the first step specifically comprises:
First, the assembly step specification of the object to be assembled is obtained. SolidWorks software is then used to build an equal-proportion 3D model of the object, and the model file is exported in STEP format and imported into 3DMax software for texture mapping and rendering. The processed model is saved as an FBX file. The FBX model is first imported into Python, where the Open3D library is called to convert it into a point cloud model in PCD format that is stored in a database for convenient calling; the FBX model is also imported into Unity3D for creation of the assembly animation and the step information.
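As a minimal illustration of the offline conversion described above, the following Python sketch turns the exported mesh into a PCD source point cloud with Open3D; the file names and sample count are assumptions, and FBX reading requires an Open3D build with Assimp support (otherwise export the mesh to OBJ or PLY first):

```python
import open3d as o3d

# Load the rendered assembly model exported from 3DMax (file name is illustrative).
mesh = o3d.io.read_triangle_mesh("assembly_model.fbx")
mesh.compute_vertex_normals()

# Sample the mesh surface to obtain the source point cloud used later for
# registration and overlap evaluation; the sample count is an assumption.
source_pcd = mesh.sample_points_uniformly(number_of_points=50000)

# Store the source point cloud in PCD format so it can be called from the database.
o3d.io.write_point_cloud("assembly_source.pcd", source_pcd)
```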
The assembly animation in Unity3D is produced with the Animation system provided by the engine: the initial-state model of the object to be assembled is first selected, and then animation information for the assembly of each part is added on top of it, the assembly process of the parts following the step information in the assembly specification. A Rigidbody is added to each child object of the assembly object, the Use Gravity option is unchecked and the Is Kinematic option is checked, and the Loop Time option is checked for the animation clip, so that the assembly animation can serve as a real-time reminder during the assembly process.
Wherein, the second step specifically comprises:
before step two is performed, a researcher mode of the Hololens2 is required to be enabled, the researcher mode is an application program used by Hololens for accessing key sensors, and the Hololens2 adds data access to a visible light environment tracking camera, a depth camera and the like on the basis of the Hololens 1. And after the researcher mode is opened, the depth data is acquired through the Hololens2 depth camera and is transmitted to the PC side for processing.
Depth data processing uses the camera calibration principle to solve the final projection matrix P from the world coordinate system to the camera coordinate system and then to the image coordinate system. The extrinsic matrix maps the world coordinate system to the camera coordinate system and is obtained from PrintDepthExtrinsics in HL2 Research Mode; the intrinsic matrix maps the camera coordinate system to the image coordinate system and is obtained with Zhang Zhengyou's checkerboard calibration method. The three-dimensional point cloud is finally obtained through the matrix operation of formula (1).
$$z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}\qquad(1)$$
Here u and v are an arbitrary coordinate point in the image coordinate system, u_0 and v_0 are the image center coordinates, f_x and f_y are the focal lengths in pixels, and x_w, y_w, z_w is a three-dimensional point in the world coordinate system. z_c is the z-axis value of the camera coordinates, i.e. the distance of the object from the camera. R and T are the 3x3 rotation matrix and the 3x1 translation matrix of the extrinsic matrix, respectively.
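A small numpy sketch of the back-projection in formula (1), applied to every pixel of the received depth image; the intrinsic matrix K would come from the checkerboard calibration, R and T from PrintDepthExtrinsics, and the depth scale is an assumed device-dependent factor:

```python
import numpy as np

def depth_to_world_pointcloud(depth, K, R, T, depth_scale=0.001):
    """Back-project a depth image into world-frame 3D points following formula (1).

    depth:       HxW array of raw depth values (assumed to be millimetres)
    K:           3x3 intrinsic matrix from checkerboard calibration
    R, T:        3x3 rotation and 3x1 translation of the depth camera extrinsics
    depth_scale: raw-depth-to-metres factor (device dependent, assumed here)
    """
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)                 # pixel grid (row = v, col = u)
    z_c = depth.astype(np.float64) * depth_scale
    # Invert the intrinsic mapping to get camera-frame coordinates.
    x_c = (u - u0) * z_c / fx
    y_c = (v - v0) * z_c / fy
    cam = np.stack([x_c, y_c, z_c], axis=-1).reshape(-1, 3)
    cam = cam[cam[:, 2] > 0]                       # drop invalid zero-depth pixels
    # Invert the extrinsic mapping X_c = R X_w + T, i.e. X_w = R^T (X_c - T).
    world = (cam - T.reshape(1, 3)) @ R
    return world
```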
Wherein the third step specifically comprises:
The target point cloud obtained in step two is first preprocessed. Because wall and clutter information exists around the object to be assembled, the useful region of the point cloud is cropped with the SelectionPolygonVolume class of the Open3D library in Python to obtain the target point cloud. ICP registration between the processed target point cloud and the source point cloud is then performed with the registration_icp function of the Open3D library, which outputs the pose transformation matrix from the source point cloud to the target point cloud.
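A minimal Open3D sketch of this cropping and registration step; the crop-volume JSON, the correspondence threshold and the identity initialization are placeholders chosen for illustration rather than values fixed by the patent:

```python
import numpy as np
import open3d as o3d

# Target point cloud converted from the HoloLens2 depth data, and the source
# point cloud prepared offline from the assembly model.
target = o3d.io.read_point_cloud("target_scene.pcd")
source = o3d.io.read_point_cloud("assembly_source.pcd")

# Crop away walls, floor and clutter with a polygon volume drawn around the workpiece.
vol = o3d.visualization.read_selection_polygon_volume("crop_region.json")
target = vol.crop_point_cloud(target)

# Point-to-point ICP from source to target; threshold and initial guess are assumptions.
threshold = 0.02  # maximum correspondence distance in metres
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

pose = result.transformation  # 4x4 pose transformation matrix sent to the HoloLens2
print("fitness:", result.fitness)
print(pose)
```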
The pose transformation matrix is sent to the HoloLens2 through TCP/IP communication. The HoloLens2 calls the corresponding model and transforms its position in the world coordinate system with the pose transformation matrix, achieving the virtual-real fusion effect. Because the virtual model can drift when the camera shakes, a spatial anchor is added to the fused virtual model; a spatial anchor in the HoloLens2 is a mechanism that keeps an object at a specific position and rotation, so the anchor fixes the virtual model on the physical model and facilitates the later assembly animation demonstration.
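For completeness, a PC-side sketch that ships the 4x4 pose matrix to the headset over TCP/IP; the HoloLens2 address, port and the row-major float32 encoding are assumptions, and the Unity-side receiver that applies the matrix and sets the spatial anchor is not shown:

```python
import socket
import struct

import numpy as np

def send_pose(pose, host="192.168.1.50", port=9091):
    """Send a 4x4 pose matrix as 16 row-major float32 values (assumed framing)."""
    assert pose.shape == (4, 4)
    payload = struct.pack(">16f", *pose.astype(np.float32).flatten())
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Example: forward the ICP result computed on the PC side.
# send_pose(result.transformation)
```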
Wherein, the fourth step specifically comprises:
The buttons that need to be added are as follows: forward and backward buttons for stepping the assembly animation one frame forward or back; buttons for adding and deleting the spatial anchor, for flexible anchor operation; and an assembly evaluation button for evaluating the assembly completion. The text boxes that need to be added are as follows: a detail panel describing the parts required for assembly, so that the corresponding parts can be found; and a text panel describing the assembly steps to assist the assembly operation.
When a button is added, the PressableButtonHoloLens2 script in the MRTK package is added to give it the pressable-button property, the Interactable script gives the interactable property, and the NearInteractionTouchable script gives the near-interaction touch property. When text components are added, the text information must correspond one-to-one with the assembly steps. The SolverHandler and RadialView scripts in the MRTK package are added to the whole UI interface so that the GameObject follows the user's gaze: the SolverHandler script sets the tracked reference object, with Head (the main camera) selected as the reference point, and the RadialView script is a tailing component that keeps a specific part of the GameObject within a cone of the user's view.
Wherein the fifth step specifically comprises:
Under the prompting of the animation and text in step four, the assembly body is assembled step by step, and the completion degree is evaluated after each step is finished by calculating the overlap degree between the two point clouds searched with an octree. The specific evaluation method is as follows: the source point cloud of the virtual assembly part is generated in step one, and the target point cloud is obtained from the depth data of the assembled physical model acquired as in step two; the PCL library is then called to search the overlapped part of the source and target point clouds with an octree structure; the relative error of the number of overlapped points of the source and target point clouds is calculated; and finally the assembly completion degree is judged against a threshold value.
The octree structure search proceeds as follows: an octree is first built for the source point cloud; then all points in the target point cloud are traversed, and for each point it is queried whether the corresponding voxel of the source point cloud also contains points; if so, those points are counted as overlapped points.
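The patent names the PCL octree for this search; the sketch below uses the Open3D octree instead to illustrate the same voxel-level overlap test, with the octree depth chosen arbitrarily:

```python
import open3d as o3d

def count_overlap(source_pcd, target_pcd, max_depth=8):
    """Count target points whose leaf voxel in the source octree is occupied."""
    octree = o3d.geometry.Octree(max_depth=max_depth)
    octree.convert_from_point_cloud(source_pcd, size_expand=0.01)

    overlap = 0
    for p in target_pcd.points:
        leaf, _ = octree.locate_leaf_node(p)
        if leaf is not None:      # the voxel also contains source points
            overlap += 1
    return overlap

# Relative error of the overlap against the source point count, as in formula (2):
# rel_err = abs(len(source.points) - count_overlap(source, assembled_target)) / len(source.points)
```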
The overlapped points and the total point cloud are processed as follows: depth data are acquired from the assembled part at N angles to obtain N point clouds from different angles; the overlapped part between each of these point clouds and the source point cloud is searched with the octree structure; the relative error between each overlapped part and the source point cloud is calculated; and the average of the N results is taken as the experimental result. The calculation formula is shown in formula (2), where $P_i^{\mathrm{overlap}}$ is the number of points in the overlapped part for the i-th angle, $P$ is the number of points in the source point cloud, and $\bar{X}$ is the relative error result.
$$\bar{X}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|P-P_i^{\mathrm{overlap}}\right|}{P}\qquad(2)$$
The threshold value is selected through data acquisition and calculation in a corresponding experiment carried out before the assembly experiment; assembly detection checks whether the assembled part meets the requirements of the assembly step. For each of K steps, the relative errors of the overlap degree between the successfully assembled target point cloud and the source point cloud are measured at N different angles, and the overall average is taken as the upper limit of the assembly completion evaluation value. The calculation formula is shown in formula (3).
$$\bar{Y}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N}\sum_{i=1}^{N}\frac{\left|P-P_{k,i}^{\mathrm{overlap}}\right|}{P}\qquad(3)$$
The final criterion for the assembly completion degree is shown in formula (4), where $\bar{X}$ is the relative error during the assembly process and $\bar{Y}$ is the relative error found in the preliminary experiment: if $\bar{X}<\bar{Y}$ the UI interface displays Completed, and if $\bar{X}\ge\bar{Y}$ the UI interface displays Uncompleted.
$$f=\begin{cases}\text{Completed},&\bar{X}<\bar{Y}\\\text{Uncompleted},&\bar{X}\ge\bar{Y}\end{cases}\qquad(4)$$
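A worked example of this decision rule under assumed numbers (N = 4 viewing angles, the pre-experiment threshold of the embodiment, and made-up overlap counts):

```python
# Assumed pre-experiment upper limit of the completion evaluation value.
Y_bar = 0.75

# Assumed source point count and overlapped point counts from 4 viewing angles.
P = 50000
overlaps = [14000, 13500, 12800, 14200]

# Formula (2): average relative error over the N views.
X_bar = sum(abs(P - o) / P for o in overlaps) / len(overlaps)
print(f"relative error X = {X_bar:.3f}")          # 0.728 for these numbers
print("Completed" if X_bar < Y_bar else "Uncompleted")
```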
The assembly detection method based on augmented reality technology of the present invention acquires data with a HoloLens2 device and processes the data on a PC side; the assembly model is built in SolidWorks, the assembly animation is created in the Unity3D engine and attached to the model, assembly interaction between the virtual world and the real world is realized through Unity3D, and the assembly effect is detected through the point cloud overlap degree. The method mainly comprises five parts: model preparation, data acquisition, point cloud processing, virtual-real fusion and effect detection. First, 3D modeling of the physical-world assembly object is performed to obtain a 3D model and its source point cloud, and an assembly animation is attached through the Unity3D engine. Physical-world depth information is then acquired through the HoloLens2 Research Mode to obtain a three-dimensional point cloud, and the target point cloud is preprocessed. Point cloud registration between the source and target point clouds then yields a pose transformation matrix, the virtual model is aligned with the physical model through this matrix, and a spatial anchor is set. To improve the HoloLens2 operating experience, a UI is built with Unity3D to control the virtual-model assembly animation and prompt detailed assembly information. Finally, overlapped points are searched with an octree structure, the point cloud overlap degree is calculated, and the relative error is computed from the number of overlapped points so as to evaluate the assembly completion degree. The method therefore achieves a virtual-real fused assembly environment, and can evaluate the assembly completion degree and feed it back to operators, giving a good assembly operating experience and a complete assembly process.
The invention has the following advantages: an assembly animation is added to the 3D model through the Animation system in Unity3D, so that the operator receives assembly cues in real time while performing the assembly operation; depth data acquisition and the assembly operation are integrated on the HoloLens2 alone, and a spatial anchor is set for the virtual model, which keeps the virtual model stable in the physical world and reduces equipment cost; and overlapped points are searched with an octree structure, the point cloud overlap degree is calculated, and the relative error is computed from the number of overlapped points, so that the assembly completion degree is evaluated and fed back to operators, improving assembly accuracy and the operator's assembly experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings derived from them by those skilled in the art without inventive effort also fall within the scope of the present invention.
Fig. 1 is a diagram of an acquisition pose matrix of an assembly detection method based on an augmented reality technology according to an embodiment of the present invention.
Fig. 2 is a block diagram of effect detection of an assembly detection method based on an augmented reality technology according to an embodiment of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, the assembly detection method based on augmented reality technology provided by an embodiment of the present invention is built around the HoloLens2 and is mainly divided into five parts: model preparation, data acquisition, point cloud processing, virtual-real fusion and effect detection. Fig. 1 covers the model preparation, data acquisition and point cloud processing parts, and fig. 2 covers the virtual-real fusion and effect detection parts.
Model preparation, data acquisition and point cloud processing are divided into an online part and an offline part: data acquisition and point cloud processing run online, while model preparation is done offline. In the offline part, a 3D model is built from the physical assembly model with SolidWorks and stored in STEP format, then transferred to 3DMax, where texture mapping and rendering are applied so that the virtual model matches the colour information of the physical model. The rendered virtual model is saved as an FBX file; finally the model file is imported into Python and the Open3D library is called to convert the FBX file into a PCD source point cloud file.
The online part mainly uses the HoloLens2 and the PC side. The depth data of the model to be assembled is first acquired with the HoloLens2 depth camera, and the depth image is transmitted in PNG format to the Python side through TCP/IP communication. The Python side calls Open3D together with the intrinsic and extrinsic matrices to convert the depth data into a point cloud, where the intrinsic matrix is computed with Zhang Zhengyou's checkerboard calibration method and the extrinsic matrix is obtained from PrintDepthExtrinsics in HL2 Research Mode. Because the point cloud acquired by the depth camera contains irrelevant information such as walls, floor and clutter, the point cloud is cropped to keep the needed information, which is stored as the target point cloud in PCD format. Finally, ICP registration between the source point cloud obtained offline and the target point cloud obtained online yields the pose transformation matrix, which is transmitted to the assembly scene.
The virtual-real fusion and effect detection part consists of the HoloLens2, a local computer and a database. The pose transformation matrix obtained in fig. 1 is transmitted to the HoloLens2 assembly scene through TCP/IP communication, the virtual model in the database is called and its pose is transformed in the scene, and the spatial anchor technology of the HoloLens2 is used to fix the virtual model in the assembly scene.
During the assembly operation, the assembly animation in the database is called and the UI interface is built. The assembly animation is made with the Animation system in Unity3D. The buttons that need to be added to the UI interface are as follows: forward and backward buttons for stepping the assembly animation one frame forward or back; buttons for adding and deleting the spatial anchor, for flexible anchor operation; and an assembly evaluation button for evaluating the assembly completion. The text boxes that need to be added are as follows: a detail panel describing the parts required for assembly, so that the corresponding parts can be found; and a text panel describing the assembly steps to assist the assembly operation. When a button is added, the PressableButtonHoloLens2 script in the MRTK package is added to give it the pressable-button property, the Interactable script gives the interactable property, and the NearInteractionTouchable script gives the near-interaction touch property. When text components are added, the text information must correspond one-to-one with the assembly steps.
After the parts are installed, depth data of the scene are collected with the HoloLens2 from the east, south, west and north directions, and the depth images of the four angles are transmitted in PNG format to the Python side through TCP/IP communication. The Python side calls Open3D together with the intrinsic and extrinsic matrices to convert the depth data into point clouds, where the intrinsic matrix is computed with Zhang Zhengyou's checkerboard calibration method and the extrinsic matrix is obtained from PrintDepthExtrinsics in HL2 Research Mode. Because the point clouds acquired by the depth camera contain irrelevant information such as walls, floor and clutter, they are cropped to keep the needed information and stored as target point clouds in PCD format.
An octree structure search is performed between each of the four target point clouds and the source point cloud stored in the database to obtain the overlapped points, the relative error between each overlapped part and the source point cloud is calculated, and the average of the four results is taken as the experimental result, as shown in formula (5), where $P_i^{\mathrm{overlap}}$ is the number of overlapped points for the i-th angle, $P$ is the number of points in the source point cloud, and $X$ is the relative error result.
$$X=\frac{1}{4}\sum_{i=1}^{4}\frac{\left|P-P_i^{\mathrm{overlap}}\right|}{P}\qquad(5)$$
The pre-experiment error in the database is obtained by repeating the target step five times before the experiment, calculating the average of the relative errors of the overlap degree between the successfully assembled target point cloud and the source point cloud from four different angles, and selecting this average as the upper limit of the assembly completion evaluation value, as shown in formula (6), where $P_{k,i}^{\mathrm{overlap}}$ is the number of points in the overlapped part, $P$ is the number of points in the source point cloud, and $Y$ is the relative error result.
$$Y=\frac{1}{5}\sum_{k=1}^{5}\frac{1}{4}\sum_{i=1}^{4}\frac{\left|P-P_{k,i}^{\mathrm{overlap}}\right|}{P}\qquad(6)$$
Finally, the judgment standard of the assembly completion degree is shown in formula (7), where $X$ is the relative error during the assembly process, $Y$ is the relative error obtained in the preparation experiment and takes the value 0.75, and $f$ is the judgment result: if $X<Y$, assembly success is displayed on the UI interface; if $X\ge Y$, assembly failure is displayed on the UI interface.
$$f=\begin{cases}\text{success},&X<Y\\\text{failure},&X\ge Y\end{cases}\qquad(7)$$
In summary, point clouds are obtained from the HoloLens2 depth data and registered against the source point cloud to obtain a pose transformation matrix; the virtual model and the physical model are fused and the assembly operation is carried out using this matrix; the overlapped points between the assembled point cloud and the source point cloud are computed to obtain the relative error; and the relative error is compared with the pre-experiment error to obtain the assembly result.
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.

Claims (5)

1. An assembly detection method based on augmented reality technology, comprising the following steps:
step one, performing 3D modeling of the physical-world assembly object to obtain a 3D model and its source point cloud, and attaching an assembly animation through the Unity3D engine;
step two, acquiring physical-world depth information through the HoloLens2 Research Mode to obtain a three-dimensional point cloud, and preprocessing the target point cloud;
step three, performing point cloud registration between the source point cloud and the target point cloud to obtain a pose transformation matrix, aligning the virtual model with the physical model through the transformation matrix, and setting a spatial anchor;
step four, building a UI based on Unity3D to control the virtual-model assembly animation and prompt detailed assembly information;
step five, searching for overlapped points with an octree structure, calculating the point cloud overlap degree, and calculating the relative error from the number of overlapped points, thereby evaluating the assembly completion degree.
2. The method of claim 1, wherein the first step specifically comprises:
firstly, obtaining the assembly step specification of the object to be assembled; then using SolidWorks software to build an equal-proportion 3D model of the object to be assembled; importing the built model file in STEP format into 3DMax software for texture mapping and rendering, and saving the processed model as an FBX file; importing the obtained FBX model into Python and calling the Open3D library to convert it into a point cloud model in PCD format, which is stored in a database for convenient calling; and importing the FBX model into Unity3D for assembly animation creation and step information addition.
The Unity3D assembly animation is produced with the Animation system provided by the engine: the initial-state model of the object to be assembled is first selected, and animation information for the assembly of each part is then added on top of it, the part assembly process meeting the step information in the assembly specification; a Rigidbody is added to each child object of the assembly object, the Use Gravity option is unchecked, the Is Kinematic option is checked, and the Loop Time option is checked for the animation clip, so that the assembly animation can serve as a real-time reminder during the assembly process.
3. The method according to claim 1, wherein the second step specifically comprises:
before step two is performed, the Research Mode of the HoloLens2 must be enabled; Research Mode gives applications access to the key sensors of the HoloLens, and on top of the HoloLens 1 the HoloLens2 adds data access to the visible-light environment tracking cameras, the depth camera and so on; after Research Mode is enabled, depth data are acquired through the HoloLens2 depth camera and transmitted to the PC side for processing.
The processing of the depth data uses the camera calibration principle to solve the final projection matrix P from the world coordinate system to the camera coordinate system and then to the image coordinate system. The extrinsic matrix maps the world coordinate system to the camera coordinate system and is obtained from PrintDepthExtrinsics in HL2 Research Mode; the intrinsic matrix maps the camera coordinate system to the image coordinate system and is obtained with Zhang Zhengyou's checkerboard calibration method. The three-dimensional point cloud is finally obtained through the matrix operation of formula (1).
$$z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}\qquad(1)$$
where u and v are an arbitrary coordinate point in the image coordinate system, u_0 and v_0 are the image center coordinates, f_x and f_y are the focal lengths in pixels, and x_w, y_w, z_w is a three-dimensional point in the world coordinate system; z_c is the z-axis value of the camera coordinates, i.e. the distance of the object from the camera; R and T are the 3x3 rotation matrix and the 3x1 translation matrix of the extrinsic matrix, respectively.
4. The method according to claim 1, wherein the third step specifically comprises:
firstly, the target point cloud obtained in the second step is preprocessed: because wall and clutter information exists around the object to be assembled, the useful region of the point cloud is cropped with the SelectionPolygonVolume class of the Open3D library in Python to obtain the target point cloud; ICP registration between the processed target point cloud and the source point cloud is then performed with the registration_icp function of the Open3D library, which outputs the pose transformation matrix from the source point cloud to the target point cloud.
The pose transformation matrix is sent to the HoloLens2 through TCP/IP communication. The HoloLens2 calls the corresponding model and transforms its position in the world coordinate system with the pose transformation matrix, achieving the virtual-real fusion effect. Because the virtual model can drift when the camera shakes, a spatial anchor is added to the fused virtual model; a spatial anchor in the HoloLens2 is a mechanism that keeps an object at a specific position and rotation, so the anchor fixes the virtual model on the physical model and facilitates the later assembly animation demonstration.
5. The method according to claim 1, wherein the step five specifically comprises:
under the prompting of the animation and text in step four, the assembly body is assembled step by step and the completion degree is evaluated after each step is finished, the overlap degree between the two point clouds being calculated and evaluated with an octree; the specific evaluation method comprises the following steps: the source point cloud of the virtual assembly part is generated in step one, and the target point cloud is obtained from the depth data of the assembled physical model acquired as in step two; the PCL library is then called to search the overlapped part of the source and target point clouds with an octree structure; the relative error of the number of overlapped points of the source and target point clouds is calculated; and finally the assembly completion degree is judged by setting a threshold value.
The octree structure search proceeds as follows: an octree is first built for the source point cloud; then all points in the target point cloud are traversed, and for each point it is queried whether the corresponding voxel of the source point cloud also contains points; if so, those points are counted as overlapped points.
The overlapped points and the total point cloud are processed as follows: depth data are acquired from the assembled part at N angles, giving N point clouds from different angles; the overlapped part between each of these point clouds and the source point cloud is searched with the octree structure; the relative error between each overlapped part and the source point cloud is calculated; and the average of the N results is taken as the experimental result. The calculation formula is shown in formula (2), where $P_i^{\mathrm{overlap}}$ is the number of points in the overlapped part for the i-th angle, $P$ is the number of points in the source point cloud, and $\bar{X}$ is the relative error result.
$$\bar{X}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|P-P_i^{\mathrm{overlap}}\right|}{P}\qquad(2)$$
The threshold value is selected through data acquisition and calculation in a corresponding experiment carried out before the assembly experiment; assembly detection checks whether the assembled part meets the requirements of the assembly step. For each of K steps, the relative errors of the overlap degree between the successfully assembled target point cloud and the source point cloud are measured at N different angles, and the overall average is taken as the upper limit of the assembly completion evaluation value. The calculation formula is shown in formula (3).
$$\bar{Y}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N}\sum_{i=1}^{N}\frac{\left|P-P_{k,i}^{\mathrm{overlap}}\right|}{P}\qquad(3)$$
The final criterion for the assembly completion degree is shown in formula (4), where $\bar{X}$ is the relative error during the assembly process and $\bar{Y}$ is the relative error found in the preliminary experiment: if $\bar{X}<\bar{Y}$ the UI interface displays Completed, and if $\bar{X}\ge\bar{Y}$ the UI interface displays Uncompleted.
$$f=\begin{cases}\text{Completed},&\bar{X}<\bar{Y}\\\text{Uncompleted},&\bar{X}\ge\bar{Y}\end{cases}\qquad(4)$$
CN202210738572.4A 2022-06-24 2022-06-24 Assembly detection method based on augmented reality technology Pending CN115239636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210738572.4A CN115239636A (en) 2022-06-24 2022-06-24 Assembly detection method based on augmented reality technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210738572.4A CN115239636A (en) 2022-06-24 2022-06-24 Assembly detection method based on augmented reality technology

Publications (1)

Publication Number Publication Date
CN115239636A true CN115239636A (en) 2022-10-25

Family

ID=83671827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210738572.4A Pending CN115239636A (en) 2022-06-24 2022-06-24 Assembly detection method based on augmented reality technology

Country Status (1)

Country Link
CN (1) CN115239636A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758237A (en) * 2023-08-17 2023-09-15 山东萌山钢构工程有限公司 Assembly building monitoring system based on real-time modeling
CN116758237B (en) * 2023-08-17 2023-10-20 山东萌山钢构工程有限公司 Assembly building monitoring system based on real-time modeling

Similar Documents

Publication Publication Date Title
US10692287B2 (en) Multi-step placement of virtual objects
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
US10489651B2 (en) Identifying a position of a marker in an environment
US10872459B2 (en) Scene recognition using volumetric substitution of real world objects
Fiorentino et al. Spacedesign: A mixed reality workspace for aesthetic industrial design
Ueda et al. A hand-pose estimation for vision-based human interfaces
Dai Virtual reality for industrial applications
CN102253713B (en) Towards 3 D stereoscopic image display system
CN110476142A (en) Virtual objects user interface is shown
CN109313821B (en) Three-dimensional object scan feedback
US7536655B2 (en) Three-dimensional-model processing apparatus, three-dimensional-model processing method, and computer program
Klinker et al. Confluence of computer vision and interactive graphies for augmented reality
US20130063560A1 (en) Combined stereo camera and stereo display interaction
CN104589356A (en) Dexterous hand teleoperation control method based on Kinect human hand motion capturing
CN111191322B (en) Virtual maintainability simulation method based on depth perception gesture recognition
US20190266804A1 (en) Virtual prototyping and assembly validation
CN104656893A (en) Remote interaction control system and method for physical information space
Tao et al. Manufacturing assembly simulations in virtual and augmented reality
CN115239636A (en) Assembly detection method based on augmented reality technology
US20140142900A1 (en) Information processing apparatus, information processing method, and program
Hernoux et al. A seamless solution for 3D real-time interaction: design and evaluation
Fadzli et al. VoxAR: 3D modelling editor using real hands gesture for augmented reality
KR20160141023A (en) The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents
Messaci et al. 3d interaction techniques using gestures recognition in virtual environment
US20230015238A1 (en) Method and Apparatus for Vision-Based Tool Localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination