CN113706689A - Assembly guidance method and system based on Hololens depth data - Google Patents

Assembly guidance method and system based on Hololens depth data

Info

Publication number
CN113706689A
CN113706689A (application CN202110892450.6A)
Authority
CN
China
Prior art keywords
point cloud
hololens
model
data
depth data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110892450.6A
Other languages
Chinese (zh)
Other versions
CN113706689B (en)
Inventor
周光辉
肖佳诚
张超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202110892450.6A
Publication of CN113706689A
Application granted
Publication of CN113706689B
Active legal status
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an assembly guidance method and system based on Hololens depth data. Depth data of the real environment collected by a Hololens device are converted into a point cloud model under a world coordinate system; after labeling, the point cloud is converted into a standard data set format. A three-dimensional model of the part to be assembled is constructed and converted into a point cloud model; the converted part point cloud model undergoes spatial coordinate transformation, is synthesized with a plurality of scene point cloud data, and the synthesized point cloud is converted into the standard data set format. The labeled standard data set and the synthesized standard data set are input into an improved Votenet network for training, the training result is stored, and auxiliary calibration is carried out in combination with the ICP method. The Hololens device transmits depth data of the real scene to a remote server, which inputs them into the Votenet network and outputs a detection result; the corresponding information is displayed to the user in the form of a holographic image to realize assembly guidance. The invention effectively pushes auxiliary assembly information to assembly personnel in the form of holographic images, realizes assembly guidance and improves assembly efficiency.

Description

Assembly guidance method and system based on Hololens depth data
Technical Field
The invention belongs to the technical field of intelligent manufacturing and digitization, and particularly relates to an assembly guidance method and system based on Hololens depth data.
Background
Assembly refers to the series of operations by which parts are assembled into a complete product according to a certain process flow, and is an important link in the product life cycle. At present, owing to the diversity of products and the complexity of the assembly procedures of complex products, assembly personnel have to rely on paper instruction manuals or electronic documents for assembly guidance. However, extracting effective guidance information from a large amount of text takes considerable time and requires a certain level of understanding and cognition from the assembler. Therefore, how to provide an assembly guidance mode that is easier to understand and more efficient has become a key issue.
Augmented reality is a fusion technology characterized by three-dimensional tracking, real-time interaction and virtual-real combination, which superimposes multimedia information on the real environment. Applying augmented reality to the assembly of complex products allows the user to operate virtual objects and real objects simultaneously. In such a virtual-real integrated environment, the user's perception is enhanced and a better interactive experience is obtained; the user can acquire effective auxiliary assembly information such as the assembly sequence, equipment positions and assembly requirements without interrupting the assembly operation on the real physical parts, thereby improving assembly efficiency and reducing assembly errors.
Hololens, as a head-mounted augmented reality device, frees both of the user's hands and is easy to carry, which makes it well suited for assembly guidance. However, most current augmented reality applications developed for Hololens still rely on SDKs such as Vuforia to perform marker-based three-dimensional registration and superimpose virtual information on real objects. In the actual assembly process there are usually requirements on the surface finish of the parts, and the assembly station may vary, so methods that rely on preset markers face many limitations.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an assembly guidance method and system based on Hololens depth data that overcomes the above defects in the prior art. The depth sensor carried by the Hololens collects environment data, a point cloud model of the environment is reconstructed, and the type and spatial pose of the part to be assembled are identified, so that the corresponding auxiliary assembly information is pushed to the assembler in the form of a holographic image and assembly guidance is realized to improve assembly efficiency.
The invention adopts the following technical scheme:
an assembly guidance method based on Hololens depth data comprises the following steps:
s1, converting the depth data of the real environment collected by the Hololens equipment into a point cloud model under a world coordinate system;
s2, labeling the point cloud model obtained in the step S1, and converting the point cloud into a standard data set format according to labeling information;
s3, constructing a three-dimensional model of the part to be assembled;
s4, converting the part three-dimensional model constructed in the step S3 into a point cloud model;
s5, performing space coordinate transformation on the part point cloud model converted in the step S4, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
s6, inputting the standard data set converted in the step S2 and the standard data set synthesized in the step S5 into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method;
s7, the Hololens device transmits the depth data of the real scene to a remote server, the remote server preprocesses the depth data and inputs the depth data into the Votenet network trained in the step S6, and a detection result is output;
and S8, acquiring the detection result output in the step S7 by the Hololens equipment, and displaying the corresponding auxiliary assembly information to a user in the form of a holographic image to realize assembly guidance.
Specifically, step S1 specifically includes:
S101, obtaining depth sensor access rights through the Hololens device research mode, developing a depth data transmission plug-in in C++, importing the plug-in into a Unity3D augmented reality application, and deploying it to the Hololens device;
s102, starting an application, recording the position of the Hololens equipment at the moment as an origin of a Hololens world coordinate system, enabling a depth sensor to work in a LongThrow working mode, enabling the Hololens equipment to move in a scene, and after socket connection with a server is established, enabling the Hololens equipment to send a real-time depth data frame DepthFrame to a background server;
S103, analyzing the DepthFrame in step S102 to obtain the space transformation matrix T_rig2world from the current device origin rigNode to the world coordinate system origin, the image width imgWidth and image height imgHeight of the depth image, and the depth data D corresponding to each pixel;
S104, according to the depth data obtained in step S103, converting any point (u, v)^T on the two-dimensional depth image into three-dimensional coordinates in the sensor coordinate system:
s105, calculating the coordinates of each point in a standard coordinate system through coordinate transformation according to the three-dimensional coordinates in the sensor coordinate system obtained in the step S104;
and S106, converting the three-dimensional point set obtained in the step S105 into point cloud data, and storing the point cloud data into a pcd file.
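The DepthFrame transmission of steps S102-S103 can be pictured with a minimal server-side sketch. The frame layout below (a fixed header carrying the rig-to-world matrix and the image size, followed by 16-bit depth values) is an assumption made only for illustration; the actual plug-in format is not specified here.

```python
import socket
import struct
import numpy as np

HEADER_FMT = "<16f2i"              # assumed layout: 4x4 float matrix + imgWidth + imgHeight
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def receive_depth_frame(conn):
    header = struct.unpack(HEADER_FMT, recv_exact(conn, HEADER_SIZE))
    T_rig2world = np.array(header[:16], dtype=np.float32).reshape(4, 4)
    img_w, img_h = header[16], header[17]
    depth_bytes = recv_exact(conn, img_w * img_h * 2)          # uint16 per pixel
    depth = np.frombuffer(depth_bytes, dtype=np.uint16).reshape(img_h, img_w)
    return T_rig2world, depth

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9090))     # port is an arbitrary choice for illustration
server.listen(1)
conn, _ = server.accept()
T_rig2world, depth = receive_depth_frame(conn)
```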
Further, in step S104, the three-dimensional coordinates in the sensor coordinate system are:
(x, y, z)^T = F(u, v)
(X, Y, Z)^T_cam = D · (x, y, z)^T / ‖(x, y, z)^T‖
wherein (X, Y, Z)^T_cam are the coordinates in the sensor coordinate system; (u, v)^T represents the pixel coordinates of any point on the two-dimensional depth image; F represents a mapping relation determined by the internal parameters of the sensor, which maps (u, v)^T to the three-dimensional coordinates (x, y, z) with z = 1.
Further, in step S105, the coordinates of each point are:
(X, Y, Z)^T_world = T_rig2world · T_cam2rig · (X, Y, Z)^T_cam
(X, Y, Z)^T_World = T_world2World · (X, Y, Z)^T_world
wherein (X, Y, Z)^T_World and (X, Y, Z)^T_cam respectively represent the coordinates in the standard world coordinate system and in the sensor coordinate system; T_cam2rig is the transformation matrix from the sensor coordinate system to the Hololens device origin, related to the position of the sensor on the device; T_rig2world is the transformation matrix to the origin of the Hololens world coordinate system defined by the Hololens device at program startup; T_world2World represents the transformation from the Hololens world coordinate system to the standard world coordinate system.
Specifically, step S5 specifically includes:
s501, preprocessing the point cloud model in the step S4, removing outliers and noise points, traversing each point in the point cloud model, and calculating the geometric center and the model size of the part;
s502, selecting 30 scenes from a common indoor scene data set SUNRGBD, and extracting point cloud models of the 30 scenes;
S503, making the part point cloud model of step S501 move and rotate randomly within the scene point cloud models extracted in step S502, taking the part point cloud and the scene point cloud after each transformation as a synthesized point cloud model, and calculating the transformed part point cloud and the 8 vertices of its 3D bounding box;
S504, searching for the points inside the 3D bounding box of step S503, calculating the distance between each point and the geometric center of the part, and storing it as a vote.
S505, according to the point cloud model synthesized in step S503, using a two-dimensional array obb[obj_num, 10] to represent the part categories and spatial poses, wherein obj_num represents the number of objects present in the synthesized point cloud scene, obb[i, 0:3] indicates the center of the part after transformation, obb[i, 3:6] represents the length, width and height of the object, obb[i, 6:9] indicates the angles through which the object has been rotated about the x-axis, y-axis and z-axis respectively, and obb[i, 9] represents the semantic category sem_class of the part; the two-dimensional array is saved as an obb.npz file;
and S506, downsampling the point cloud model synthesized in step S503 by the FPS (farthest point sampling) method to finally obtain 20000 points, and storing the 20000 points as a pc. file.
Further, in step S501, the geometric center of the part and the model size are:
center = ((x_min + x_max)/2, (y_min + y_max)/2, (z_min + z_max)/2)
(l, w, h) = ((x_max - x_min)/2, (y_max - y_min)/2, (z_max - z_min)/2)
wherein x_min, y_min, z_min, x_max, y_max, z_max represent the boundary of the model point cloud, and l, w, h are defined as half the length, width and height of the model.
Further, in step S503, the point cloud data Pcd' after transformation is:
Pcd′=R·Pcd+T
where Pcd denotes the point cloud data before transformation, R denotes a matrix related to rotation, and T denotes a matrix related to translation.
Specifically, step S6 specifically includes:
S601, the improved Votenet network adopts Pointnet++ as the backbone, and the grouping radii of the Set Abstraction layers in Pointnet++ are set to factor × [0.2, 0.4, 0.8, 1.0], where factor is a scale factor related to the size of the part to be detected, so that the Set Abstraction layers can better acquire the local key points and feature vectors of the part;
S602, modifying the one-dimensional convolutional layer that generates the prediction feature vector in the Proposal Net of the Votenet network to Conv1d(128, 2+3+num_heading_bin×6+num_size_cluster×4+num_class, 1), so that on the original basis the rotation angles of the part in three directions are also predicted;
S603, obtaining the accurate spatial position and rotation direction of the part according to the prediction result obtained in step S602, segmenting the part from the original point cloud, obtaining the key points obj_xyz and feature vectors obj_feature of the point cloud from step S601, finding the corresponding key points in the part template point cloud through similarity evaluation of the feature vectors, and registering them by the ICP method to obtain a secondary calibration matrix T_calib.
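A PyTorch-style sketch of the two network modifications in steps S601-S602 is given below. The layer names, the value of factor and the bin/cluster counts are illustrative assumptions; the sketch only shows how the Set Abstraction radii are scaled and how the proposal convolution's output width is enlarged to carry three rotation angles.

```python
import torch
import torch.nn as nn

factor = 0.5                                             # scale factor tied to part size (illustrative)
sa_radii = [factor * r for r in [0.2, 0.4, 0.8, 1.0]]    # grouping radii of the Set Abstraction layers

num_heading_bin = 12                                     # illustrative values
num_size_cluster = 10
num_class = 10

# Proposal head: 2 objectness scores + 3 center offsets + 6 per heading bin
# (two values per bin for each of the three rotation directions) + 4 per size
# cluster + per-class semantic scores
out_channels = 2 + 3 + num_heading_bin * 6 + num_size_cluster * 4 + num_class
proposal_conv = nn.Conv1d(128, out_channels, kernel_size=1)

features = torch.randn(8, 128, 256)                      # (batch, channels, proposals)
predictions = proposal_conv(features)                    # (8, out_channels, 256)
```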
Specifically, step S7 specifically includes:
s701, a user wears a Hololens device, acquires depth data of an environment through a depth sensor and sends the depth data to a remote server in a data frame DepthFrame mode;
s702, the remote server acquires and analyzes the depth data frame DepthFrame in the step S701, and a point cloud model of the environment is constructed;
S703, filtering the original point cloud model converted in step S702, setting distance thresholds clamp_min = 0.2 and clamp_max = 0.8, and removing points closer to the user than clamp_min or farther than clamp_max as invalid points;
S704, inputting the point cloud model processed in step S703 into the Votenet network trained in step S6, and outputting a prediction result comprising the geometric center of the object in space, the size of the model, the rotation angle rotation_angle of the model and the semantic category sem_class;
S705, obtaining the pose of the corresponding part in space according to the prediction result of step S704, selecting the point cloud in this region, and obtaining the calibrated spatial pose through ICP (Iterative Closest Point) registration;
s706, updating the result of the step S705 in real time by using the publish-subscribe function of the data middleware Redis, and when the pose change of the part exceeds a threshold value, the server publishes the spatial pose to the Redis, and the value in the Hololens device is updated accordingly.
Another technical solution of the present invention is an assembly guidance system based on Hololens depth data, comprising:
the acquisition module is used for converting the depth data of the real environment acquired by the Hololens equipment into a point cloud model under a world coordinate system;
the marking module marks the point cloud model obtained by the acquisition module and converts the point cloud into a standard data set format according to marking information;
the three-dimensional module is used for constructing a three-dimensional model of the part to be assembled;
the conversion module is used for converting the three-dimensional model of the part constructed by the three-dimensional module into a point cloud model;
the synthesis module is used for carrying out space coordinate transformation on the part point cloud model converted by the conversion module, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
the training module is used for inputting the standard data set converted by the labeling module and the standard data set synthesized by the synthesis module into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method;
the detection module is used for enabling a user to wear the Hololens equipment, transmitting the depth data of the real scene to the remote server, preprocessing the data by the remote server, inputting the preprocessed data into the network trained by the training module, and outputting a detection result;
and the guidance module is used for acquiring a detection result output by the detection module by the Hololens equipment, and displaying corresponding auxiliary assembly information to a user in a holographic image form to realize assembly guidance.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to an assembly guidance method based on Hololens depth data, which is characterized in that a point cloud model of a scene is restored by utilizing original depth data acquired by a sensor of the Hololens, point cloud is used as input, and the type and the spatial pose of a part to be assembled in the scene are detected through a Votenset network and an ICP method, so that corresponding assembly guidance information is provided for an assembler; the method does not need to preset markers or extra sensors, can fully exert the advantages of easy carrying and liberation of two hands of the Hololens equipment, realizes flexible assembly guidance of different stations in different scenes, and has good universality.
Furthermore, through the Hololens research mode, access to the depth sensor is obtained and a data transmission plug-in is developed to transmit the real-time depth image of the scene to the background server; the background server converts the depth image into a point cloud model through the imaging principle, so that subsequent detection can take the real-time scene point cloud as input, which improves the timeliness of the assembly guidance.
Furthermore, the point cloud data are reconstructed from the depth image; representing the real scene by points in three-dimensional space better reflects the geometric shape of object surfaces and is convenient as input to the subsequent detection network.
Furthermore, in the actual assembly process the operator is usually moving, so the sensor coordinate system changes continuously. Setting a standard world coordinate system and converting the points into it guarantees the consistency of the point cloud coordinates. At the same time, the point cloud obtained from a single depth frame is usually sparse; converting the point clouds into a unified coordinate system makes it convenient to merge multi-frame data, so that the point cloud is more complete.
Furthermore, on the basis of labeling the real scene point cloud, the 3D model of the part to be detected is converted into a point cloud, the part point cloud is randomly translated and rotated while its 3D bounding box is calculated to generate labeling information, and the result is synthesized with multiple scene point clouds; the real data and the synthesized data together form the final data set, which overcomes the difficulty of constructing and labeling a data set of assembly parts.
Further, for the synthetic data, the data volume is large and the random transformation of the part point cloud is not visually obvious, making manual labeling difficult; by recording the geometric center and size of the part point cloud and applying the rotation and translation matrices of each transformation, the corresponding labeling information can be obtained accurately.
Furthermore, by modifying the 3D target detection network Votenet, the accurate position of the part in three-dimensional space and its approximate rotation about the x, y and z axes can be detected; on this basis, key points and feature vectors are obtained from the predicted region, corresponding points in the part template point cloud are found through similarity evaluation, and the ICP method performs finer registration to obtain a more accurate spatial pose of the part.
Furthermore, in the actual assembly process the user wears the Hololens and transmits real-time depth data of the observed scene to the background server; the server processes the data, inputs it into the detection network, and compares the current detection result with the previous one. When the change exceeds a threshold, the part pose has changed significantly, the server publishes the new result to Redis, and the Hololens obtains the updated value, which reduces redundant transmission and makes the detection result more stable while ensuring real-time performance.
In conclusion, the invention develops a Hololens real-time depth data transmission plug-in and applies the improved Votenet network to the field of assembly part detection; the spatial pose of the assembly part is detected from point cloud data converted from the depth data, so that auxiliary assembly information can be effectively pushed to the assembler in the form of holographic images, realizing assembly guidance and improving assembly efficiency.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flowchart of the assembly guidance method based on Hololens depth data according to the present invention;
FIG. 2 is a partial point cloud data map of a data set constructed in accordance with the present invention, wherein (a) is real environment point cloud data and (b) is synthetic point cloud data;
FIG. 3 is a diagram of the prediction based on the Hololens point cloud, in which (a) is the initial prediction result and (b) is the result after calibration;
fig. 4 is a diagram of pushing related assembly guidance information based on the prediction result, wherein (a) is a detection result of the fan, (b) is assembly information related to the fan, and (c) is component information for the next assembly of the fan.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprises" and/or "comprising" indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
The invention provides an assembly guidance method based on Hololens depth data, which comprises the steps of firstly starting an augmented reality application of a Hololens device and transmitting depth data of a scene to a server; then converting the depth data into point cloud data by the server, inputting the point cloud data into a three-dimensional target detection network, predicting the category and the spatial pose of the part, and performing accurate calibration by utilizing ICP (inductively coupled plasma); and the Hololens equipment obtains a final prediction result, and pushes the most appropriate auxiliary assembly information to the assembly personnel in a holographic image form by combining a pre-constructed knowledge graph, so that the assembly guidance effect is realized. The invention does not need any preset marker and sensor, can realize the assembly guidance of different parts and different stations, and simultaneously takes the real-time data of the environment as input, thereby having better timeliness.
Hololens is a head-mounted augmented reality device developed by Microsoft. It can be used completely standalone without cable connections; wearing the Hololens, the user obtains a holographic experience with the real world as the carrier. The Hololens itself carries multiple sensors, including 4 visible-light cameras, 2 infrared cameras and 1 ToF depth sensor, and therefore has environmental perception capability.
Referring to fig. 1, the assembly guidance method based on the Hololens depth data of the present invention includes the following steps:
s1, converting the depth data of the real environment collected by the Hololens equipment into a point cloud model under a world coordinate system;
S101, obtaining depth sensor access rights through the Hololens device research mode, developing a depth data transmission plug-in in C++, importing the plug-in into a Unity3D augmented reality application, and deploying it to the Hololens device;
s102, starting an application, recording the position of the equipment at the moment as the origin of a Hololens world coordinate system, setting a depth sensor into a LongThrow working mode, wearing the Hololens equipment by a user to move in a scene, and after establishing socket connection with a server, sending a real-time depth data frame DepthFrame to the server by the Hololens;
S103, analyzing the DepthFrame in step S102 to obtain the space transformation matrix T_rig2world from the current device origin rigNode to the world coordinate system origin, the image width imgWidth and image height imgHeight of the depth image, and the depth data D corresponding to each pixel:
D = ‖(X, Y, Z)^T_cam‖
wherein the depth data D is the distance from the three-dimensional space point (X, Y, Z)^T_cam in the sensor coordinate system to the sensor.
S104, any point (u, v)^T on the two-dimensional depth image can be converted into three-dimensional coordinates in the sensor coordinate system:
(x, y, z)^T = F(u, v)
(X, Y, Z)^T_cam = D · (x, y, z)^T / ‖(x, y, z)^T‖
wherein (X, Y, Z)^T_cam are the coordinates in the sensor coordinate system; (u, v)^T represents the pixel coordinates of any point on the two-dimensional depth image; F represents a mapping relation determined by the internal parameters of the sensor, which maps (u, v)^T to the three-dimensional coordinates (x, y, z) with z = 1;
and S105, calculating the coordinates of each point in a standard coordinate system (z-axis up, y-axis forward, x-axis to the right) through coordinate transformation:
(X, Y, Z)^T_world = T_rig2world · T_cam2rig · (X, Y, Z)^T_cam
(X, Y, Z)^T_World = T_world2World · (X, Y, Z)^T_world
wherein (X, Y, Z)^T_World and (X, Y, Z)^T_cam respectively represent the coordinates in the standard world coordinate system and in the sensor coordinate system; T_cam2rig is the transformation matrix from the sensor coordinate system to the Hololens device origin, related to the position of the sensor; T_rig2world is the transformation matrix to the origin of the Hololens world coordinate system defined by the Hololens device at program startup; the Hololens world coordinate system defaults to y-axis up, -z-axis forward and x-axis to the right, and T_world2World represents its transformation into the standard world coordinate system;
and S106, storing the point cloud data obtained through the depth data conversion into a pcd file.
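Steps S104-S106 can be summarized in the following sketch, assuming a precomputed lookup table uv_to_xy that realizes the sensor mapping F (pixel to unit-plane point with z = 1), depth values in millimetres, and 4x4 homogeneous transform matrices; Open3D is used here only as a convenient way to write the .pcd file.

```python
import numpy as np
import open3d as o3d

def depth_to_world_cloud(depth, uv_to_xy, T_cam2rig, T_rig2world, T_world2World):
    h, w = depth.shape
    D = depth.reshape(-1).astype(np.float32) / 1000.0        # assume depth is given in mm
    rays = np.concatenate([uv_to_xy.reshape(-1, 2),
                           np.ones((h * w, 1), np.float32)], axis=1)   # (x, y, 1) per pixel
    # Scale each unit-plane ray so that the point's distance to the sensor equals D
    pts_cam = rays * (D / np.linalg.norm(rays, axis=1))[:, None]
    pts_cam = pts_cam[D > 0]                                  # drop invalid pixels
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1), np.float32)], axis=1)
    # Chain the extrinsics: sensor -> rig origin -> Hololens world -> standard world
    T = T_world2World @ T_rig2world @ T_cam2rig
    return (pts_h @ T.T)[:, :3]

# Saving the reconstructed cloud as a .pcd file (step S106):
# pts = depth_to_world_cloud(depth, uv_to_xy, T_cam2rig, T_rig2world, T_world2World)
# pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts.astype(np.float64)))
# o3d.io.write_point_cloud("scene.pcd", pcd)
```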
S2, marking the point cloud model by using marking software, and converting the point cloud into a standard data set format according to marking information;
The parts in the point cloud are labeled with 3D bounding boxes using labeling software so as to obtain the geometric center, model size, rotation angle and semantic category sem_class of each part, and the result is finally stored in the standard format of the data set.
S3, constructing a three-dimensional model of the part to be assembled;
three-dimensional modeling software such as SolidWorks is used to build a three-dimensional model of the part.
S4, converting the part three-dimensional model in the step S3 into a point cloud model;
and converting the three-dimensional model of the part into a point cloud model through pcl.
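Step S4 converts the CAD model to a point cloud through pcl; an equivalent sketch using Open3D as a stand-in (with an illustrative file name and sample count) is:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("part.stl")                   # exported part model (file name is illustrative)
part_pcd = mesh.sample_points_uniformly(number_of_points=10000)  # sample surface points
o3d.io.write_point_cloud("part.pcd", part_pcd)
```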
S5, performing space coordinate transformation on the point cloud model of the part in the step S4, synthesizing the point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
S501, preprocessing the point cloud model in step S4, removing outliers and noise points, traversing each point in the point cloud, and calculating the geometric center of the part and the model size:
center = ((x_min + x_max)/2, (y_min + y_max)/2, (z_min + z_max)/2)
(l, w, h) = ((x_max - x_min)/2, (y_max - y_min)/2, (z_max - z_min)/2)
wherein x_min, y_min, z_min, x_max, y_max, z_max represent the boundary of the model point cloud, and l, w, h are defined as half the length, width and height of the model.
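The center and half-size computation of step S501 amounts to the following, assuming the part point cloud is an (N, 3) NumPy array:

```python
import numpy as np

def center_and_half_extent(points):
    mins = points.min(axis=0)            # x_min, y_min, z_min
    maxs = points.max(axis=0)            # x_max, y_max, z_max
    center = (mins + maxs) / 2.0         # geometric center of the part
    half_extent = (maxs - mins) / 2.0    # l, w, h: half the length, width, height
    return center, half_extent
```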
S502, since the real environment point cloud data obtained in step S1 are limited, selecting 30 scenes from the common indoor scene data set SUNRGBD, extracting their point cloud models and synthesizing them with the part point cloud model;
S503, making the part model move and rotate randomly in the scene to achieve data enhancement, and calculating the transformed point cloud and the 8 vertices of its 3D bounding box:
Pcd′ = R·Pcd + T
box_corners′ = R·box_corners + T
wherein Pcd and Pcd′ respectively represent the point cloud data before and after transformation; box_corners′ represents the vertices of the transformed 3D bounding box of the part; R represents the rotation matrix, where α, β and γ are the rotation angles about the x-axis, y-axis and z-axis respectively and range over [0, 2π]; T represents the translation matrix, whose values are the distances translated in the three directions.
S504, searching for the points inside the 3D bounding box, calculating the distance between each point and the geometric center of the part, and storing it as a vote.
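Steps S503-S504 can be sketched as below. The rotation is built with SciPy from random Euler angles, the translation range is illustrative, and center and half_extent come from step S501.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def synthesize(part_pcd, scene_pcd, center, half_extent):
    """Randomly place the part in a scene and derive box corners and votes."""
    half_extent = np.asarray(half_extent, dtype=float)
    alpha, beta, gamma = np.random.uniform(0, 2 * np.pi, size=3)
    R = Rotation.from_euler("xyz", [alpha, beta, gamma]).as_matrix()
    T = np.random.uniform(-1.0, 1.0, size=3)             # translation range is illustrative

    part_t = part_pcd @ R.T + T                           # Pcd' = R . Pcd + T
    center_t = R @ center + T                             # transformed geometric center

    # 8 corners of the part's 3D bounding box, transformed together with the part
    l, w, h = half_extent
    offsets = np.array([[sx * l, sy * w, sz * h]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    box_corners = (center + offsets) @ R.T + T

    merged = np.vstack([scene_pcd, part_t])

    # Points inside the transformed box vote for the offset to the part center
    local = (merged - T) @ R                              # back to the untransformed frame
    inside = np.all(np.abs(local - center) <= half_extent, axis=1)
    votes = center_t - merged[inside]
    return merged, box_corners, inside, votes
```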
S505, using a two-dimensional array obb[obj_num, 10] to represent the part categories and spatial poses, wherein obj_num represents the number of objects present in the synthesized point cloud scene, obb[i, 0:3] denotes the center of the part after transformation, obb[i, 3:6] represents the length, width and height of the object, obb[i, 6:9] indicates the angles through which the object has been rotated about the x-axis, y-axis and z-axis respectively, and obb[i, 9] represents the semantic category sem_class of the part; the two-dimensional array is saved as an obb.npz file;
and S506, downsampling the synthesized point cloud by FPS (farthest point sampling) to finally obtain 20000 points, and storing the 20000 points as a pc. file.
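The farthest point sampling of step S506 can be written directly in NumPy; this naive O(N·M) version is only a sketch of the idea.

```python
import numpy as np

def farthest_point_sampling(points, n_samples=20000):
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)                 # arbitrary seed point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)                     # distance to the chosen set
        selected[i] = np.argmax(dist)                  # pick the farthest remaining point
    return points[selected]
```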
S6, inputting the data sets constructed in the steps S2 and S5 into an improved Votenet network for training, storing a final training result, and performing auxiliary calibration by combining an ICP method;
S601, the Votenet network adopts Pointnet++ as the backbone, and the grouping radii of the Set Abstraction layers in Pointnet++ are set to factor × [0.2, 0.4, 0.8, 1.0], where factor is a scale factor related to the size of the part to be detected, so that the Set Abstraction layers can better acquire the local key points and feature vectors of the part;
S602, modifying the one-dimensional convolutional layer that generates the prediction feature vector in the Proposal Net of Votenet to Conv1d(128, 2+3+num_heading_bin×6+num_size_cluster×4+num_class, 1), so that on the original basis the rotation angles of the part in three directions are also predicted;
S603, obtaining the accurate spatial position and approximate rotation direction of the part according to the prediction result, segmenting this region from the original point cloud, obtaining the key points obj_xyz and feature vectors obj_feature of the part point cloud from step S601, finding the corresponding key points in the part template point cloud through similarity evaluation of the feature vectors, and registering them by the ICP method to obtain a secondary calibration matrix T_calib;
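The secondary calibration of step S603 can be pictured with Open3D's ICP, assuming the network prediction supplies an initial 4x4 transform T_init and the part template is available as a .pcd file; the correspondence distance is an illustrative value.

```python
import numpy as np
import open3d as o3d

def calibrate(region_points, template_path, T_init):
    """Refine the predicted pose by ICP between the segmented region and the template."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(region_points))
    target = o3d.io.read_point_cloud(template_path)
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.02,               # illustrative threshold (metres)
        init=np.asarray(T_init, dtype=np.float64),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                        # secondary calibration matrix T_calib
```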
S7, the user wears a Hololens device, the depth data of the real scene are transmitted to a remote server, the server preprocesses the data and inputs the preprocessed data into the network trained in the step S6, and a detection result is output;
s701, a user wears a Hololens, an application program is opened, the environment with the parts to be assembled is observed, and a depth sensor collects depth data of the environment and sends the depth data to a remote server;
s702, analyzing the DepthFrame by the server, and constructing a point cloud model of the environment according to the method in the step S1;
s703, filtering the converted original point cloud, removing noise points and outliers, considering the distance between a part and a user during assembly, setting a distance threshold value clamp _ min to be 0.2 and a distance threshold value clamp _ max to be 0.8, and removing points too close to or too far away from the user as invalid points;
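The distance filter of step S703 is a simple mask over the point cloud, assuming the points are expressed relative to the current position of the Hololens wearer:

```python
import numpy as np

def filter_by_distance(points, clamp_min=0.2, clamp_max=0.8):
    dist = np.linalg.norm(points, axis=1)               # distance of each point to the user
    mask = (dist >= clamp_min) & (dist <= clamp_max)    # keep points within the working range
    return points[mask]
```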
S704, inputting the processed point cloud into the trained Votenet network and outputting a prediction result comprising the geometric center of the object in space, the size of the model, the rotation angle rotation_angle of the model and the semantic category sem_class;
s705, obtaining a calibrated spatial pose through ICP registration;
s706, updating the result of the step S705 in real time by using the publish-subscribe function of the data middleware Redis, and when the pose change of the part exceeds a certain threshold, the server publishes the spatial pose to the Redis, and the value in the Hololens is updated accordingly.
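The publish side of step S706 might look like the following sketch; the channel name, pose-vector layout and threshold are illustrative assumptions, not values fixed by the method.

```python
import json
import numpy as np
import redis

r = redis.Redis(host="localhost", port=6379)
last_pose = None
THRESHOLD = 0.05                                   # minimum change worth publishing (illustrative)

def maybe_publish(pose):
    """Publish the part pose to Redis only when it has changed enough."""
    global last_pose
    pose = np.asarray(pose, dtype=float)           # e.g. [x, y, z, rx, ry, rz, sem_class]
    if last_pose is None or np.linalg.norm(pose - last_pose) > THRESHOLD:
        r.publish("part_pose", json.dumps(pose.tolist()))
        last_pose = pose
```

The Hololens application subscribes to the same channel and refreshes its hologram whenever a new message arrives.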
And S8, obtaining the returned detection result by the Hololens, and displaying the corresponding auxiliary assembly information to the user in the form of a holographic image.
An assembly semantic model is established in advance by using a knowledge graph, the assembly structure, the assembly process and the assembly relation among parts are described, the category of the current part is determined according to semantic information in a detection result, and the assembly state of the part is judged according to the space pose of the part and the distance between the parts, so that the most possible assembly process or other auxiliary assembly information in the next step is displayed to a user in a holographic image form.
When the augmented reality application developed for Hololens is opened, the assembly state of the current part can be inferred from the part detection result and the pre-constructed knowledge graph of the assembly semantic model, and the most appropriate guidance information is pushed to the assembler in the form of holographic images. The assembler does not need to search and screen manually, information acquisition is more efficient, the display form is more intuitive, and assembly guidance efficiency can be effectively improved.
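How the pre-constructed knowledge graph could be queried from a detection result is sketched below with a toy dictionary; the structure, part names and file names are hypothetical and only illustrate the lookup, not the actual assembly semantic model.

```python
# Hypothetical assembly semantic model: each part maps to its next step,
# required components and the hologram asset to display.
assembly_graph = {
    "fan": {"next_step": "mount impeller", "requires": ["impeller", "bolt_m4"],
            "hologram": "fan_step3.fbx"},
    "impeller": {"next_step": "fasten to shaft", "requires": ["bolt_m4"],
                 "hologram": "impeller_step1.fbx"},
}

def guidance_for(detection):
    """Map a detection (semantic class + pose) to the assembly info to display."""
    entry = assembly_graph.get(detection["sem_class"])
    if entry is None:
        return None
    return {"pose": detection["pose"], **entry}
```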
In another embodiment of the present invention, an assembly guidance system based on the Hololens depth data is provided, which can be used to implement the assembly guidance method based on the Hololens depth data, and specifically, the assembly guidance system based on the Hololens depth data includes an acquisition module, a labeling module, a three-dimensional module, a conversion module, a synthesis module, a training module, a detection module, and a guidance module.
The acquisition module is used for converting depth data of a real environment acquired by the Hololens equipment into a point cloud model under a world coordinate system;
the marking module is used for marking the point cloud model obtained by the acquisition module by using marking software and converting the point cloud into a standard data set format according to marking information;
the three-dimensional module is used for constructing a three-dimensional model of the part to be assembled;
the conversion module is used for converting the three-dimensional model of the part constructed by the three-dimensional module into a point cloud model;
the synthesis module is used for carrying out space coordinate transformation on the part point cloud model converted by the conversion module, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
the training module is used for inputting the standard data set converted by the labeling module and the standard data set synthesized by the synthesis module into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method;
the device comprises a detection module, a remote server and a training module, wherein the detection module is used for transmitting the depth data of a real scene to the remote server by the Hololens device, preprocessing the depth data by the remote server, inputting the preprocessed depth data into a Votenset network trained by the training module, and outputting a detection result;
and the guidance module is used for acquiring a detection result output by the detection module by the Hololens equipment, and displaying corresponding auxiliary assembly information to a user in a holographic image form to realize assembly guidance.
In yet another embodiment of the present invention, a terminal device is provided that includes a processor and a memory for storing a computer program comprising program instructions, the processor being configured to execute the program instructions stored by the computer storage medium. The processor may be a Central Processing Unit (CPU), or may be another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computing core and control core of the terminal, adapted to implement one or more instructions, and specifically adapted to load and execute one or more instructions to implement the corresponding method flow or function. The processor of the embodiment of the invention can be used for the operation of the assembly guidance method based on Hololens depth data, comprising:
converting depth data of a real environment collected by a Hololens device into a point cloud model under a world coordinate system; marking the point cloud model by using marking software, and converting the point cloud into a standard data set format according to marking information; constructing a three-dimensional model of a part to be assembled; converting the three-dimensional model of the part into a point cloud model; carrying out space coordinate transformation on the converted part point cloud model, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format; inputting the converted standard data set and the synthesized standard data set into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method; the method comprises the steps that a Hololens device transmits depth data of a real scene to a remote server, the remote server preprocesses the depth data, inputs the depth data into a trained Votenet network, and outputs a detection result; and the Hololens equipment obtains the detection result, and displays the corresponding auxiliary assembly information to the user in the form of holographic images to realize assembly guidance.
In still another embodiment of the present invention, the present invention further provides a storage medium, specifically a computer-readable storage medium (Memory), which is a Memory device in a terminal device and is used for storing programs and data. It is understood that the computer readable storage medium herein may include a built-in storage medium in the terminal device, and may also include an extended storage medium supported by the terminal device. The computer-readable storage medium provides a storage space storing an operating system of the terminal. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in the memory space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory.
One or more instructions stored in a computer-readable storage medium may be loaded and executed by a processor to implement the corresponding steps of the above-described embodiments with respect to the Hololens depth data-based assembly guidance method; one or more instructions in the computer-readable storage medium are loaded by the processor and perform the steps of:
converting depth data of a real environment collected by a Hololens device into a point cloud model under a world coordinate system; marking the point cloud model by using marking software, and converting the point cloud into a standard data set format according to marking information; constructing a three-dimensional model of a part to be assembled; converting the three-dimensional model of the part into a point cloud model; carrying out space coordinate transformation on the converted part point cloud model, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format; inputting the converted standard data set and the synthesized standard data set into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method; the method comprises the steps that a Hololens device transmits depth data of a real scene to a remote server, the remote server preprocesses the depth data, inputs the depth data into a trained Votenet network, and outputs a detection result; and the Hololens equipment obtains the detection result, and displays the corresponding auxiliary assembly information to the user in the form of holographic images to realize assembly guidance.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the invention provides an assembly guidance method based on Hololens depth data, and a data set is constructed by labeling a real environment point cloud and synthesizing data. The process of marking the real environment point cloud comprises the steps of wearing Hololens to move in a scene, observing a part to be assembled placed on a desktop or a handheld part, transmitting depth data to a server, converting the depth data into the point cloud, and marking the point cloud by using marking software. The data synthesis process comprises the steps of selecting 30 scenes in the SUNRGBD data set, extracting point cloud models of the scenes, converting parts to be assembled into the point cloud models, enabling the point clouds of the parts to randomly move and rotate in the scene point clouds, and calculating corresponding labeling information. Finally, 2000 data sets of the assembly parts at different poses in different point cloud scenes are constructed, and part of point cloud data are shown in fig. 2.
The data set is then input into the three-dimensional target detection network for training. The Votenet network learns the local and global features of the different object point clouds in the scene through Pointnet++, generates key points and feature vectors, generates votes through the voting network to predict the offset to the geometric center of each object, predicts the geometric center through clustering, and finally outputs the geometric center, the length, width and height dimensions and the rotation angles in three directions of the object. The test results, shown in Table 1, indicate that the spatial position of each part of the assembly body can be accurately detected.
[Table 1: detection results for the parts of the assembly body]
However, in some cases the prediction of the rotation angle is not ideal. The standard Votenet network only considers rotation about the z-axis and judges the accuracy of a prediction by whether the IoU between the predicted 3D bounding box and the real bounding box exceeds a threshold; once rotations in three directions are considered, the IoU can no longer be computed so simply, so the accuracy of a prediction is instead judged by numbering the vertices of the part's 3D bounding box from 1 to 8 and checking whether the Euclidean distance of the corresponding 4 points exceeds the threshold. Taking the impeller detection result as an example, the average prediction accuracy considering both position and rotation is 53.7%; based on the initial prediction, the points of the object's probable region and their features are segmented from the whole point cloud scene, corresponding points are selected from the part template point cloud through feature-vector similarity calculation, and further registration with the ICP method raises the average prediction accuracy to 83.4%. The initial prediction result and the calibrated result are shown in figure 3.
The publish/subscribe function of Redis is used to update the final detection result in real time. The detection results of two successive frames are represented as vectors and their Euclidean distance is calculated; a threshold ψ is set, and if the distance exceeds the threshold, the part has changed significantly, so the latest detection result is published to Redis and the information is pushed to the Hololens augmented reality application subscribed to Redis, letting the user obtain the latest information including the part categories and their spatial poses. According to the pre-constructed knowledge graph, the current assembly state of the part can be inferred and the most reasonable auxiliary assembly information is pushed to the assembler, realizing assembly guidance with good timeliness and intuitiveness, as shown in fig. 4.
In summary, according to the assembly guidance method and system based on the Hololens depth data, parts are detected by using the environmental data acquired by equipment, no marker needs to be preset, no additional sensor needs to be arranged, the assembly guidance method and system can be applied to different assembly stations of different assembly objects, and the assembly guidance method and system are more universal and flexible; the environment real-time data is used as input, so that the timeliness is good; auxiliary assembly information is pushed in a holographic image mode, so that assembly guidance is more visual and efficient.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. An assembly guidance method based on Hololens depth data is characterized by comprising the following steps:
s1, converting the depth data of the real environment collected by the Hololens equipment into a point cloud model under a world coordinate system;
s2, labeling the point cloud model obtained in the step S1, and converting the point cloud into a standard data set format according to labeling information;
s3, constructing a three-dimensional model of the part to be assembled;
s4, converting the part three-dimensional model constructed in the step S3 into a point cloud model;
s5, performing space coordinate transformation on the part point cloud model converted in the step S4, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
s6, inputting the standard data set converted in the step S2 and the standard data set synthesized in the step S5 into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method;
s7, the Hololens device transmits the depth data of the real scene to a remote server, the remote server preprocesses the depth data and inputs the depth data into the Votenet network trained in the step S6, and a detection result is output;
and S8, acquiring the detection result output in the step S7 by the Hololens equipment, and displaying the corresponding auxiliary assembly information to a user in the form of a holographic image to realize assembly guidance.
2. The method according to claim 1, wherein step S1 is specifically:
S101, obtaining depth sensor access rights through the Hololens device research mode, developing a depth data transmission plug-in in C++, importing the plug-in into a Unity3D augmented reality application, and deploying it to the Hololens device;
s102, starting an application, recording the position of the Hololens equipment at the moment as an origin of a Hololens world coordinate system, enabling a depth sensor to work in a LongThrow working mode, enabling the Hololens equipment to move in a scene, and after socket connection with a server is established, enabling the Hololens equipment to send a real-time depth data frame DepthFrame to a background server;
s103, analyzing the DepthFrame in the step S102 to obtain the coordinates from the origin rigNode to the world of the current equipmentSpace transformation matrix T of originrig2worldThe image width imgWidth and the image height imgHeight of the depth image and the depth data D corresponding to each pixel point;
s104, according to the depth data obtained in the step S103, any point (u, v) on the two-dimensional depth image is processedTConverting into three-dimensional coordinates under a sensor coordinate system:
s105, calculating the coordinates of each point in a standard coordinate system through coordinate transformation according to the three-dimensional coordinates in the sensor coordinate system obtained in the step S104;
and S106, converting the three-dimensional point set obtained in the step S105 into point cloud data and storing the point cloud data.
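
As a purely illustrative sketch of the depth-frame transmission in steps S102 and S103, the following Python fragment shows how a background server could receive and parse one frame over a socket. The wire format assumed here (a little-endian header with the image width and height, a 4x4 rig-to-world matrix in 32-bit floats, then 16-bit depth values) and the port number are assumptions made only for this sketch; the patent does not specify the DepthFrame layout.

    import socket
    import struct
    import numpy as np

    def recv_exact(conn, n):
        """Read exactly n bytes from the socket."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed while reading a frame")
            buf += chunk
        return buf

    def receive_depth_frame(conn):
        """Parse one hypothetical DepthFrame: size header, rig2world matrix, uint16 depth map."""
        img_width, img_height = struct.unpack("<II", recv_exact(conn, 8))
        T_rig2world = np.frombuffer(recv_exact(conn, 64), dtype="<f4").reshape(4, 4)
        depth = np.frombuffer(recv_exact(conn, img_width * img_height * 2), dtype="<u2")
        return T_rig2world, depth.reshape(img_height, img_width)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 9099))                 # port chosen arbitrarily for the sketch
    server.listen(1)
    conn, _ = server.accept()
    T_rig2world, depth_image = receive_depth_frame(conn)
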
3. The method according to claim 2, wherein in step S104, the three-dimensional coordinates in the sensor coordinate system are:
(x, y, z)^T = f((u, v)^T), with z = 1
(X, Y, Z)^T_cam = D·(x, y, z)^T
wherein (X, Y, Z)^T_cam denotes the coordinates under the sensor coordinate system; (u, v)^T represents the pixel coordinates of any point on the two-dimensional depth image; f represents a mapping relation determined by the internal parameters of the sensor, which maps (u, v)^T to the three-dimensional coordinate (x, y, z) with z = 1; and D is the depth value corresponding to the pixel (u, v).
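
A minimal sketch of the back-projection of claim 3, assuming the mapping f is supplied as a per-pixel lookup table on the z = 1 plane and that the depth data are in millimetres; both assumptions, and the simple depth scaling used below, are illustrative only.

    import numpy as np

    def depth_to_camera_points(depth_mm, uv2xy):
        """Back-project a depth image into the sensor coordinate system."""
        h, w = depth_mm.shape
        xy = uv2xy.reshape(-1, 2)                                    # f(u, v) on the z = 1 plane
        dirs = np.concatenate([xy, np.ones((h * w, 1))], axis=1)     # (x, y, 1) for every pixel
        d = depth_mm.reshape(-1, 1).astype(np.float64) / 1000.0      # assumed millimetres -> metres
        pts_cam = d * dirs                                           # (X, Y, Z)_cam = D · (x, y, z)
        return pts_cam[depth_mm.reshape(-1) > 0]                     # drop pixels without a depth return
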
4. The method according to claim 2, wherein in step S105, the coordinates of each point are:
(X, Y, Z)^T_World = T_world2World · T_rig2world · T_cam2rig · (X, Y, Z)^T_cam
wherein (X, Y, Z)^T_World and (X, Y, Z)^T_cam respectively represent the coordinates under the standard world coordinate system and under the sensor coordinate system; T_cam2rig represents the transformation matrix from the sensor coordinate system to the Hololens device origin; T_rig2world represents the transformation matrix from the device origin to the Hololens world coordinate system defined by the Hololens device at program startup; T_world2World represents the transformation from the Hololens world coordinate system to the standard world coordinate system.
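
A minimal sketch of the coordinate chain of claim 4 in homogeneous coordinates; the three 4x4 matrices are assumed to come from the parsed DepthFrame and the device calibration, and the identity matrices in the usage line are placeholders.

    import numpy as np

    def camera_to_world(pts_cam, T_cam2rig, T_rig2world, T_world2World):
        """(X, Y, Z)_World = T_world2World · T_rig2world · T_cam2rig · (X, Y, Z)_cam."""
        homog = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])   # N x 4 homogeneous points
        T = T_world2World @ T_rig2world @ T_cam2rig                    # combined 4 x 4 transform
        return (T @ homog.T).T[:, :3]

    # Usage with identity placeholders for the three transforms:
    pts_world = camera_to_world(np.zeros((5, 3)), np.eye(4), np.eye(4), np.eye(4))
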
5. The method according to claim 1, wherein step S5 is specifically:
S501, preprocessing the point cloud model in the step S4, traversing each point in the point cloud model, and calculating the geometric center and the model size of the part;
S502, selecting 30 scenes from the indoor scene data set SUNRGBD, and extracting the point cloud models of the 30 scenes;
S503, randomly translating and rotating the part point cloud model of the step S501 within the scene point cloud models extracted in the step S502, taking the part point cloud model and the scene point cloud model after each transformation as one synthesized point cloud model, and simultaneously calculating the 8 vertices of the 3D bounding box of the transformed part point cloud model;
S504, searching for the points inside the 3D bounding box of the step S503, and calculating the distance between these points and the geometric center of the part;
S505, according to the point cloud model synthesized in the step S503, using a two-dimensional array obb[obj_num, 10] to represent the part category and spatial pose, wherein obj_num represents the number of objects present in the synthesized point cloud scene, obb[i, 0:3] denotes the centre of the part after transformation, obb[i, 3:6] represents the length, width and height of the object, obb[i, 6:9] indicates the angles through which the object has been rotated about the x-axis, y-axis and z-axis respectively, and obb[i, 9] represents the semantic class sem_class of the part; and storing the two-dimensional array;
S506, downsampling the point cloud model synthesized in the step S503 by farthest point sampling (FPS) to finally obtain 20000 points, and storing them.
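
A minimal sketch of the synthesis of steps S503 to S506, assuming numpy arrays for the scene and part point clouds and using scipy to build a random rotation; the placement range, the obb row layout and the greedy farthest point sampling below are illustrative choices rather than the patent's exact implementation.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def farthest_point_sampling(pts, n_samples=20000):
        """Greedy FPS: repeatedly keep the point farthest from those already chosen."""
        n = pts.shape[0]
        n_samples = min(n_samples, n)
        chosen = np.zeros(n_samples, dtype=np.int64)
        chosen[0] = np.random.randint(n)
        dists = np.full(n, np.inf)
        for i in range(1, n_samples):
            dists = np.minimum(dists, np.linalg.norm(pts - pts[chosen[i - 1]], axis=1))
            chosen[i] = np.argmax(dists)
        return pts[chosen]

    def synthesize(scene_pts, part_pts, sem_class):
        """Place the part into the scene with a random pose and return (points, obb row)."""
        angles = np.random.uniform(0.0, 2.0 * np.pi, size=3)           # rotations about x, y, z
        R = Rotation.from_euler("xyz", angles).as_matrix()
        t = np.random.uniform(-1.0, 1.0, size=3)                       # random placement (assumed range)
        part_t = part_pts @ R.T + t                                    # Pcd' = R·Pcd + T as in claim 7
        size = part_pts.max(axis=0) - part_pts.min(axis=0)             # length, width, height of the part
        center = ((part_pts.max(axis=0) + part_pts.min(axis=0)) / 2) @ R.T + t   # transformed geometric centre
        obb_row = np.concatenate([center, size, angles, [sem_class]])  # obb[i, 0:10]
        merged = np.vstack([scene_pts, part_t])
        return farthest_point_sampling(merged), obb_row
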
6. The method according to claim 5, wherein in step S501, the geometric center and the model size of the part are specifically:
center = ((x_min + x_max)/2, (y_min + y_max)/2, (z_min + z_max)/2)
l = (x_max - x_min)/2, w = (y_max - y_min)/2, h = (z_max - z_min)/2
wherein x_min, y_min, z_min, x_max, y_max, z_max represent the boundaries of the model point cloud; l, w and h are defined as half the length, width and height of the model.
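
A minimal numpy sketch of the quantities of claim 6 for a part point cloud pts of shape (N, 3); the random array merely stands in for a real part model.

    import numpy as np

    pts = np.random.rand(1000, 3)                  # stand-in for the part point cloud
    mins, maxs = pts.min(axis=0), pts.max(axis=0)  # x_min..z_max, the point cloud boundaries
    center = (mins + maxs) / 2.0                   # geometric centre of the part
    l, w, h = (maxs - mins) / 2.0                  # half the length, width and height
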
7. The method according to claim 5, wherein in step S503, the transformed point cloud data Pcd' is:
Pcd′=R·Pcd+T
where Pcd denotes the point cloud data before transformation, R denotes a matrix related to rotation, and T denotes a matrix related to translation.
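
A minimal sketch of the rigid transform of claim 7; the composition order of the rotations about the x, y and z axes is an assumption, since the claim only states that R is a matrix related to rotation.

    import numpy as np

    def rotation_xyz(rx, ry, rz):
        """Rotation matrix composed from rotations about the x, y and z axes."""
        Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
        Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
        Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    Pcd = np.random.rand(500, 3)                   # stand-in point cloud, one point per row
    R = rotation_xyz(0.1, 0.2, 0.3)
    T = np.array([0.5, -0.2, 0.0])
    Pcd_transformed = Pcd @ R.T + T                # row-vector form of Pcd' = R·Pcd + T
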
8. The method according to claim 1, wherein step S6 is specifically:
S601, the improved Votenet network adopts Pointnet++ as the backbone, and the grouping radius of setAbstraction in Pointnet++ is set to factor × [0.2, 0.4, 0.8, 1.0], wherein factor is a scale factor related to the size of the part to be detected, so that setAbstraction can better acquire the local key points and feature vectors of the part;
S602, modifying the one-dimensional convolutional layer that generates the prediction feature vector in the Proposal Net of the Votenet network into Conv1d(128, 2+3+num_heading_bin×6+num_size_cluster×4+num_class, 1), so that the rotation angles of the part in three directions are predicted in addition to the original outputs;
S603, obtaining the accurate spatial position and rotation direction of the part according to the prediction result obtained in the step S602, segmenting the part from the original point cloud, obtaining the key points obj_xyz and the feature vectors obj_feature of the point cloud in the step S601, finding the corresponding key points in the part template point cloud through similarity evaluation of the feature vectors, registering them by the ICP method, and obtaining a secondary calibration matrix T_calib.
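
A minimal sketch of the secondary calibration of step S603, using the third-party Open3D library as one possible ICP implementation; the claim only specifies an ICP method, so the library choice, the 0.02 correspondence threshold and the identity initialization are assumptions.

    import numpy as np
    import open3d as o3d

    def secondary_calibration(segmented_pts, template_pts, init=np.eye(4), threshold=0.02):
        """Register segmented part points to the part template and return T_calib (4 x 4)."""
        source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(segmented_pts))
        target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(template_pts))
        result = o3d.pipelines.registration.registration_icp(
            source, target, threshold, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation
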
9. The method according to claim 1, wherein step S7 is specifically:
S701, a user wears a Hololens device, acquires the depth data of the environment through the depth sensor and sends the depth data to a remote server in the form of a data frame DepthFrame;
S702, the remote server acquires and parses the depth data frame DepthFrame of the step S701, and constructs a point cloud model of the environment;
S703, filtering the original point cloud model converted in the step S702, setting a distance threshold clamp_min of 0.2 and a distance threshold clamp_max of 0.8, and removing, as invalid points, the points whose distance from the user is less than clamp_min or greater than clamp_max;
S704, inputting the point cloud model processed in the step S703 into the Votenet network trained in the step S6, and outputting a prediction result, wherein the prediction result comprises the geometric center of the object in space, the size of the model, the rotation angle of the model and the semantic class sem_class;
S705, obtaining the pose of the corresponding part in space according to the prediction result of the step S704, selecting the point cloud in that region, and obtaining the calibrated spatial pose through ICP (iterative closest point) registration;
S706, updating the result of the step S705 in real time by using the publish-subscribe function of the data middleware Redis: when the pose change of the part exceeds a threshold, the server publishes the spatial pose to Redis, and the value in the Hololens device is updated accordingly.
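
A minimal sketch of the run-time behaviour of steps S703 and S706, assuming numpy point clouds and the redis-py client; the channel name, the JSON pose encoding and the change threshold are assumptions for illustration.

    import json
    import numpy as np
    import redis

    CLAMP_MIN, CLAMP_MAX = 0.2, 0.8                # distance thresholds from step S703

    def filter_points(pts, user_pos):
        """Keep only points whose distance from the user lies in [CLAMP_MIN, CLAMP_MAX]."""
        d = np.linalg.norm(pts - user_pos, axis=1)
        return pts[(d >= CLAMP_MIN) & (d <= CLAMP_MAX)]

    r = redis.Redis(host="localhost", port=6379)   # server address is an assumption
    last_pose = None

    def publish_if_changed(pose, threshold=0.01, channel="part_pose"):
        """Publish the pose to Redis only when it has moved more than the threshold."""
        global last_pose
        if last_pose is None or np.linalg.norm(pose - last_pose) > threshold:
            r.publish(channel, json.dumps(pose.tolist()))   # the Hololens client subscribes to this channel
            last_pose = pose.copy()
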
10. An assembly guidance system based on Hololens depth data, comprising:
the acquisition module is used for converting the depth data of the real environment acquired by the Hololens equipment into a point cloud model under a world coordinate system;
the labeling module is used for labeling the point cloud model obtained by the acquisition module and converting the point cloud into a standard data set format according to the labeling information;
the three-dimensional module is used for constructing a three-dimensional model of the part to be assembled;
the conversion module is used for converting the three-dimensional model of the part constructed by the three-dimensional module into a point cloud model;
the synthesis module is used for carrying out space coordinate transformation on the part point cloud model converted by the conversion module, synthesizing the part point cloud model with a plurality of scene point cloud data, and converting the synthesized point cloud into a standard data set format;
the training module is used for inputting the standard data set converted by the labeling module and the standard data set synthesized by the synthesis module into an improved Votenet network for training, storing a training result and carrying out auxiliary calibration by combining an ICP method;
the detection module is used for enabling a user to wear the Hololens equipment, transmitting the depth data of the real scene to the remote server, preprocessing the data by the remote server, inputting the preprocessed data into the network trained by the training module, and outputting a detection result;
and the guidance module is used for acquiring a detection result output by the detection module by the Hololens equipment, and displaying corresponding auxiliary assembly information to a user in a holographic image form to realize assembly guidance.
CN202110892450.6A 2021-08-04 2021-08-04 Assembly guidance method and system based on Hololens depth data Active CN113706689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110892450.6A CN113706689B (en) 2021-08-04 2021-08-04 Assembly guidance method and system based on Hololens depth data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110892450.6A CN113706689B (en) 2021-08-04 2021-08-04 Assembly guidance method and system based on Hololens depth data

Publications (2)

Publication Number Publication Date
CN113706689A true CN113706689A (en) 2021-11-26
CN113706689B CN113706689B (en) 2022-12-09

Family

ID=78651524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110892450.6A Active CN113706689B (en) 2021-08-04 2021-08-04 Assembly guidance method and system based on Hololens depth data

Country Status (1)

Country Link
CN (1) CN113706689B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899301A (en) * 2020-06-02 2020-11-06 广州中国科学院先进技术研究所 Workpiece 6D pose estimation method based on deep learning
CN113129372A (en) * 2021-03-29 2021-07-16 西安理工大学 Three-dimensional scene semantic analysis method based on HoloLens space mapping
CN113128405A (en) * 2021-04-20 2021-07-16 北京航空航天大学 Plant identification and model construction method combining semantic segmentation and point cloud processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIN JIYAO et al.: "Multi-view 3D Reconstruction of Mechanical Parts Based on an RGB-D Camera", Computing Technology and Automation *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359529A (en) * 2022-01-10 2022-04-15 大唐融合通信股份有限公司 Model disassembling method, device and system
CN115049730A (en) * 2022-05-31 2022-09-13 北京有竹居网络技术有限公司 Part assembling method, part assembling device, electronic device and storage medium
CN115049730B (en) * 2022-05-31 2024-04-26 北京有竹居网络技术有限公司 Component mounting method, component mounting device, electronic apparatus, and storage medium
CN115880685A (en) * 2022-12-09 2023-03-31 之江实验室 Three-dimensional target detection method and system based on votenet model
CN115880685B (en) * 2022-12-09 2024-02-13 Three-dimensional target detection method and system based on votenet model
WO2024119776A1 (en) * 2022-12-09 2024-06-13 之江实验室 Three-dimensional target detection method and system based on votenet model
CN117150676A (en) * 2023-09-06 2023-12-01 华中科技大学 Manufacturing method, system, equipment and medium for point cloud data set of ship shafting flange

Also Published As

Publication number Publication date
CN113706689B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN113706689B (en) Assembly guidance method and system based on Hololens depth data
Sahu et al. Artificial intelligence (AI) in augmented reality (AR)-assisted manufacturing applications: a review
CN108648269B (en) Method and system for singulating three-dimensional building models
US10593104B2 (en) Systems and methods for generating time discrete 3D scenes
US20200050965A1 (en) System and method for capture and adaptive data generation for training for machine vision
CN103606188B (en) Geography information based on imaging point cloud acquisition method as required
WO2018075053A1 (en) Object pose based on matching 2.5d depth information to 3d information
EP3274964B1 (en) Automatic connection of images using visual features
US10073848B2 (en) Part identification using a photograph and engineering data
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
CN108537887A (en) Sketch based on 3D printing and model library 3-D view matching process
Yin et al. [Retracted] Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect
Knyaz Machine learning for scene 3d reconstruction using a single image
CN116843867A (en) Augmented reality virtual-real fusion method, electronic device and storage medium
CN109118576A (en) Large scene three-dimensional reconstruction system and method for reconstructing based on BDS location-based service
Auliaramadani et al. Augmented reality for 3D house design visualization from floorplan image
Wang An AR Map Virtual–Real Fusion Method Based on Element Recognition
Bai et al. Visualization pipeline of autonomous driving scenes based on FCCR-3D reconstruction
Hazarika et al. Multi-camera 3D object detection for autonomous driving using deep learning and self-attention mechanism
Hao et al. Development of 3D feature detection and on board mapping algorithm from video camera for navigation
Roters et al. Quasireal-time 3d reconstruction from low-altitude aerial images
Yang et al. Real-time point cloud registration for flexible hand-held 3D scanning
Harshit et al. Advancements in open-source photogrammetry with a point cloud standpoint
Wang et al. PVONet: point-voxel-based semi-supervision monocular three-dimensional object detection using LiDAR camera systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant