CN115309113A - Guiding method for part assembly and related equipment - Google Patents

Info

Publication number
CN115309113A
CN115309113A (application CN202210677574.7A)
Authority
CN
China
Prior art keywords
point cloud, assembly, current, cloud data, assembled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210677574.7A
Other languages
Chinese (zh)
Inventor
王若楠
宋忠浩
虞响
郭佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202210677574.7A
Publication of CN115309113A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00: Programme-control systems
    • G05B 19/02: Programme-control systems electric
    • G05B 19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B 19/41805: Total factory control characterised by assembly

Abstract

The application discloses a guiding method for part assembly and related equipment. The method comprises: acquiring preset installation operation information of the parts, first source point cloud data, and scene data in a current assembly scene, wherein the first source point cloud data is the point cloud data corresponding to a part; processing the scene data to obtain first target point cloud data; registering the first target point cloud data with the first source point cloud data corresponding to the current part to be assembled to obtain real-time pose information of the current part to be assembled; and, when it is determined from the real-time pose information and the preset installation operation information that the current part to be assembled meets the preset installation condition, displaying next assembly information to guide the assembly of the next part to be assembled, wherein the next assembly information comprises the name of the next part to be assembled and its assembly pose information. In this way, the probability of assembly errors can be reduced and the assembly efficiency improved.

Description

Guiding method for part assembly and related equipment
Technical Field
The application relates to the technical field of mechanical assembly, in particular to a guiding method for part assembly and related equipment.
Background
In the traditional process of assembling mechanical equipment, an assembler relies on reading paper documents or electronic documents to carry out the assembly operation, which has the following problems: the production assembly documents describe the assembly process mainly through assembly drawings supplemented by text, the visualization effect is not intuitive, assembly personnel need to spend a certain amount of time to extract the effective information, and the assembly efficiency is therefore low; in addition, supervision is lacking during assembly, and missing or wrong installation may occur on site because the assembly personnel do not fully understand the process, so that the product quality becomes unqualified.
Disclosure of Invention
The application provides a guiding method for part assembly and related equipment, which can reduce the probability of assembly errors and improve the assembly efficiency.
In order to solve the technical problem, the technical scheme adopted by the application is as follows: there is provided a method of guiding a parts assembly for guiding the parts assembly of a mechanical device, the mechanical device comprising at least two parts, the method comprising: acquiring preset installation operation information of a part, first source point cloud data and scene data in a current assembly scene, wherein the first source point cloud data is point cloud data corresponding to the part; processing scene data to obtain first target point cloud data; registering the first target point cloud data with first source point cloud data corresponding to a current part to be assembled to obtain real-time pose information of the current part to be assembled, wherein the current part to be assembled is a part currently installed in a target assembly scene; and displaying next assembly information to guide the assembly of the next part to be assembled under the condition that the current part to be assembled meets the preset assembly condition based on the real-time pose information and the preset assembly operation information of the current part to be assembled, wherein the next assembly information comprises the name of the next part to be assembled and the assembly pose information of the next part to be assembled.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an assembly guiding device comprising a memory and a processor connected to each other, wherein the memory is used for storing a computer program, and the computer program is used for implementing the guiding method of the part assembly in the above technical solution when being executed by the processor.
In order to solve the above technical problem, another technical solution adopted by the present application is: an assembly guiding system is provided, comprising assembly guiding equipment and acquisition equipment, wherein the assembly guiding equipment is connected with the acquisition equipment and is used for receiving the scene data acquired by the acquisition equipment and processing the scene data to obtain virtual assembly information, and the assembly guiding equipment is the assembly guiding equipment in the above technical solution.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer-readable storage medium for storing a computer program for implementing the method for guiding the assembly of parts in the above technical solution when the computer program is executed by a processor.
Through the above scheme, the beneficial effects of the present application are as follows: preset installation operation information related to the parts, first source point cloud data, and scene data in the current assembly scene are first acquired, wherein the first source point cloud data is the point cloud data corresponding to a part; the scene data is then processed to obtain first target point cloud data; the first target point cloud data is registered with the first source point cloud data corresponding to the current part to be assembled to obtain real-time pose information of the current part to be assembled, the current part to be assembled being the part currently being installed in the target assembly scene; then, whether the current part to be assembled meets the preset installation condition is judged from its real-time pose information and the preset installation operation information, and if so, next assembly information is displayed to guide the assembly of the next part to be assembled, the next assembly information comprising the name of the next part to be assembled and its assembly pose information. With this scheme, the matching display of virtual assembly information and real parts can be realized without adding markers to the parts, so that assembly personnel can conveniently install parts according to the prompts and the probability of installation errors is reduced; moreover, because point cloud data registration is adopted, compared with a contour-matching scheme the method can identify a part and solve its pose in working scenes with low illumination intensity, cluttered background and few texture features, realizes the combination of the virtual and the real, and can be applied to a real production assembly environment.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic structural diagram of an embodiment of an assembly guide system provided in the present application;
FIG. 2 is a schematic structural view of another embodiment of an assembly guide system provided herein;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of a method for guiding a part assembly provided herein;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of a method for guiding a part assembly provided herein;
FIG. 5 is a schematic structural view of yet another embodiment of an assembly guide system provided herein;
FIG. 6 is a schematic structural diagram of an embodiment of an assembly guide apparatus provided herein;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples and not all examples of the present application, and all other examples obtained by a person of ordinary skill in the art without any inventive work are within the scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
It should be noted that the terms "first", "second" and "third" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of indicated technical features. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The technical terms related to the present application are introduced first:
Assembly refers to the process of joining a plurality of parts and components into a product according to the technical requirements, and debugging, inspecting and testing the product so that it becomes a qualified product.
Augmented Reality (AR) maps virtual information into a real scene in real time by means of computer graphics and data interaction technology; the virtual information and the real information complement each other, which increases the amount of information an assembler perceives about the real scene and how well it is understood. The enhanced scene may be displayed by a display, a projector, or a head-mounted display.
Because AR technology can display real-world information and virtual-world information at the same time, computer-generated virtual assembly information (such as a three-dimensional model, an assembly animation or a text prompt) can be seamlessly superimposed on the assembly site. Aiming at the problems of low assembly efficiency and long assembly cycles in traditional schemes, and with the goal of assisting the assembly of mechanical products, AR technology is used here to fuse the real assembly site with the virtual assembly information, which lightens the cognitive burden on the assembler, guides the assembly operation, reduces errors, and improves assembly efficiency and quality; this is described in detail below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an embodiment of an assembly guiding system provided in the present application. The assembly guiding system 10 includes an assembly guiding device 11 and a collecting device 12; the assembly guiding device 11 is connected to the collecting device 12 and is configured to receive the scene data collected by the collecting device 12 and to process the scene data to obtain virtual assembly information, where the virtual assembly information includes assembly animation and text information. The specific structure and functions of the assembly guiding device 11 are described in detail below.
In one embodiment, as shown in FIG. 2, the assembly guidance system 10 further includes a table 13; the mechanical device 14 includes parts 14a-14e, which are placed on the table 13. The assembly guidance device 11 includes a host device 111 and a display device 112. The data input of the assembly guidance system 10 is a capture device 12 that provides depth information; the capture device 12 may be a camera containing a time-of-flight (TOF) depth sensor, a high-definition RGB camera, and a seven-microphone circular array.
In the assembly process, a camera is used for shooting a target assembly scene, depth information and color information are obtained through a Software Development Kit (SDK), and the processing process of data and the realization of a related algorithm are completed by the host device 111; when the system runs, an assembler aims the camera at a target assembly scene, the enhanced picture is displayed through the display device 112, and the assembler can clearly see parts to be assembled and assembly pose information (including positions and directions) of the parts in the next step in the display device 112; moreover, during operation, if the assembler installs incorrectly, the system will give a warning and prompt for the correct installation method.
It can be understood that since the AR glasses or the handheld mobile terminal cannot liberate the hands of the assembler, the display device 112 is selected to display the picture, and the positional relationship is prompted by the display device 112, so that the installation efficiency of the assembler can be improved.
The embodiment provides an assembly navigation scheme of mechanical equipment, which realizes matching display of virtual assembly information and real parts under the condition of not adding marks to the parts, facilitates assembly personnel to install each part according to the virtual assembly information, and prevents installation errors; in addition, in the assembling process of an assembling person, the operation sequence and the operation result can be recorded and interpreted, and the alarm prompt is carried out aiming at the position and the sequence error, so that the installation accuracy is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of an embodiment of a guiding method for component assembly provided in the present application, the method is applied to an assembly guiding apparatus, the assembly guiding apparatus is used for guiding component assembly of a mechanical apparatus, the mechanical apparatus includes at least two components, and the method includes:
s31: the method comprises the steps of obtaining preset installation operation information of parts, first source point cloud data and scene data under a current assembly scene.
Acquiring preset installation operation information which is used for marking the installation sequence and the installation position of each part of the mechanical equipment; for example, as shown in fig. 2, the mounting order may be such that the parts 14a to 14e are mounted in order.
In addition, the first source point cloud data corresponding to each part needs to be acquired. The first source point cloud data is the point cloud data corresponding to the part and is the complete point cloud of the part, namely its 360-degree point cloud, obtained by scanning the part. Specifically, the first source point cloud data is acquired as follows: the part is mounted on a scanning turntable; the software parameters and the capture quantity (namely the number of captured images) of the processing software are set; the part is scanned with a three-dimensional laser scanner while being rotated through 360 degrees so as to acquire image information of the part, and the acquired image information is transmitted to the processing software; the processing software then converts the acquired two-dimensional image data (namely the image information) into three-dimensional point cloud data (namely the first source point cloud data). Connecting the point cloud data in space into a complete curved surface is the process of forming the three-dimensional model of the part.
S32: and processing the scene data to obtain first target point cloud data.
The scene data is data obtained by shooting a current assembly scene by the acquisition device, includes depth information and color information, and can be processed by a preprocessing method in the related art, such as: filtering or enhancing; and then filtering the preprocessed scene data to obtain first target point cloud data, wherein the first target point cloud data comprises real-time point cloud data of parts in a target assembly scene.
In a specific embodiment, in order to obtain the point cloud data, an Application Programming Interface (API) may be used to obtain the intrinsic parameter matrix of the camera, which includes: 1) the focal length f, representing the distance from the optical centre of the camera to the imaging plane; 2) the pixel pitches ax and ay, representing the physical size occupied by one pixel in the horizontal direction and in the vertical direction of the image shot by the camera, respectively; 3) the coordinates (u0, v0) of the image coordinate system origin in the pixel coordinate system, representing the horizontal and vertical pixel offsets between the centre pixel of the image and the pixel origin of the image.
The depth information can be converted into point cloud data in real time from the intrinsic matrix of the depth camera using the following formula, i.e. the point cloud of the currently photographed objects (including the parts the assembler is currently handling) is reconstructed in real time:
$X_c = \frac{(u - u_0)\,a_x\,Z_c}{f},\qquad Y_c = \frac{(v - v_0)\,a_y\,Z_c}{f},\qquad Z_c = d(u, v)$
where a_x, a_y, u_0 and v_0 are intrinsic parameters of the depth camera and d(u, v) is the depth value of the pixel; the coordinates P'(X_c, Y_c, Z_c) of a point in three-dimensional space can thus be calculated from the coordinates P(u, v) of the corresponding pixel in the depth image.
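For reference, the back-projection above can be sketched in Python with NumPy as follows; the array shapes, the helper name and the intrinsic values in the usage comment are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def depth_to_point_cloud(depth, f, ax, ay, u0, v0):
    """Back-project a depth image into camera-frame 3D points.

    f      : focal length (same physical unit as ax / ay)
    ax, ay : physical size of one pixel in the x / y direction
    u0, v0 : pixel coordinates of the principal point
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - u0) * ax * z / f          # Xc = (u - u0) * ax * Zc / f
    y = (v - v0) * ay * z / f          # Yc = (v - v0) * ay * Zc / f
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth measurement

# usage (assumed values): pts = depth_to_point_cloud(depth_img, f=3.8e-3,
#                                                    ax=1e-5, ay=1e-5, u0=320, v0=240)
```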
S33: and registering the first target point cloud data with first source point cloud data corresponding to the current part to be assembled to obtain real-time pose information of the current part to be assembled.
The current to-be-assembled part is one of the parts in the target assembly scene, which is the part currently being installed in the target assembly scene, and the name of the part currently to be installed (i.e. the current to-be-assembled part) and other related information can be generated by using the preset installation operation information, such as: assembling pose information; and registering the first target point cloud data and first source point cloud data (recorded as current source point cloud data to be matched) corresponding to the current part to be assembled by adopting a registration method, so as to obtain the real-time pose information of the current part to be assembled and the name of the current part to be assembled.
For example, assume that the mechanical device includes three parts A1 to A3 and that the installation of part A1 has been completed, so that the current part to be assembled is part A2; the source point cloud data to be matched at this moment is then the first source point cloud data corresponding to part A2, while the first target point cloud data contains the real-time point cloud data of parts A1 to A3. By matching the current source point cloud data with the first target point cloud data, it can be determined which points in the first target point cloud data belong to the real-time point cloud of part A2; once the real-time point cloud of part A2 is determined, its real-time pose information, i.e. the position and direction of part A2, is calculated.
S34: and displaying next assembling information to guide the assembling of the next part to be assembled under the condition that the current part to be assembled meets the preset assembling condition based on the real-time pose information and the preset assembling operation information of the current part to be assembled.
Judging whether the current part to be assembled meets preset installation conditions or not based on the real-time pose information and the preset installation operation information of the current part to be assembled; if the current part to be assembled meets the preset installation condition, the current part to be assembled is indicated to be installed correctly, namely the installation position and the installation sequence of the current part to be assembled are correct, at the moment, the related information (namely the next assembly information) of the next part to be assembled can be obtained from the preset installation operation information, the next assembly information is displayed or broadcasted, so that the assembly of the next part to be assembled is guided, and the next assembly information comprises the name of the next part to be assembled and the assembly pose information of the next part to be assembled. For example, as shown in fig. 2, when the part to be assembled is the part 14b at the present time and it is judged that the mounting of the part 14b is satisfactory, information of the part 14c is displayed on the display device.
In one embodiment, a human-computer interaction scheme can be adopted, so that during guided assembly an assembler can quickly interact with the assembly guiding system through gestures, voice or other means, realizing auxiliary functions such as process guidance and assembly information query; the assembler can select a suitable interaction mode according to the actual target assembly scene, which further improves the universality of the system. Specifically, the purpose of human-computer interaction is to make the system understand the instruction issued by the assembler and execute the corresponding action according to the assembler's intention. Depending on the interaction object, human-computer interaction technology is mainly divided into two types: interaction between the assembler and virtual objects, and control of the workflow by the assembler. Interaction with virtual objects involves creating, moving, rotating or zooming a virtual object; control of the workflow mainly involves commands such as starting or pausing an assembly task, selecting information, and moving to the previous or next step, the aim being to let the assembler quickly obtain the required virtual assembly information.
Furthermore, since gesture interaction does not completely free the hands of the assembler and disturbs the continuity of assembly to a certain degree, while voice interaction is not suitable for a noisy assembly environment, the interaction means adopted in this embodiment is a combination of gesture interaction and voice interaction in which the two complement each other; gesture interaction and voice interaction are described in detail below.
(1) Voice interaction
In this embodiment, based on a speech development tool (e.g., microsoft speech SDK) in the related art and combining a Microsoft Foundation Class Library (MFC) Dialog (Dialog) box Class, a speech interaction system for mechanical assembly guidance is developed to improve human-computer interaction efficiency in an actual assembly guidance process and improve assembly guidance learning experience of an assembler. Specifically, the voice interaction function mainly comprises a dictionary management module, a voice recognition module and a voice interaction service module, after an assembler sends a voice command, the voice command is received by the audio equipment, the voice recognition module is called by the voice interaction service module, the received voice information is recognized, and then the corresponding function is executed according to the recognition result.
For example, the voice commands included in this embodiment mainly comprise: a) two voice commands that broadcast the assembly information related to the current assembly step, providing help for the assembly personnel; b) "previous step" and "next step", two voice commands mainly used to assist the assembler during assembly guidance; c) two voice commands that support real-time video recording and storage of the mechanical assembly process, providing material for later improvement of the assembly procedure.
(2) Gesture interaction
Gesture recognition mainly realizes different interaction effects through different hand actions of the assembler; when the assembler is in a noisy working environment, gesture interaction can replace voice interaction. Specifically, a sensor in the camera provides skeleton joint information and can accurately detect the palm joints, so the human skeleton can be tracked by the sensor, the joint coordinates extracted, and a vector calculation performed to obtain the direction in which the person is pointing, realizing pointing-gesture recognition for human-computer interaction and providing the assembler with the virtual assembly information of the previous or next step.
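A minimal sketch of this pointing-direction calculation, assuming the sensor SDK already supplies 3D joint positions; the joint choice (shoulder to hand), the threshold and the left/right mapping are assumptions:

```python
import numpy as np

def pointing_direction(shoulder, hand):
    """Unit vector from the shoulder joint to the hand joint (camera frame)."""
    v = np.asarray(hand, dtype=float) - np.asarray(shoulder, dtype=float)
    return v / np.linalg.norm(v)

def pointed_command(shoulder, hand, threshold=0.7):
    """Map a pointing gesture to 'previous step' / 'next step'.

    Assumption: pointing clearly towards the camera's left half selects the
    previous step and towards the right half selects the next step.
    """
    d = pointing_direction(shoulder, hand)
    if d[0] <= -threshold:
        return "previous step"
    if d[0] >= threshold:
        return "next step"
    return None
```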
It will be appreciated that if the assembler guides the installation of the parts through voice interaction or gesture interaction, the assembly guide device needs to respond in time in order to increase the speed of assembly.
The embodiment provides a navigation method for mechanical production assembly, which realizes matching display of virtual assembly information and real parts, facilitates installation by an assembler according to prompts, and reduces the probability of installation errors; due to the fact that the point cloud data are registered, compared with a scheme of contour matching, the method can identify parts and solve the position and posture of the parts in a working scene with low illumination intensity, disordered background and few texture features, achieves virtual-real combination, and can be applied to a real production assembly environment.
Referring to fig. 4, fig. 4 is a schematic flowchart of another embodiment of a guiding method for component assembly provided in the present application, the method is applied to an assembly guiding apparatus, the assembly guiding apparatus is used for guiding assembly of a mechanical apparatus, the mechanical apparatus includes at least two components, and the method includes:
s41: the method comprises the steps of obtaining preset installation operation information of parts, first source point cloud data and scene data under a current assembly scene.
S41 is the same as S31 in the above embodiment, and is not described again here.
S42: and filtering the scene data by adopting a straight-through filtering method to obtain first target point cloud data.
For the acquired scene data, invalid background information and noise may exist, and the noise has adverse effects on the application effects of subsequent target identification, positioning and tracking, so that the scene data is filtered by adopting a direct filtering method to generate first target point cloud data, wherein the first target point cloud data comprises point cloud data of parts in a target assembly scene. Specifically, a dimension and a value range under the dimension can be specified, the scene data is traversed sequentially, whether the value of the scene data in the specified dimension is in the corresponding value range is judged, so that points with values not in the value range are deleted, and the points left after traversing form filtered point cloud data (namely, first target point cloud data).
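The straight-through (pass-through) filtering described above can be sketched as follows; the filtered dimension and the value range are illustrative assumptions:

```python
import numpy as np

def pass_through_filter(points, axis=2, vmin=0.3, vmax=1.5):
    """Keep only points whose coordinate on `axis` lies inside [vmin, vmax].

    points : (N, 3) array of scene point cloud coordinates (metres)
    axis   : 0 = x, 1 = y, 2 = z (depth); filtering on z removes far background
    """
    mask = (points[:, axis] >= vmin) & (points[:, axis] <= vmax)
    return points[mask]
```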
Furthermore, the core of similarity matching between the first target point cloud data and the first source point cloud data acquired by the three-dimensional laser scanner is to solve an optimal transformation relation between the first target point cloud data and the first source point cloud data. In the target identification process, the sizes of a three-dimensional model of a part acquired by a three-dimensional laser scanner and the part in a target assembly scene are completely consistent, the part appearing in the environment is identified by utilizing the similarity matching of the first source point cloud data and the reconstructed first target point cloud data, and meanwhile, the posture information of the part in the real environment is acquired.
Since the first source point cloud data and the first target point cloud data have different device sources, the density of the point clouds (i.e., the density degree of the point clouds) is different, and it is very difficult to obtain an accurate transformation matrix when two point cloud data with different densities are registered, which causes a certain interference to the robustness of target identification.
S43: and respectively filtering the first target point cloud data and the first source point cloud data by adopting a self-adaptive filtering method to obtain second target point cloud data and second source point cloud data.
The difference between the density of the second target point cloud data and the density of the second source point cloud data is less than a preset density difference; for example, the density of the second target point cloud data is approximately the same as that of the second source point cloud data. The following description takes a voxel filtering method as an example of the adaptive filtering method, and includes the following steps:
1) And establishing a bounding box based on the point cloud data to be processed.
The point cloud data to be processed is the first target point cloud data or the first source point cloud data. The bounding box encloses the point cloud data to be processed and its boundary is larger than that of the point cloud; the volume of the bounding box is V, and its length, width and height are l = x_max - x_min, w = y_max - y_min and h = z_max - z_min respectively, where x_max and x_min are the maximum and minimum values of the point cloud data to be processed in the x direction, y_max and y_min are the maximum and minimum values in the y direction, and z_max and z_min are the maximum and minimum values in the z direction.
Further, the bounding box comprises a plurality of voxels, the voxels are the minimum units of digital data in a three-dimensional space, the voxel filtering method is to reasonably divide the voxels of the input point cloud data (namely, point cloud data to be processed), and then respectively sample the point cloud data in each voxel, so as to reduce the point cloud data and simultaneously maintain the shape characteristics of the point cloud.
In one embodiment, a three-dimensional voxel grid is created through point cloud data to be processed, wherein the voxel grid is equivalent to a set of tiny three-dimensional cubes in space; and then, in each voxel (namely a three-dimensional cube), approximately displaying other points in the voxel by using the barycenter of all the points in the voxel to realize that all the points in the voxel are represented by one barycenter point, and performing voxel filtering processing on all the point cloud data to obtain filtered point cloud data.
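A compact sketch of this centroid-per-voxel downsampling in plain NumPy; the default voxel size is an assumed value:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.01):
    """Replace all points falling into one voxel by their centroid."""
    origin = points.min(axis=0)                          # corner of the bounding box
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    # one integer triple per occupied voxel; `inverse` maps each point to its voxel
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    centroids = np.zeros((len(keys), 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids
```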
Furthermore, the voxel size has a significant influence on the sampling rate of the point cloud data, and if the voxel size is too large, the characteristics of the sampled point cloud data are not obvious; if the voxel size is too small, the point cloud data contained in the voxel is reduced, and the sampling effect is not good, so that the size setting of the voxel needs to be calculated according to a preset sampling rate and a preset sampling density, which is described in detail below.
2) And dividing the bounding box based on a preset sampling rate and a preset point cloud density to obtain a plurality of voxels.
The preset sampling rate and the preset point cloud density are both set by an assembler, and the specific operation of dividing the bounding box by utilizing the preset sampling rate and the preset point cloud density is as follows:
a) The current size of the voxel is set to a preset initial size.
The preset initial size is the size of a voxel which is set in advance according to experience or application requirements, the length and the width of the voxel are set to be the same, and the current size is the length or the width of the voxel.
B) And updating the current size based on the preset sampling rate and the preset point cloud density to obtain the voxel size.
The operation of updating the current size includes the steps of:
b1 Update the current size based on the current size and a preset sampling rate.
The ratio of a preset value to the preset sampling rate is calculated to obtain a first value; the square root of the first value is taken to obtain a second value; and the product of the second value and the current size is determined as the new current size.
For example, if the preset value is 1, it is assumed that the current size is updated by using the following formula:
$L' = \sqrt{\tfrac{1}{p}}\cdot L \quad (1)$
the relationship between the preset sampling rate and the number of point cloud data in each voxel is as follows:
$p = \tfrac{1}{m} \quad (2)$
the point cloud density is calculated as follows:
$s = \tfrac{m}{L^{3}} \quad (3)$
from the above equations (1) to (3), the following equations can be obtained:
$L' = \sqrt[3]{\tfrac{1}{p\,s}} \quad (4)$
the calculation formula of the number of voxels is as follows:
$n = \tfrac{V}{L'^{3}} \quad (5)$
combining equation (5) with the relations above yields the updated voxel side length L', which is used to divide the bounding box in the next filtering pass,
wherein s is the point cloud density, m is the number of point cloud data in each voxel, L is the current size, L' is the updated current size, and p is the preset sampling rate.
B2 Filtering a plurality of voxels corresponding to the current size to obtain filtered data; acquiring the number of point cloud data in the filtering data to obtain the number of current point clouds; and calculating the density of the current point cloud based on the number of the current point cloud and the volume of the bounding box.
The ratio of the current point cloud number to the volume of the bounding box is calculated to obtain the current point cloud density; specifically, voxel filtering is performed with the updated voxel side length L', the number of points remaining after filtering (namely, the current point cloud number) N' is counted, and the current point cloud density s' is then:

$s' = \tfrac{N'}{V}$
b3 Based on the current point cloud density and the preset point cloud density, updating the current size, and returning to the step of updating the current size based on the current size and the preset sampling rate until the difference value between the current point cloud density and the preset point cloud density is within the preset difference value range.
Whether the current point cloud density is smaller than the preset point cloud density is judged; if the current point cloud density is smaller than the preset point cloud density, the difference between the current size and the preset step length is taken as the new current size; if the current point cloud density is greater than or equal to the preset point cloud density, the sum of the current size and the preset step length is taken as the new current size, the preset step length being the amount of change applied at each iteration.
For example, let the preset point cloud density be denoted u and compare s' with u: if s' < u, then L' = L' - β; if s' > u, then L' = L' + β. After several iterations, a suitable voxel size is finally obtained for dividing the bounding box, where β is the preset step length whose specific value can be set according to experience or application requirements.
c) Based on the voxel size and the volume of the bounding box, the number of voxels is calculated.
The ratio of the volume of the bounding box to the cube of the voxel size is calculated, resulting in the number of voxels.
3) The point cloud data within each voxel is filtered.
The manner of filtering the point cloud data in each voxel is the same as that in the related art, and is not described herein again.
In this embodiment, the first source point cloud data and the first target point cloud data are filtered by the above filtering scheme, so that substantially the same point cloud density can be obtained.
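Under the assumptions made in the reconstructed equations above, the adaptive search for the voxel side length can be sketched as follows; the initial size, target density, tolerance and step length are illustrative values, and the sketch reuses the voxel_downsample helper from the earlier sketch:

```python
import numpy as np

def adaptive_voxel_size(points, p=0.1, u=5e4, beta=0.001,
                        tol=1e3, max_iter=50):
    """Iteratively adjust the voxel side length until the filtered point cloud
    density is close to the preset density u (points per cubic metre).

    p    : preset sampling rate, used only for the initial size guess
    beta : preset step length applied at each adjustment
    """
    extent = points.max(axis=0) - points.min(axis=0)
    volume = np.prod(extent)                       # bounding-box volume V
    size = np.sqrt(1.0 / p) * 0.005                # eq. (1) with an assumed 5 mm start
    filtered = points
    for _ in range(max_iter):
        filtered = voxel_downsample(points, size)  # from the previous sketch
        density = len(filtered) / volume           # s' = N' / V
        if abs(density - u) <= tol:
            break
        # density too low -> shrink voxels (keep more points); too high -> grow them
        size = size - beta if density < u else size + beta
        size = max(size, 1e-4)                     # keep the voxel size positive
    return size, filtered
```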
S44: and registering the second target point cloud data with second source point cloud data corresponding to the current part to be assembled to obtain real-time pose information of the current part to be assembled.
The preset installation operation information comprises the installation sequence of the parts and the assembly pose information of the parts; and registering the filtered source point cloud data (namely, second source point cloud data) and the filtered target point cloud data (namely, second target point cloud data) to identify parts in the environment, calculating a rotation matrix and a translation matrix between the second source point cloud data and the second target point cloud data corresponding to the current part to be assembled, and determining the position of the virtual assembly information to be superposed in the real world.
Further, since the second target point cloud data inevitably contains a plurality of objects, it needs to be segmented before registration. In this embodiment a segmentation algorithm based on Euclidean clustering is adopted, with the distance between neighbouring points used as the criterion for deciding whether points should be clustered into one class; the second source point cloud data is then registered against the segmented point cloud data.
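Open3D does not expose a call named Euclidean clustering, so the sketch below uses its DBSCAN clustering as a distance-based stand-in for this segmentation step; eps and min_points are assumed values:

```python
import numpy as np
import open3d as o3d

def segment_clusters(points, eps=0.02, min_points=30):
    """Split the scene cloud into clusters of nearby points; noise (label -1) is dropped."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    labels = np.array(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    return [points[labels == k] for k in range(labels.max() + 1)]
```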
Traditional point cloud registration algorithms place high demands on the initial positions of the point clouds: if the initial spatial positions of the two point clouds differ greatly, the optimization falls into a local optimum and the registration fails. For this reason, this embodiment matches the local point cloud data with a method that combines initial registration and fine registration. Specifically, a sample-consensus initial registration algorithm is used to coarsely register the point cloud data under different coordinate systems and adjust their spatial positions, providing a better starting position for fine registration; after the coarse registration, fine registration is performed with the Iterative Closest Point (ICP) algorithm, which solves the pose transformation between two adjacent frames.
In one embodiment, a real-time pose can be generated using point cloud matching based on depth information alone, but the pose can be made more accurate by additionally matching RGB information. Since the depth camera provides both colour information and depth information, this embodiment obtains the real-time pose information of the current part to be assembled by combining RGB information and depth information, i.e. the pose generated from depth matching is optimized with the RGB information, and the relative positional relationship between the moving part and the fixed camera in the target assembly scene is obtained in real time; this improves the robustness of the three-dimensional tracking and registration process. Three-dimensional feature points containing colour information are acquired in real time through the API provided by the camera; a feature extraction algorithm, for example Fast Point Feature Histograms (FPFH), detects key points of the colour information in the three-dimensional scene to obtain the corresponding key point descriptor vectors, and the key points are matched and optimized through a nearest-point algorithm and the principle of maximum vector inner product. On this basis, mismatched key point pairs are removed with the Random Sample Consensus (RANSAC) algorithm to obtain a key point set with a high registration rate; finally, the key point pairs are registered with the ICP (Iterative Closest Point) registration method to obtain the transformation matrix. Further, in order to improve matching efficiency and reduce computational complexity, a k-dimensional tree (k-d tree) nearest-neighbour search algorithm is used to accelerate the search for neighbouring points, and the rotation matrix and translation vector are finally obtained.
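A hedged sketch of the coarse-to-fine pipeline (FPFH features, sample-consensus coarse registration, ICP refinement) using the Open3D (version 0.12 or later) registration API; the voxel size and distance thresholds are assumptions, and this is an illustrative pipeline rather than the exact implementation of the embodiment:

```python
import open3d as o3d

def register(source, target, voxel=0.005):
    """Coarse (FPFH + RANSAC) then fine (point-to-point ICP) registration.
    Returns the 4x4 transform mapping `source` onto `target`."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)

    # coarse registration: sample-consensus matching of FPFH features
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, 3 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # fine registration: ICP starting from the coarse result
    fine = o3d.pipelines.registration.registration_icp(
        source, target, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```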
Since the size and the dimension of each part in the target assembly scene are fixed, in this embodiment, each clustered point cloud data is matched with the dimension of the three-dimensional model of the part acquired by the three-dimensional laser scanner, the confidence coefficients of the length, the width, the height and the volume of the model are set as standards, and the clustered point cloud data with the highest confidence coefficient is used as the real-time point cloud data corresponding to the current part to be assembled.
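One possible way to score each cluster against the scanned model's dimensions is sketched below; the description only states that confidences of length, width, height and volume are used, so the specific confidence formula here is an assumption:

```python
import numpy as np

def dimension_confidence(cluster, model_dims):
    """Compare a cluster's axis-aligned extents (l, w, h) and volume with the
    scanned model's dimensions; 1.0 means a perfect match."""
    ext = np.sort(cluster.max(axis=0) - cluster.min(axis=0))   # sorted l, w, h
    ref = np.sort(np.asarray(model_dims, dtype=float))
    ratios = np.minimum(ext, ref) / np.maximum(ext, ref)       # per-dimension confidence
    vol_conf = min(ext.prod(), ref.prod()) / max(ext.prod(), ref.prod())
    return float(0.5 * ratios.mean() + 0.5 * vol_conf)

def pick_current_part(clusters, model_dims):
    """Return the cluster with the highest confidence as the real-time point
    cloud of the current part to be assembled."""
    scores = [dimension_confidence(c, model_dims) for c in clusters]
    return clusters[int(np.argmax(scores))]
```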
S45: and determining whether the current part to be assembled meets the preset installation condition or not based on the real-time pose information and the preset installation operation information of the current part to be assembled.
Detecting the current part to be assembled to obtain real-time pose information of the current part to be assembled and an installation sequence of the current part to be assembled; judging whether the assembly pose information of the current part to be assembled is the same as the assembly pose information (recorded as current reference assembly pose information) corresponding to the current part to be assembled in the preset assembly operation information and whether the installation sequence of the current part to be assembled is the same as the installation sequence (recorded as current reference installation sequence) corresponding to the current part to be assembled in the preset assembly operation information; and if the assembly pose information of the current part to be assembled is the same as the current reference assembly pose information and the installation sequence of the current part to be assembled is the same as the current reference installation sequence, determining that the preset installation condition is met, wherein the current reference assembly pose information comprises a reference installation position and a reference installation direction.
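The preset installation condition test can be sketched as follows; the pose representation (a position vector plus a unit direction vector), the tolerances and the argument names are assumptions:

```python
import numpy as np

def meets_preset_condition(real_pose, ref_pose, real_order, ref_order,
                           pos_tol=0.005, ang_tol_deg=5.0):
    """real_pose / ref_pose: (position xyz, unit direction vector) tuples.
    The part passes when its installation order matches the reference order
    and its pose lies within the position / direction tolerances."""
    if real_order != ref_order:
        return False
    p, d = real_pose
    p_ref, d_ref = ref_pose
    pos_ok = np.linalg.norm(np.asarray(p) - np.asarray(p_ref)) <= pos_tol
    cos_angle = np.clip(np.dot(d, d_ref), -1.0, 1.0)
    ang_ok = np.degrees(np.arccos(cos_angle)) <= ang_tol_deg
    return bool(pos_ok and ang_ok)
```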
In one embodiment, a part is tracked to obtain a target tracking and identifying result, and the target tracking and identifying result is used for distinguishing the part to be assembled (namely the current part to be assembled) and other parts in the current state; the current part to be assembled is tracked, so that the spatial position relation between the current part to be assembled and the depth camera is calculated in real time, and when the current part to be assembled moves from the initial position to the reference installation position corresponding to the initial position, a corresponding voice prompt for correct installation is given. Specifically, when the current part to be assembled moves, in order to ensure a good virtual-real combination effect, a transformation matrix between the current part to be assembled and a camera is continuously calculated, namely, the six-degree-of-freedom pose of the current part to be assembled is accurately tracked in real time; and the computer acquires real-time pose information and accurately superimposes the virtual guide information on the current part to be assembled in the real scene.
The embodiment provides an environment perception method based on depth camera visual information, which adopts a point cloud registration method fusing depth information and color information to perform tracking registration and is applied to an assembly guide system; the method can identify the current part to be assembled and solve the real-time pose information of the current part to be assembled in a working scene with low illumination intensity, disordered background and few texture features, realizes the virtual-real combination, and can be applied to a real production assembly environment.
S46: and if the current part to be assembled meets the preset installation condition, displaying next assembly information so as to guide the assembly of the next part to be assembled.
S46 is the same as S34 in the above embodiment, and is not described again here.
S47: and if the current part to be assembled does not meet the preset installation condition, generating alarm information.
Under the condition that the current part to be assembled is judged not to meet the preset installation condition, the fact that the installation of the current part to be assembled is not in accordance with the requirement is indicated, possibly because the installation sequence of the current part to be assembled is incorrect or because the installation position of the current part to be assembled is incorrect, at the moment, alarm information can be generated to remind an assembler to adjust the current part to be assembled so as to meet the installation requirement.
In one embodiment, as shown in fig. 5, an operation error-proofing module may be provided; the operation error-proofing module includes a position error determination module and a sequence error determination module. The position error determination module is used for judging, according to the real pose information (including the installation position and direction) sensed in real time, whether the part in a certain installation step is correctly assembled; the sequence error determination module is used for judging, according to the shape of a part (for example its three-dimensional model), whether the assembler has selected an incorrect assembly object while installing the mechanical equipment; if the requirements are judged not to be met, a prompt and an alarm are given.
S48: and rendering the real-time point cloud data of the current part to be assembled to obtain a rendering result and displaying the rendering result.
As shown in fig. 5, an information visualization module may be provided, and the information visualization module is used for implementing visualization of the virtual assembly information; specifically, the information visualization module comprises a rendering module, the rendering module can be used for rendering real-time point cloud data of a current part to be assembled, the development environment of the rendering module can be Unity3D, the Unity3D is widely applied to development work of industrial scene visualization, development of external extension plug-ins is supported, and multi-platform compatibility of virtual scenes can be achieved through environment attribute configuration of different platforms (such as android). The AR assembly scene module (including an executable program of the AR assembly scene) is placed at a client, is issued on a device (such as a display, a head-mounted display or a flat panel supporting AR development) and is mainly used for assembly guidance, namely, a virtual scene and a physical scene are overlapped, and the assembly object is observed and simultaneously the virtual projection of the assembly animation can be observed, so that more assembly information is obtained.
Further, the three-dimensional model obtained by scanning is converted into a format corresponding to the Unity, the three-dimensional model is placed at a reference installation position, a virtual scene is constructed through the Unity, a part model with a real proportion is rendered, and corresponding materials are added, so that the part model has a better illumination effect, and the process of combining virtual and real is more real; and designing a User Interface (UI) according to the real information of the part in the world coordinate and the local coordinate, so as to realize real-time expression of the assembly information.
In one embodiment, as shown in fig. 5, a three-dimensional model, a virtual human demonstration animation and a text description are designed according to an assembly process, so that real-time guidance of an assembly process is realized, the assembly efficiency is improved, and the assembly failure rate and the assembly accidents are reduced; the three-dimensional model is used for expressing the three-dimensional structure of the part, and can provide richer visual perception effect; the text prompt is used for expressing various auxiliary assembly information, which comprises names of parts, notice and detailed description of assembly steps; the demonstration animation is used for expressing the assembly relation of the assembly parts, is used for demonstrating the assembly process and improves the visualization effect.
In another specific embodiment, the assembly pose information comprises a position and a direction, and an indication mark matched with the part to be assembled is added in the rendering result based on the position and the direction of the part to be assembled, namely the correct installation position of the part to be assembled is indicated in the assembly process.
Further, after a part needing to be assembled in a certain step in the assembly process is identified, a 3D indication arrow is generated by using the position and the direction of the part and points to a correct installation position; in addition, information that needs attention in the current step can be displayed.
In another specific embodiment, viewpoint information of a target object may be obtained, where the target object may be a person, and the viewpoint information is a human eye viewing viewpoint; and adjusting the orientation of the rendering result based on the viewpoint information so that the adjusted rendering result matches the viewpoint information.
Further, a viewpoint tracking module may be provided, as shown in fig. 5, the viewpoint tracking module is configured to track the head position of the assembler and convert the head position into position information in a projection screen coordinate system, adjust the projection matrix according to the position information, draw an image of a corresponding viewpoint, and change orientation information of the virtual camera in the rendered scene according to the viewpoint position of the assembler.
In the embodiment, a viewpoint tracking module is used for replacing positions of two eyes with head skeleton points, the head skeleton points of an assembler are captured by using an API (application program interface) function of a camera, a projection matrix is adjusted according to the positions of the head skeleton points in a projection screen coordinate system, an image of a corresponding viewpoint is rendered, the displayed image is adaptively adjusted according to the viewpoints of the assembler, and images at different viewing angles can be displayed.
In another embodiment, as shown in fig. 2, a virtual character 15 exists in the scene; it plays the role of an expert, provides technical assembly guidance and promotes effective assembly training, which reduces the error rate and improves assembly efficiency compared with the traditional method of guidance through drawings.
In other specific embodiments, the assembler may also perceive the assembly guidance process by wearing AR glasses that include a depth camera, so that the target assembly scene does not need to place a depth camera and a display, and the assembly process can be displayed directly by relying on the glasses.
The method adopts a point cloud registration method of fusing depth information and color information for tracking and registering, can be applied to an assembly guide system, and can show better robustness in a mechanical assembly environment with lower illumination intensity and lacking texture on the surface; in addition, in the assembling process of an assembling person, the spatial position relation between the part to be installed and the depth camera can be calculated in real time, when the part to be installed finally moves to the reference installation position from the initial position, correct installation is prompted, in addition, alarm prompting can be carried out aiming at position errors and sequence errors, the assembling person can conveniently adjust the part in time, and the installation accuracy is improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the assembly guiding device provided in the present application. The assembly guiding device 60 includes a memory 61 and a processor 62 connected to each other; the memory 61 is used for storing a computer program, and the computer program, when executed by the processor 62, is used for implementing the guiding method for part assembly in the above embodiments.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium 70 provided in the present application. The computer-readable storage medium 70 is used for storing a computer program 71, and the computer program 71, when executed by a processor, is used for implementing the guiding method for part assembly in the foregoing embodiments.
The computer-readable storage medium 70 may be a server, a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs the individual of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's separate consent before processing the sensitive personal information and, at the same time, satisfies the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is set up to inform the individual that he or she is entering the personal information collection range and that personal information will be collected; if the individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, on the premise that the personal information processing rules are made known by means of a prominent sign or message, personal authorization is obtained through a pop-up message, by asking the individual to upload his or her personal information, or in a similar manner. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The above embodiments are merely examples, and not intended to limit the scope of the present application, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present application, or those directly or indirectly applied to other related arts, are included in the scope of the present application.

Claims (15)

1. A guiding method for part assembly, used for guiding the assembly of parts of a mechanical device comprising at least two parts, the method comprising:
acquiring preset installation operation information of the part, first source point cloud data and scene data under a current assembly scene, wherein the first source point cloud data is point cloud data corresponding to the part;
processing the scene data to obtain first target point cloud data;
registering the first target point cloud data with first source point cloud data corresponding to a current part to be assembled to obtain real-time pose information of the current part to be assembled, wherein the current part to be assembled is the part currently being installed in the current assembly scene;
and displaying next assembly information to guide the assembly of a next part to be assembled when it is determined, based on the real-time pose information of the current part to be assembled and the preset installation operation information, that the current part to be assembled meets a preset installation condition, wherein the next assembly information comprises the name of the next part to be assembled and the assembly pose information of the next part to be assembled.
2. The method for guiding a parts assembly according to claim 1, further comprising:
and generating alarm information when it is determined, based on the real-time pose information of the current part to be assembled and the preset installation operation information, that the current part to be assembled does not meet the preset installation condition.
3. The parts assembly guiding method according to claim 1, wherein the preset installation operation information includes the installation sequence of the parts and the assembly pose information of the parts, and the step of displaying next assembly information is preceded by:
and determining that the preset installation condition is met when the real-time pose information of the current part to be assembled is the same as the assembly pose information corresponding to the current part to be assembled in the preset installation operation information and the installation sequence of the current part to be assembled is the same as the installation sequence corresponding to the current part to be assembled in the preset installation operation information.
4. The method for guiding the assembly of parts according to claim 1, wherein the step of processing the scene data to obtain first target point cloud data comprises:
filtering the scene data by adopting a straight-through filtering method to obtain the first target point cloud data;
before the step of registering the first target point cloud data with the first source point cloud data corresponding to the part to be assembled, the method comprises the following steps:
and respectively filtering the first target point cloud data and the first source point cloud data by adopting a self-adaptive filtering method to obtain second target point cloud data and second source point cloud data, wherein the difference value between the density of the second target point cloud data and the density of the second source point cloud data is smaller than a preset density difference value.
5. The method for guiding a parts assembly according to claim 4, wherein the adaptive filtering method comprises:
establishing a bounding box based on point cloud data to be processed, wherein the point cloud data to be processed comprises the first target point cloud data or the first source point cloud data;
dividing the bounding box based on a preset sampling rate and a preset point cloud density to obtain a plurality of voxels;
and filtering the point cloud data in each voxel.
6. The method for guiding the assembly of parts according to claim 5, wherein the step of dividing the bounding box based on a preset sampling rate and a preset point cloud density to obtain a plurality of voxels comprises:
setting the current size of the voxel to a preset initial size;
updating the current size based on the preset sampling rate and the preset point cloud density to obtain a voxel size;
and calculating the number of voxels based on the voxel size and the volume of the bounding box.
7. The method for guiding part assembly according to claim 6, wherein the step of updating the current size based on the preset sampling rate and the preset point cloud density to obtain a voxel size comprises:
updating the current size based on the current size and the preset sampling rate;
filtering a plurality of voxels corresponding to the current size to obtain filtered data;
acquiring the number of point cloud data in the filtering data to obtain the number of current point clouds;
calculating the current point cloud density based on the current point cloud number and the volume of the bounding box;
updating the current size based on the current point cloud density and the preset point cloud density, and returning to the step of updating the current size based on the current size and the preset sampling rate until the difference value between the current point cloud density and the preset point cloud density is within a preset difference value range.
8. The method for guiding parts assembly according to claim 7, wherein the step of calculating the density of the current point cloud based on the number of the current point clouds and the volume of the bounding box comprises:
and calculating the ratio of the number of the current point clouds to the volume of the bounding box to obtain the density of the current point clouds.
9. The method of guiding parts assembly according to claim 7, wherein the step of updating the current size based on the current point cloud density and the preset point cloud density comprises:
judging whether the current point cloud density is smaller than the preset point cloud density or not;
if so, determining the difference value between the current size and a preset step length as the current size;
if not, determining the sum of the current size and the preset step length as the current size.
10. The method for guiding a parts assembly according to claim 1, further comprising:
and rendering the real-time point cloud data of the current part to be assembled to obtain a rendering result and displaying the rendering result.
11. The method of guiding a parts assembly according to claim 10, further comprising:
and acquiring viewpoint information of the target object, and adjusting the orientation of the rendering result based on the viewpoint information so that the adjusted rendering result is matched with the viewpoint information.
12. The guiding method of parts assembly according to claim 11, wherein the assembly pose information includes a position and an orientation, the method further comprising:
and adding an indication mark matched with the next part to be assembled in the rendering result based on the position and the direction.
13. An assembly guiding device, characterized by comprising a memory and a processor connected to each other, wherein the memory is used for storing a computer program, which when executed by the processor is used for implementing the guiding method of the part assembly of any one of claims 1-12.
14. An assembly guiding system, characterized by comprising an assembly guiding device and a collecting device, wherein the assembly guiding device is connected with the collecting device and is used for receiving scene data collected by the collecting device and processing the scene data to obtain virtual assembly information, and the assembly guiding device is the assembly guiding device according to claim 13.
15. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, is adapted to implement the method of guiding a parts assembly of any of claims 1-12.
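As an illustration of the adaptive filtering recited in claims 4 to 9 (the sketch is editorial and forms no part of the claims), the following code builds a bounding box around a point cloud, repeatedly voxel-filters it, computes the density as the point count divided by the bounding-box volume, and adjusts the voxel size by a preset step length until the density lies within a preset range of the target density. The per-voxel centroid rule, the size-update rules, the default parameter values and all names are assumptions, since the claims do not fix an implementation.

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Keep one point (the centroid) per occupied voxel of the given size."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(np.int64)
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((inverse.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out

def adaptive_filter(points, sampling_rate, target_density,
                    init_size=0.01, step=0.001, tol=0.05, max_iter=50):
    """Adjust the voxel size until the filtered cloud's density (points per unit
    of bounding-box volume) is within `tol` of the target density."""
    extents = points.max(axis=0) - points.min(axis=0)
    volume = max(float(np.prod(extents)), 1e-9)
    # Initial size derived from the preset sampling rate (assumed update rule).
    size = init_size * sampling_rate
    filtered = points
    for _ in range(max_iter):
        filtered = voxel_filter(points, size)
        density = len(filtered) / volume
        if abs(density - target_density) <= tol * target_density:
            break
        # Density too low -> voxels too large -> shrink; too high -> grow.
        size = max(size - step, 1e-6) if density < target_density else size + step
    return filtered
```

Following claim 4, both the first target point cloud data and the first source point cloud data would be passed through the same routine so that the difference between their densities stays below the preset density difference before registration.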
CN202210677574.7A 2022-06-14 2022-06-14 Guiding method for part assembly and related equipment Pending CN115309113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210677574.7A CN115309113A (en) 2022-06-14 2022-06-14 Guiding method for part assembly and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210677574.7A CN115309113A (en) 2022-06-14 2022-06-14 Guiding method for part assembly and related equipment

Publications (1)

Publication Number Publication Date
CN115309113A true CN115309113A (en) 2022-11-08

Family

ID=83855240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210677574.7A Pending CN115309113A (en) 2022-06-14 2022-06-14 Guiding method for part assembly and related equipment

Country Status (1)

Country Link
CN (1) CN115309113A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778119A (en) * 2023-06-26 2023-09-19 中国信息通信研究院 Man-machine cooperative assembly system based on augmented reality
CN116778119B (en) * 2023-06-26 2024-03-12 中国信息通信研究院 Man-machine cooperative assembly system based on augmented reality

Similar Documents

Publication Publication Date Title
US11354851B2 (en) Damage detection from multi-view visual data
US20200257862A1 (en) Natural language understanding for visual tagging
EP3798801A1 (en) Image processing method and apparatus, storage medium, and computer device
US10410089B2 (en) Training assistance using synthetic images
US20180189974A1 (en) Machine learning based model localization system
US20200234397A1 (en) Automatic view mapping for single-image and multi-view captures
US9208607B2 (en) Apparatus and method of producing 3D model
CN111710036B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
US20200258309A1 (en) Live in-camera overlays
US20210312702A1 (en) Damage detection from multi-view visual data
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
US11842514B1 (en) Determining a pose of an object from rgb-d images
JP2012234494A (en) Image processing apparatus, image processing method, and program
US10950056B2 (en) Apparatus and method for generating point cloud data
IL284840B (en) Damage detection from multi-view visual data
US20210225038A1 (en) Visual object history
CN110689573A (en) Edge model-based augmented reality label-free tracking registration method and device
CN115309113A (en) Guiding method for part assembly and related equipment
KR20160046399A (en) Method and Apparatus for Generation Texture Map, and Database Generation Method
KR20100006736A (en) System and apparatus for implementing augmented reality, and method of implementing augmented reality using the said system or the said apparatus
CN116843867A (en) Augmented reality virtual-real fusion method, electronic device and storage medium
KR20130039173A (en) Apparatus and method for correcting 3d contents by using matching information among images
WO2022011560A1 (en) Image cropping method and apparatus, electronic device, and storage medium
KR102260519B1 (en) 3D stereoscopic image conversion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination