CN115937043B - Touch-assisted point cloud completion method
- Publication number: CN115937043B (application CN202310009699.7A)
- Authority: CN (China)
- Legal status: Active (granted)
Classifications
- Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention belongs to the field of three-dimensional point cloud completion and specifically relates to a touch-assisted three-dimensional point cloud completion method. It addresses the loss of local detail that occurs when a complete point cloud is generated from a single-view missing point cloud, and improves the completion result by exploiting local tactile information. The method mainly comprises the following steps: step 1, initializing a PyBullet simulation environment and acquiring a tactile image using a robotic arm fitted with an electric gripper and a tactile sensor; step 2, converting the tactile image into a tactile point cloud and splicing it with the missing point cloud; step 3, training the PoinTr network on a dataset; and step 4, inputting the spliced point cloud into the PoinTr network to obtain the completed point cloud.
Description
Technical Field
The invention belongs to the field of three-dimensional point cloud completion, and particularly relates to a touch-assisted point cloud completion method.
Background
Understanding 3D space is critical to understanding how humans and machines interact with surrounding objects. The point cloud, as an easily acquired 3D data structure, has prompted extensive computer-vision research on understanding 3D scenes and objects. However, raw point clouds captured by lidar scanners or RGB-D cameras are inevitably sparse and incomplete owing to limited sensor resolution, object occlusion, surface materials, and similar factors. Benefiting from large-scale point cloud datasets, deep-learning-based point cloud completion has attracted increasing research interest, and recent advances in three-dimensional point cloud processing have further promoted completion research. The pioneering work PointNet applies an MLP independently at each point and aggregates features through a pooling operation to achieve permutation invariance. PointCleanNet is the first learning-based completion architecture; it proposes an encoder-decoder framework, adopts FoldingNet to map two-dimensional points onto a three-dimensional surface by deforming a two-dimensional plane, introduces a coarse-to-fine completion strategy, and gradually restores details in the missing part. SeedFormer introduces a new shape representation, Patch Seeds, which captures the overall structure from the partial input while retaining local region information, and introduces an upsampling transformer module for the completion task. PoinTr formulates point cloud completion as a set-to-set translation problem: it represents the point cloud as an unordered set of point groups with position embeddings, converts the missing point cloud into a set of point agents, embeds geometry-aware blocks, and generates the complete point cloud with a Transformer-based encoder-decoder architecture. However, because point clouds are discrete and the local regions they predict are unstructured, these methods struggle to maintain a good point distribution in local regions and therefore cannot capture local geometric details and structures well, such as smooth areas, sharp edges, and corners.
Touch is another way of perceiving the 3D shape of an object. For robots, most tactile sensors measure the geometry of the contact surface. By touching an object several times and combining the position and pose of the sensor at each touch, a robot can reconstruct the object's shape without being affected by ambiguities caused by surface color or material. However, tactile information is limited by the size and scale of the sensor: each touch yields only local area information, so reconstruction based on tactile sensing alone requires many touches and a long time to recover the complete shape of an object and is therefore difficult to apply in practice.
Therefore, in the field of three-dimensional point cloud completion, there is a current need for a completion method that exploits both the learning capability of neural networks and the local tactile information of an object, so as to improve the efficiency and quality of point cloud completion.
Disclosure of Invention
Existing three-dimensional point cloud completion methods have certain limitations. Neural-network-based completion methods take the missing point cloud as the network input and struggle to capture local geometric detail in the missing region, so the reconstruction of complex objects is poor. Object reconstruction from tactile information alone is limited by the sensor's size and scale at each touch; because each touch yields only local area information, reconstruction tends to be inefficient. The invention provides a three-dimensional point cloud completion method assisted by DIGIT simulated tactile point clouds. In the PyBullet simulation engine, a robotic arm fitted with an electric gripper and the DIGIT tactile sensor provided by Facebook is used to acquire tactile information of the object surface in a controllable way and to generate a tactile point cloud corresponding to the touched position. Meanwhile, 3D deep learning and a large-scale 3D shape repository are used to obtain an implicit shape prior of the object during training. The tactile point cloud contains local geometric information of the object; by merging the local tactile point cloud into the incomplete point cloud and using machine learning to constrain and optimize the global shape, the method overcomes the inability of common learning-based networks to capture local details.
The invention provides a touch-assisted point cloud completion method, comprising the following steps:
step 1, initializing a PyBullet simulation environment, selecting a region of the object's missing point cloud to touch with a robotic arm fitted with an electric gripper and a DIGIT tactile sensor, and acquiring a tactile image and pose information of the touched region;
step 2, converting the tactile image into a preliminary tactile point cloud, converting the preliminary tactile point cloud from the world coordinate system to the target object coordinate system, and splicing it with the object's missing point cloud;
step 3, training the PoinTr network using the dataset;
and step 4, inputting the spliced point cloud from step 2 into the trained PoinTr network to obtain the completed point cloud.
Further, in step 1, the tactile image of the object's missing-point-cloud region is obtained as follows: the robotic arm is moved and the gripper is closed so as to touch the surface of the missing-point-cloud region, generating a tactile image of the touched region;
the depth information H ∈ R^(n×m) of the object surface captured by the DIGIT tactile sensor at that moment is recorded, together with the spatial coordinates x ∈ R^3 of the DIGIT sensor center and the rotation vector r ∈ R^3, so as to determine the pose of the touched region in the world coordinate system.
Further, in step 2, the tactile image is converted into a preliminary tactile point cloud, which is then converted from the world coordinate system to the target object coordinate system and spliced with the object's missing point cloud, specifically as follows:
step 2.1, constructing an initial planar point cloud on the tangent plane at the central point of the DIGIT tactile sensor, centered on that point;
step 2.2, obtaining a preliminary tactile point cloud P_c by superimposing the depth information H of the touched region onto the initial planar point cloud; then converting the rotation vector r of the DIGIT tactile sensor recorded at the time of touch into a rotation matrix R, and combining it with the spatial coordinates x to obtain the tactile point cloud P_t in the world coordinate system:
P_t = P_c · R + x
step 2.3, splicing the tactile point cloud P_t in the world coordinate system with the object's missing point cloud, and inputting the result into the PoinTr network to obtain the completed point cloud.
Further, different touch regions can be selected on the missing point cloud and steps 1-4 repeated; choosing a suitable touch position helps reconstruct the details of the point cloud.
Beneficial effects: by merging the local tactile point cloud into the incomplete point cloud, the point cloud completion method exploiting local tactile information of the object improves both the efficiency and the quality of point cloud completion.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 shows a touched object and the corresponding tactile image in the PyBullet simulation environment.
Fig. 3 is an overall framework diagram of the point cloud completion network.
Fig. 4 shows point cloud completion results with one touch added.
Fig. 5 shows point cloud completion results for different numbers of touches.
Fig. 6 shows point cloud completion results for different touch positions.
Detailed Description
The present invention will be further described in detail with reference to the following examples, which are only for the purpose of illustrating the invention and are not to be construed as limiting the scope of the invention.
The touch-assisted point cloud completion method, as shown in Fig. 1, comprises the following steps:
Step 1: in the PyBullet simulation engine, a robotic arm fitted with a two-finger electric gripper is built, and a DIGIT tactile sensor provided by Facebook is mounted on each finger of the gripper to acquire tactile images. The object is placed at a fixed position in front of the robotic arm, with the electric gripper initially located directly above the target object. The robotic arm is then moved and the gripper is closed to touch the surface of the target object, as shown in Fig. 2(a), stopping once a clear contact pattern appears in the feedback image of the DIGIT sensor; the tactile image of the touched region comes from the contact-surface depth information provided by the DIGIT sensor, as shown in Fig. 2(b). The depth information H ∈ R^(n×m) obtained on the DIGIT tactile sensor at this moment is recorded, together with the spatial coordinates x ∈ R^3 of the DIGIT sensor center and the rotation vector r ∈ R^3, so as to determine the pose of the touched region in the world coordinate system.
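The following is only an illustrative sketch of how such a simulation step might be set up and how the sensor pose could be recorded; the URDF models, joint targets, and the link index digit_link_index are placeholders rather than the configuration actually used in the invention, and rendering of the DIGIT tactile image itself (for example via Facebook's open-source TACTO simulator) is omitted.

```python
# Sketch only: initialize PyBullet and record the sensor pose after a touch.
# URDF paths, joint targets and digit_link_index are placeholders, not the
# configuration of the invention; DIGIT image rendering itself is omitted.
import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                   # headless simulation
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)       # stand-in arm
obj = p.loadURDF("duck_vhacd.urdf", basePosition=[0.6, 0.0, 0.05])  # stand-in object

digit_link_index = 6               # hypothetical link carrying the DIGIT sensor

# Drive the arm toward the object (placeholder joint targets), then simulate.
for j, target in enumerate([0.0, 0.6, 0.0, -1.2, 0.0, 0.9, 0.0]):
    p.setJointMotorControl2(robot, j, p.POSITION_CONTROL, targetPosition=target)
for _ in range(240):
    p.stepSimulation()

# Record the sensor pose in the world frame: position x and orientation.
pos, quat = p.getLinkState(robot, digit_link_index)[:2]
x = np.array(pos)                                            # sensor centre, x in R^3
R = np.array(p.getMatrixFromQuaternion(quat)).reshape(3, 3)  # orientation as a matrix
# A depth map H (n x m) would come from the simulated DIGIT image at this pose.
```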
Step 2: the tactile image is converted into a preliminary tactile point cloud, which is converted from the world coordinate system to the object coordinate system and spliced with the object's missing point cloud, as follows:
Step 2.1: an initial planar point cloud is constructed on the tangent plane at the central point of the DIGIT tactile sensor, centered on that point.
Step 2.2: a preliminary tactile point cloud P_c is obtained by superimposing the depth information H of the touched region onto the initial planar point cloud. The rotation vector r of the DIGIT tactile sensor recorded at the time of touch is then converted into a rotation matrix R and combined with the spatial coordinates x to obtain the tactile point cloud P_t in the world coordinate system:
P_t = P_c · R + x
Noise is removed from the preliminary tactile point cloud P_t with a given threshold.
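Steps 2.1 and 2.2 can be sketched as follows. The sensor pixel pitch, the noise threshold, and the use of SciPy for the rotation-vector-to-matrix conversion are assumptions made for illustration; only the overall conversion, superimposing H on a planar point cloud and applying P_t = P_c · R + x, follows the description above.

```python
# Sketch only: turn a DIGIT depth map H (n x m) plus the sensor pose (x, r)
# into a tactile point cloud P_t in the world frame.
import numpy as np
from scipy.spatial.transform import Rotation

def tactile_depth_to_world(H, x, r, pixel_size=1e-4, depth_thresh=1e-4):
    """H: (n, m) depth map, x: (3,) sensor centre, r: (3,) rotation vector.
    pixel_size and depth_thresh are illustrative values, not those of the patent."""
    n, m = H.shape
    # Step 2.1: initial planar point cloud on the tangent plane at the sensor
    # centre (the local x-y plane here), centred on the sensor centre.
    u, v = np.meshgrid(np.arange(m) - m / 2.0, np.arange(n) - n / 2.0)
    P_c = np.stack([u * pixel_size, v * pixel_size, np.zeros_like(u)], -1).reshape(-1, 3)
    # Step 2.2: superimpose the depth information along the local surface normal.
    depths = H.reshape(-1)
    P_c[:, 2] += depths
    P_c = P_c[depths > depth_thresh]          # simple threshold to drop noise
    # Rotation vector -> rotation matrix, then transform into the world frame.
    # The text writes P_t = P_c * R + x; with SciPy's column-vector convention
    # this corresponds to multiplying row-vector points by R.T.
    R = Rotation.from_rotvec(r).as_matrix()
    return P_c @ R.T + x

# Usage with dummy data:
H = np.zeros((160, 120)); H[60:100, 40:80] = 5e-4         # fake contact patch
P_t = tactile_depth_to_world(H, x=np.array([0.6, 0.0, 0.12]),
                             r=np.array([0.0, np.pi, 0.0]))
```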
Step 2.3: using the spatial coordinates x and the rotation vector r of the DIGIT tactile sensor, the preliminary tactile point cloud P_t is transformed to the corresponding position in the object coordinate system and spliced with the target object's missing point cloud, as follows:
the missing point cloud of the target object lies in the object coordinate system, while the tactile point cloud P_t lies in the world coordinate system; the object coordinate system, the robot coordinate system, and the world coordinate system are aligned so that the object's missing point cloud and the tactile point cloud P_t lie in the same coordinate system.
the world coordinate system and the robot coordinate system are assumed to coincide, and only the object coordinate system and the robot coordinate system are required to be aligned. Three non-coincident points w1, w2 and w3 in a robot coordinate system are taken, positions R1, R2 and R3 of the three points in a target object coordinate system are recorded, and a rotation matrix R required for aligning the robot coordinate system with the object coordinate system is obtained by solving a linear equation r And translation vector T r :
X r =R r *X w +T r
X r =[r1,r2,r3]
X w =[w1,w2,w3]
In order to simplify the calculation process, it is assumed that the target object coordinate system and the robot coordinate system rotate matrix R r Is a unit matrix, thereby only requiring a translation vector T between two coordinate systems r . Placing the target object at a fixed position w1= (m, 0) in the x-axis direction of the robot coordinate system T For a certain point p in the target object coordinate system i Corresponding to
P in robot coordinate system j The method comprises the following steps:
p j =p i +w1
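An illustrative sketch of this alignment follows. The general least-squares (Kabsch-style) fit of R_r and T_r from point correspondences and the specific numerical values are assumptions; only the relation X_r = R_r · X_w + T_r and the simplified translation-only case p_j = p_i + w1 come from the description above.

```python
# Sketch: recover (R_r, T_r) with X_r = R_r * X_w + T_r from corresponding
# points, then the simplified translation-only case p_j = p_i + w1.
import numpy as np

def fit_rigid_transform(robot_pts, object_pts):
    """robot_pts, object_pts: (k, 3) corresponding points in the robot frame and
    the object frame. Returns (R_r, T_r) with object_pts ~ robot_pts @ R_r.T + T_r."""
    cw, cr = robot_pts.mean(axis=0), object_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd((robot_pts - cw).T @ (object_pts - cr))  # Kabsch fit
    R_r = (U @ Vt).T
    if np.linalg.det(R_r) < 0:          # guard against picking a reflection
        Vt[-1] *= -1
        R_r = (U @ Vt).T
    T_r = cr - cw @ R_r.T
    return R_r, T_r

# Three non-collinear corresponding points (illustrative values only).
w_pts = np.array([[0.5, 0.0, 0.0], [0.5, 0.1, 0.0], [0.5, 0.0, 0.1]])
r_pts = w_pts - np.array([0.5, 0.0, 0.0])     # frames here differ only by a shift
R_r, T_r = fit_rigid_transform(w_pts, r_pts)

# Simplified case from the text: R_r is the identity and the object sits at
# w1 = (m, 0, 0)^T on the robot frame's x-axis, so p_j = p_i + w1.
m_offset = 0.5                                # illustrative value of m
w1 = np.array([m_offset, 0.0, 0.0])
p_i = np.array([0.01, 0.02, 0.03])            # a point in the object frame
p_j = p_i + w1                                # the same point in the robot frame
```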
Because the raw tactile point cloud is very dense, it would affect the performance of the local attention module in the subsequent PoinTr network; to keep the model from attending too much to local information and neglecting long-range information, the tactile point cloud is downsampled to 100 points using farthest point sampling. After the coordinate transformations above, the resulting tactile point cloud can be spliced with the target object's missing point cloud.
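A straightforward NumPy sketch of farthest point sampling down to 100 points, followed by the splicing with the missing point cloud, is given below; everything apart from the 100-point target is an illustrative assumption.

```python
# Sketch: farthest point sampling of the tactile point cloud down to 100 points,
# followed by splicing (concatenation) with the missing point cloud.
import numpy as np

def farthest_point_sampling(points, k=100):
    """points: (N, 3). Greedily picks k points, each farthest from those already chosen."""
    N = points.shape[0]
    if N <= k:
        return points
    selected = np.zeros(k, dtype=np.int64)              # first selected index is 0
    dist = np.linalg.norm(points - points[0], axis=1)   # distance to the selected set
    for i in range(1, k):
        selected[i] = int(np.argmax(dist))
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[i]], axis=1))
    return points[selected]

# Usage with dummy data (P_t: dense tactile cloud, missing_pc: partial object cloud):
P_t = np.random.rand(5000, 3)
missing_pc = np.random.rand(2048, 3)
spliced = np.concatenate([missing_pc, farthest_point_sampling(P_t, k=100)], axis=0)
```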
Step 3, training the PoinTr network by using the data set;
the PoinTr network is based on a transducer encoder-decoder structure and comprises a feature extraction module, a geometric sense encoder module, a geometric sense decoder module and an up-sampling module.
First, the feature extraction module extracts feature vectors from the spliced point cloud and inputs them into the geometry-aware encoder module, which establishes the geometric relations among the points; the geometry-aware decoder module queries these geometric relations to generate the predicted point agents and agent features of the missing region.
Finally, the predicted point agents and agent features are input into the upsampling module, which recovers the detailed local shape centered on each predicted point agent and outputs the completed point cloud.
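The following is a deliberately simplified, runnable schematic of this encoder-decoder flow, not the actual PoinTr implementation: the proxy construction, geometry-aware blocks, and FoldingNet-style upsampling of the real network are replaced here by stand-ins (random subsampling, vanilla Transformer layers, and a linear patch head), so the sketch only illustrates how the modules are composed.

```python
# Schematic sketch of the data flow only (not the actual PoinTr code): point
# proxies from the spliced cloud -> Transformer encoder -> Transformer decoder
# with learned queries for the missing region -> upsampling head that expands
# each predicted proxy into a local patch of points.
import torch
import torch.nn as nn

class ToyPoinTrLikeNet(nn.Module):
    def __init__(self, n_proxies=128, n_queries=64, d=256, pts_per_proxy=32):
        super().__init__()
        self.n_proxies, self.n_queries = n_proxies, n_queries
        self.feat = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, d))
        self.pos = nn.Sequential(nn.Linear(3, d), nn.ReLU(), nn.Linear(d, d))
        enc_layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.queries = nn.Parameter(torch.randn(n_queries, d))   # missing-region proxies
        self.upsample = nn.Linear(d, pts_per_proxy * 3)          # proxy -> local patch

    def forward(self, xyz):                    # xyz: (B, N, 3) spliced point cloud
        B = xyz.shape[0]
        # "Feature extraction": subsample proxy centres and embed them.
        idx = torch.randperm(xyz.shape[1])[: self.n_proxies]     # stand-in for FPS+grouping
        centres = xyz[:, idx, :]                                 # (B, P, 3)
        tokens = self.feat(centres) + self.pos(centres)          # features + position emb.
        memory = self.encoder(tokens)                            # geometric relations
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        proxy_feat = self.decoder(q, memory)                     # predicted proxies (B, Q, d)
        patches = self.upsample(proxy_feat).view(B, self.n_queries, -1, 3)
        return patches.reshape(B, -1, 3)                         # completed points

net = ToyPoinTrLikeNet()
out = net(torch.rand(2, 2148, 3))          # e.g. 2048 missing + 100 tactile points
print(out.shape)                           # (2, 2048, 3)
```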
The training dataset is ShapeNet-55, which contains 41,952 object models from 55 categories. For each object model, 8192 points are randomly sampled from the surface as the complete point cloud; to account for the uncertainty of the viewing angle of the missing point cloud at test time, a viewpoint is chosen at random and the 4096 points farthest from that viewpoint are removed, yielding the partial missing point cloud used for training. The overall network framework is shown in Fig. 3.
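The generation of a training pair just described can be sketched as follows; the 8192-point complete cloud and the removal of the 4096 points farthest from a random viewpoint follow the description above, while the unit-sphere viewpoint sampling and the stand-in surface points are assumptions.

```python
# Sketch: build one ShapeNet-55-style training pair as described above: an
# 8192-point complete cloud, minus the 4096 points farthest from a random view.
import numpy as np

def make_partial(complete, n_drop=4096, rng=None):
    """complete: (8192, 3). Returns the partial cloud with the n_drop points
    farthest from a randomly drawn viewpoint removed."""
    rng = rng or np.random.default_rng()
    viewpoint = rng.normal(size=3)
    viewpoint /= np.linalg.norm(viewpoint)             # assumed: viewpoint on the unit sphere
    d = np.linalg.norm(complete - viewpoint, axis=1)   # distance of every point to the view
    keep = np.argsort(d)[: complete.shape[0] - n_drop] # keep the nearest points
    return complete[keep]

complete = np.random.rand(8192, 3) - 0.5               # stand-in for sampled surface points
partial = make_partial(complete)                       # (4096, 3) training input
```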
Step 4: the spliced point cloud is input into the trained PoinTr network to obtain the completed point cloud. If further touches are to be added, return to step 1.
To verify the effect of DIGIT tactile assistance on point cloud completion, several experiments were performed: the effect of adding a single touch, the effect of different numbers of touches, and the effect of different touch positions. The object models used in the experiments were selected from the ShapeNet-55 training dataset. The method uses the PoinTr network to reconstruct the missing point cloud; PoinTr is based on a Transformer encoder-decoder structure with local attention modules in both the encoder and the decoder, which allows it to perform the completion task effectively. All of the following experimental results compare the method of the present invention against the PoinTr network baseline.
The experimental results for adding a single touch are shown in Table 1. The evaluation metric in Table 1 is the chamfer distance (CD2) between the completed point cloud and the ground-truth point cloud; smaller values indicate better reconstruction.
Table 1. Comparison of point cloud completion results with one touch added
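For reference, the chamfer distance used as the evaluation metric can be computed as in the sketch below; the exact CD2 convention of the experiments (for example whether the two directions are summed or averaged) is not stated above, so the symmetric sum-of-means form here is an assumption.

```python
# Sketch: squared L2 chamfer distance between a completed cloud and the
# ground-truth cloud; lower is better.
import numpy as np

def chamfer_distance_l2(pred, gt):
    """pred: (N, 3), gt: (M, 3). Sum of the mean squared nearest-neighbour
    distances in both directions (one common CD2 convention, assumed here).
    For large clouds this should be chunked or use a KD-tree."""
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)   # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.rand(2048, 3)    # completed point cloud (stand-in)
gt = np.random.rand(2048, 3)      # ground-truth point cloud (stand-in)
print(chamfer_distance_l2(pred, gt))
```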
Fig. 4 shows the completion of a sphere, a cube, a guitar, and a bucket. All objects can be reconstructed, and most are reconstructed better after the tactile information is added. For more complex objects such as the bucket and the guitar, the shape details are not reconstructed well before the tactile point cloud is added, and a more realistic object is obtained only after tactile assistance is used. For simpler objects such as the sphere and the cube, adding the tactile information can actually degrade the reconstruction: first, the models themselves are so regular and simple that a good reconstruction is already achievable from the partial point cloud alone; second, when the model is small, the simulated DIGIT sensor exhibits lens distortion at the edges of its depth-map imaging range. Considering that in practice reconstruction mostly targets more complex objects, the method remains feasible in most scenarios.
The effect of adding multiple touches on completion is shown in Fig. 5 and Table 2: most objects are reconstructed better as more touches are added.
Table 2. Comparison of point cloud completion results for different numbers of touches
Fig. 5 shows the reconstruction results for the bucket, guitar, and basket. The columns for each object are: (a) the real point cloud; (b) the missing point cloud with three touches added; (c) completion without touch; (d) completion with one touch added; (e) completion with two touches added; (f) completion with three touches added. When no touch is added, the far side of all three models cannot be restored and lacks local detail: the bucket and the basket are missing handles on the far side, and the guitar's missing shape is not reconstructed well. After the tactile point clouds are added, the local details on the far side of all three models are recovered well, and the originally divergent points become more concentrated. In short, for all three models the local details match reality more closely as the number of added tactile point clouds increases; in the complete point clouds output by the network without touch, the points recovered in the originally missing region are sparse and divergent, whereas after several tactile point clouds are added they become denser and more concentrated.
The effect of different touch positions on completion is shown in Fig. 6, using the bucket, chair, and guitar as examples: choosing a suitable touch position allows the details of the point cloud to be reconstructed better. The columns in the figure are: (a) the real point cloud; (b) a touch added at position 1; (c) the completion of (b); (d) a touch added at position 2; (e) the completion of (d). Position 1 and position 2 only distinguish the two different touch regions and do not denote exact locations. The results show that adding a touch in a region where completion is poor improves the result considerably: for the bucket, when the touch is on the upper middle of the bucket the reconstruction has no way of recovering the corresponding local details, whereas when the touch is on the handle the details of the point cloud are reconstructed well. For the chair and the guitar, when the touched region was already recovered well before the touch was added, the tactile point cloud contributes little during reconstruction, so the final result does not reach the ideal target; this is essentially consistent with the conclusion drawn above for simple-shaped objects with an added touch.
It should be emphasized that, although the simulation environment used by the present method is exemplified with the DIGIT tactile sensor provided by Facebook, the method is not limited thereto and applies equally to many other types of tactile sensors. Moreover, although the method was developed in the context of point cloud completion, the simulation-environment construction and experimental procedure presented here, including building the simulation environment, acquiring simulated tactile point clouds, and splicing point cloud data of different modalities, are also applicable to multi-modal point cloud processing tasks such as the fusion of tactile and visual point clouds.
Claims (8)
1. A touch-assisted point cloud completion method, characterized by comprising the following steps:
step 1, initializing a PyBullet simulation environment, selecting a region of the object's missing point cloud to touch using a robotic arm fitted with an electric gripper and a DIGIT tactile sensor, and acquiring a tactile image and pose information of the touched region;
the tactile image of the object's missing-point-cloud region is obtained by: moving the robotic arm and controlling the closing of the gripper so as to touch the surface of the missing-point-cloud region of the object, and generating a tactile image of the touched region; recording the depth information H ∈ R^(n×m) of the object surface captured by the DIGIT tactile sensor at that moment, together with the spatial coordinates x ∈ R^3 of the DIGIT sensor center and the rotation vector r ∈ R^3, so as to determine the pose of the touched region in the world coordinate system;
step 2, converting the tactile image into a preliminary tactile point cloud, converting the preliminary tactile point cloud from the world coordinate system to the target object coordinate system, and splicing it with the object's missing point cloud, specifically comprising the following steps:
step 2.1, constructing an initial planar point cloud on the tangent plane at the central point of the DIGIT tactile sensor, centered on that point;
step 2.2, obtaining a preliminary tactile point cloud P_c by superimposing the depth information H of the touched region onto the initial planar point cloud; then converting the rotation vector r of the DIGIT tactile sensor recorded at the time of touch into a rotation matrix R, and combining it with the spatial coordinates x to obtain the tactile point cloud P_t in the world coordinate system:
P_t = P_c · R + x
step 2.3, splicing the tactile point cloud P_t in the world coordinate system with the object's missing point cloud, and inputting the spliced point cloud into the PoinTr network to obtain the completed point cloud;
step 3, training the PoinTr network by using the data set;
and step 4, inputting the spliced point cloud from step 2 into the trained PoinTr network to obtain the completed point cloud.
2. The touch-assisted point cloud completion method according to claim 1, wherein different touch regions of the missing point cloud are selected to be touched and steps 1-4 are repeated, thereby facilitating reconstruction of the details of the point cloud.
3. The method of claim 1, wherein step 2.3 specifically comprises: the missing point cloud of the target object lies in the object coordinate system, while the tactile point cloud P_t lies in the world coordinate system; the object coordinate system, the robot coordinate system, and the world coordinate system are aligned so that the object's missing point cloud and the tactile point cloud P_t lie in the same coordinate system.
4. The touch-assisted point cloud completion method according to claim 3, wherein, assuming that the world coordinate system and the robot coordinate system coincide, only the object coordinate system and the robot coordinate system need to be aligned; three non-coincident points w1, w2, w3 are taken in the robot coordinate system, their positions r1, r2, r3 in the target object coordinate system are recorded, and the rotation matrix R_r and translation vector T_r required to align the robot coordinate system with the object coordinate system are obtained by solving the linear equations:
X_r = R_r · X_w + T_r
X_r = [r1, r2, r3]
X_w = [w1, w2, w3].
5. The touch-assisted point cloud completion method according to claim 4, wherein the rotation matrix R_r between the target object coordinate system and the robot coordinate system is assumed to be the identity matrix, so that only the translation vector T_r between the two coordinate systems is required; the target object is placed at a fixed position w1 = (m, 0, 0)^T on the x-axis of the robot coordinate system, and a point p_i in the target object coordinate system corresponds to the point p_j in the robot coordinate system given by:
p_j = p_i + w1.
6. The touch-assisted point cloud completion method according to claim 1, wherein the tactile point cloud is downsampled to 100 points using the farthest point sampling method.
7. The method of claim 1, wherein the PoinTr network is based on a Transformer encoder-decoder architecture.
8. The method of claim 1, wherein the PoinTr network comprises a feature extraction module, a geometry-aware encoder module, a geometry-aware decoder module, and an upsampling module;
first, the feature extraction module extracts feature vectors from the spliced point cloud and inputs them into the geometry-aware encoder module, which establishes the geometric relations among the points; the geometry-aware decoder module queries these geometric relations to generate predicted point agents and agent features for the missing region; finally, the predicted point agents and agent features are input into the upsampling module, which recovers the detailed local shape centered on each predicted point agent and outputs the completed point cloud.
Priority Applications (1)
- CN202310009699.7A, filed 2023-01-04: Touch-assisted point cloud completion method (granted as CN115937043B)
Publications (2)
- CN115937043A, published 2023-04-07
- CN115937043B, granted 2023-07-04
Family ID: 86650847
Family Applications (1)
- CN202310009699.7A, filed 2023-01-04 (CN), status Active, granted as CN115937043B: Touch-assisted point cloud completion method
Families Citing this family (1)
- CN117274764B, granted 2024-02-13, 南京邮电大学: Multi-mode feature fusion three-dimensional point cloud completion method
Citations (1)
- CN113205466A, published 2021-08-03, 南京航空航天大学: Incomplete point cloud completion method based on hidden space topological structure constraint
Family Cites Families (8)
- JP4778591B2, granted 2011-09-21, Panasonic: Tactile treatment device
- US10394327B2, granted 2019-08-27, University of Washington: Integration of auxiliary sensors with point cloud-based haptic rendering and virtual fixtures
- CN112066874A, published 2020-12-11, 苏州环球科技股份有限公司: Multi-position 3D scanning online detection method
- CN113256640B, granted 2022-05-24, 北京理工大学: Method and device for partitioning network point cloud and generating virtual environment based on PointNet
- CN113808261B, granted 2022-10-21, 大连理工大学: Panorama-based self-supervised learning scene point cloud completion data set generation method
- CN114187422B, granted 2024-08-20, 华中科技大学: Three-dimensional measurement method and system based on visual and tactile fusion
- CN115375842A, published 2022-11-22, 深圳大学: Plant three-dimensional reconstruction method, terminal and storage medium
- CN115511962B, granted 2024-05-28, 上海人工智能创新中心: Target active detection method and system based on photoelectric tactile sensor
Also Published As
- CN115937043A, published 2023-04-07
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant