CN110243370A - A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning - Google Patents
- Publication number
- CN110243370A (application CN201910408713.4A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- map
- semantic
- image
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
A method for constructing a three-dimensional semantic map of an indoor environment based on deep learning, comprising the steps of: 1) acquiring an RGB-D image sequence of the target indoor scene with the Kinect depth camera carried by a mobile robot; 2) performing feature extraction and processing on each acquired RGB-D image with a trained semantic segmentation network based on RGB-D images; 3) estimating the corresponding robot pose Pt from each input image frame; 4) optimizing the robot pose with the Randomized Ferns real-time relocalization and loop closure detection algorithm; 5) constructing a point cloud map from keyframes and fusing the point cloud of each newly acquired image frame with the constructed point cloud map; 6) mapping the pixel-level semantic labeling results of the keyframes onto the corresponding point cloud map; 7) refining the semantic labeling of the constructed three-dimensional point cloud map with the semantic labels of the acquired keyframes to obtain the three-dimensional semantic map of the indoor environment. The method builds the indoor semantic map in real time and raises the level of intelligence of the mobile robot's environmental perception.
Description
Technical Field
The invention belongs to the technical field of indoor navigation for mobile robots, and in particular relates to a method for constructing a three-dimensional semantic map of an indoor environment based on deep learning.
Background Art
Constructing a map of the target scene is an important research topic in autonomous navigation of mobile robots. Semantically labeling the points of the constructed map to generate a high-precision point cloud map with semantic information is of great practical value for intelligent navigation of mobile robots in unknown environments. With a semantic map, a mobile robot can communicate naturally with users and thus complete human-robot interaction tasks such as autonomous driving and home service.
When the information about its environment is completely unknown, a mobile robot has no prior knowledge of either the environment or its own position. It must therefore acquire information about the environment through its on-board sensors while moving, so as to build a map of the environment and locate itself within that map; this is Simultaneous Localization and Mapping (SLAM). Existing scene-mapping methods extract and match feature points across the image sequence of the target scene and thereby obtain a sparse point cloud or landmark map; however, a sparse point cloud map alone is insufficient for human-robot interaction tasks such as autonomous driving and home service.
A scene map with high-level semantic information enables a robot to recognize and model the objects present in space and to understand an unknown scene more fully, laying the foundation for more advanced human-robot interaction and more complex tasks. Traditional point cloud labeling relies on geometric information about the environment or on user-guided annotation; such labeling is not accurate enough and must be performed offline. With the rapid development of deep learning in image perception, in particular the success of convolutional neural networks (CNNs) in image classification, many researchers have begun to apply deep learning to image semantic segmentation, thereby providing accurate pixel-level semantic labels for point cloud maps and enabling real-time construction of high-precision point cloud maps of indoor environments. Research on deep-learning-based semantic mapping of indoor scenes therefore has important theoretical significance and broad application prospects.
Summary of the Invention
To overcome the above problems of the prior art, the object of the present invention is to provide a method for constructing a three-dimensional semantic map of an indoor environment based on deep learning. The invention applies deep learning to the three-dimensional semantic mapping algorithm for indoor environments and can perform accurate pixel-level semantic labeling of the three-dimensional point cloud in real time, thereby building a three-dimensional semantic map of the indoor environment in real time. The method is real-time and accurate and does not need to be performed offline.
To achieve the above object, the technical solution adopted by the present invention is:
A method for constructing a semantic map of an indoor environment based on deep learning, comprising the following specific steps:
Step 1: acquire an RGB-D image sequence of the target indoor scene with the Kinect depth camera carried by the mobile robot;
Step 2: perform feature extraction and processing on each acquired RGB-D image frame with a semantic segmentation network based on RGB-D images;
Step 3: estimate the corresponding robot pose Pt from each input image frame;
Step 4: optimize the robot pose with the Randomized Ferns real-time relocalization and loop closure detection algorithm;
Step 5: construct a point cloud map from keyframes and fuse the point cloud of each newly acquired image frame with the constructed point cloud map;
Step 6: map the pixel-level semantic labeling results of the keyframes onto the corresponding point cloud map and obtain the semantic labels of the keyframes;
Step 7: use the newly acquired keyframes to refine the semantic labeling information of the constructed three-dimensional point cloud map.
In step 2, the semantic segmentation network based on RGB-D images performs feature extraction and processing on each acquired RGB-D image frame as follows: the image cascade network ICNet is adopted, the depth information of the image is used as a fourth input channel of the network, and pixel-level semantic prediction is performed on each input image frame.
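As an illustration of the four-channel input described above, the following is a minimal sketch (not part of the patent text) of stacking an RGB image with its aligned depth map into a single RGB-D tensor before feeding it to the segmentation network; the array shapes and normalization constants are assumptions.

```python
import numpy as np

def make_rgbd_input(rgb, depth, max_depth_mm=10000.0):
    """Stack an aligned RGB image and depth map into a 4-channel RGB-D input.

    rgb   : (H, W, 3) uint8 color image
    depth : (H, W)    uint16 depth in millimeters (Kinect-style)
    Returns a (4, H, W) float32 tensor in channel-first layout.
    """
    rgb_f = rgb.astype(np.float32) / 255.0               # normalize color to [0, 1]
    depth_f = depth.astype(np.float32) / max_depth_mm    # normalize depth to [0, 1]
    depth_f = np.clip(depth_f, 0.0, 1.0)[..., None]      # (H, W, 1)
    rgbd = np.concatenate([rgb_f, depth_f], axis=-1)     # (H, W, 4), depth as 4th channel
    return np.transpose(rgbd, (2, 0, 1))                 # (4, H, W) for CNN input

# usage: rgbd = make_rgbd_input(color_frame, depth_frame)
```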
In step 3, the robot pose Pt corresponding to each input image frame is estimated as follows: the geometric pose estimate from the depth image and the photometric pose estimate from the RGB image are combined, and the robot pose Pt is obtained by minimizing the point-to-plane error and the photometric error.

Point-to-plane error:

Eicp = Σk ((vk − T·vk^t) · nk)²

where vk^t is the k-th vertex of the current depth image; vk and nk are the corresponding vertex and normal of the previous frame; T is the current pose transformation matrix.

Photometric error:

Ergb = Σu (It(u) − It−1(π(T·π⁻¹(u))))²

where It(u) is the gray value of the current RGB frame at pixel u, and It−1(π(T·π⁻¹(u))) is the gray value at the projection of pixel u into the previous RGB frame.

Joint loss function: Etrack = Eicp + 0.1·Ergb

Gauss-Newton nonlinear least squares is used to obtain the updated pose transformation matrix T′ that minimizes Etrack, so that the pose of the current frame after the update is Pt = T′Pt−1; the keyframe sequence used for point cloud map construction is then determined from the robot pose relationship between adjacent image frames.
In step 4, the robot pose is optimized with the Randomized Ferns real-time relocalization and loop closure detection algorithm as follows: each input image frame is encoded, the similarity between image frames is computed from the code values, whether to add a new keyframe is decided from this similarity, and a similarity transformation matrix is solved for each new keyframe to perform loop closure detection.
In step 5, the point cloud map is constructed from keyframes and the point cloud of each newly acquired image frame is fused with the constructed point cloud map as follows: the point clouds corresponding to all depth images are transformed so that every subsequent point cloud lies in the same coordinate system as the point cloud of the first frame; the best transformation is found between each pair of consecutive overlapping point clouds, and these transformations are accumulated over all point clouds, so that the current point cloud is progressively fused into the constructed point cloud map.
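As a minimal illustration of this accumulation of pairwise transformations (an assumption about the implementation, not taken from the patent), the sketch below chains per-frame transforms back to the first frame's coordinate system and merges the transformed clouds:

```python
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4)
    return (homo @ T.T)[:, :3]

def fuse_clouds(clouds, pairwise_T):
    """Fuse a sequence of point clouds into the first frame's coordinate system.

    clouds     : list of (N_i, 3) arrays, one per depth frame
    pairwise_T : list of 4x4 transforms; pairwise_T[i] maps frame i+1 into frame i
    """
    global_map = [clouds[0]]
    T_acc = np.eye(4)                                  # accumulated transform to frame 0
    for cloud, T in zip(clouds[1:], pairwise_T):
        T_acc = T_acc @ T                              # chain the pairwise transforms
        global_map.append(transform_cloud(cloud, T_acc))
    return np.vstack(global_map)                       # merged map in frame-0 coordinates
```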
In step 6, the pixel-level semantic labeling results of the keyframes are mapped onto the corresponding point cloud map as follows: using the robot pose transformation matrix TWC, the camera coordinates of each pixel are converted into world coordinates; then, according to the three-dimensional spatial coordinates corresponding to each pixel, the two-dimensional semantic segmentation result of the keyframe image is mapped onto the corresponding three-dimensional point cloud map, completing the semantic labeling of the three-dimensional point cloud map.
The loop closure detection comprises global loop closure detection and local loop closure detection. Under the global loop closure constraint, the node parameters (the robot poses) are optimized with an equation built from the similarity transformation matrix H, the robot pose Pt corresponding to the current image frame, the projection of each point u of the current depth map, and the initial robot pose at the initial time. Under the local loop closure constraint, the node parameter optimization equation expresses the constraint on the deformation between the map constructed in the most recent period and the map model constructed in the preceding period.
The specific method of step 7 is: the point cloud label probability distribution is initialized with the semantic segmentation results of the keyframes, and the label distribution probability of the point cloud is then updated by recursive Bayesian filtering:

P(ct | K) = (1/Z) · P(ct | Kt) · P(ct−1 | K0:t−1)

where ct is the class probability distribution of the point cloud at time t, K denotes the keyframe set {K0, K1, …, Kt}, Z is a normalization constant, and Kt is the keyframe at time t.

The final semantic label of each point of the point cloud is obtained by maximizing this probability distribution function:

L(P) = argmax P(c | K).
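To make the recursive update concrete, here is a small sketch (an illustrative assumption, not the patent's code) that fuses per-keyframe class probabilities into a running label distribution for one map point and reads out the final label:

```python
import numpy as np

def update_label_distribution(prior, keyframe_probs):
    """Recursive Bayesian update of a point's class distribution.

    prior          : (C,) current class probability distribution of the point
    keyframe_probs : (C,) class probabilities predicted for this point by the new keyframe
    """
    posterior = prior * keyframe_probs            # elementwise product of prior and new evidence
    return posterior / posterior.sum()            # renormalize (the 1/Z term)

# initialize from the first keyframe's segmentation, then fold in new keyframes
dist = np.array([0.2, 0.5, 0.3])                  # e.g. classes: wall, chair, floor
for kf_probs in [np.array([0.1, 0.7, 0.2]), np.array([0.2, 0.6, 0.2])]:
    dist = update_label_distribution(dist, kf_probs)

final_label = int(np.argmax(dist))                # L(P) = argmax P(c | K)
```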
The beneficial effects of the present invention are:
1) The present invention makes combined use of the color features of the RGB image and the geometric features of the depth image, which improves the performance of the image semantic segmentation network, and the network parameters are pruned reasonably through model compression, so that accurate semantic segmentation results can be obtained quickly even in indoor environments with many object categories and severe occlusion.
2) The present invention directly uses the Kinect depth camera carried by the robot to map the indoor environment, can semantically label the point cloud in real time, and incrementally constructs a spatial semantic map of the indoor environment, so that the mobile robot can navigate intelligently within the global indoor semantic map, laying the foundation for human-robot interaction tasks such as autonomous driving and home service.
Brief Description of the Drawings
Fig. 1 is the overall block diagram of the system of the present invention.
Fig. 2 is a block diagram of the RGB-D-based ICNet semantic segmentation network of the present invention.
Fig. 3 is a schematic diagram of compression of the semantic segmentation network model.
Fig. 4 is a schematic diagram of the robot pose solving process.
Fig. 5 is a flow diagram of the Randomized Ferns real-time relocalization and loop closure detection algorithm.
Fig. 6(a) is a schematic diagram of robot pose optimization when a global loop closure exists.
Fig. 6(b) is a schematic diagram of robot pose optimization when no global loop closure exists.
Fig. 7 is a flow diagram of point cloud fusion.
Fig. 8 is a flowchart of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and specific implementation details:
Referring to Figs. 1 and 8, a method for constructing a semantic map of an indoor environment based on deep learning comprises the following steps:
Step 1: acquire an RGB-D image sequence of the target indoor scene with the Kinect depth camera carried by the mobile robot.
Step 2: perform feature extraction and processing on each acquired RGB-D image frame with the semantic segmentation network based on RGB-D images. The overall block diagram of the RGB-D-based semantic segmentation network is shown in Fig. 2, and its design comprises the following steps:
1) In the Linux operating system, the RGB-D-based ICNet semantic segmentation network is built with the Caffe deep learning framework; the RGB image and the depth image are concatenated through a Concat layer to form the four input channels of the semantic segmentation network.
2) The image semantic segmentation network is trained on the NYUD V2 indoor image benchmark dataset; each input RGB-D image pair is divided into 1/4-resolution, 1/2-resolution and full-resolution images, which serve as the inputs of the three branch networks respectively (a small illustrative sketch of the three branch inputs follows this list).
3) After two stages of feature map fusion, the convolutional feature maps obtained from the three input resolutions are fused through an Eltwise layer.
4) The fused feature map is upsampled several times to restore the image resolution to the original input size, finally yielding an accurate semantic segmentation result.
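For illustration only (the subsampling scheme is an assumption; the patent does not specify how the lower-resolution branches are produced), the three branch inputs described in item 2) can be sketched as:

```python
import numpy as np

def build_branch_inputs(rgbd):
    """Produce the 1/4-, 1/2- and full-resolution branch inputs of the cascade
    network from a (H, W, 4) RGB-D array by naive subsampling."""
    full = rgbd                    # full-resolution branch
    half = rgbd[::2, ::2]          # 1/2-resolution branch
    quarter = rgbd[::4, ::4]       # 1/4-resolution branch
    return quarter, half, full
```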
The feature extraction and processing of the depth image increases the number of parameters of the network model; to keep the semantic segmentation network fast, model compression is required. During network performance testing, network parameters are pruned according to the L1 norm of each convolution kernel, so that the semantic segmentation result of an input image can be obtained quickly. The specific procedure is shown in Fig. 3.
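The following sketch illustrates L1-norm-based filter pruning of a convolutional layer in the spirit described above; the pruning ratio and tensor layout are assumptions, and the snippet only shows how filters with the smallest L1 norms would be removed:

```python
import numpy as np

def prune_conv_filters(weights, keep_ratio=0.7):
    """Prune the filters of one conv layer by their L1 norms.

    weights    : (num_filters, in_channels, kh, kw) convolution kernel tensor
    keep_ratio : fraction of filters with the largest L1 norms to keep
    Returns the pruned weight tensor and the indices of the kept filters.
    """
    l1_norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)  # one L1 norm per filter
    num_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep_idx = np.argsort(l1_norms)[-num_keep:]        # filters with the largest norms
    keep_idx = np.sort(keep_idx)                       # preserve original filter order
    return weights[keep_idx], keep_idx

# usage: pruned_w, kept = prune_conv_filters(conv_layer_weights, keep_ratio=0.7)
# the next layer's input channels must then be sliced with `kept` as well
```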
Step 3: combine the geometric pose estimate from the depth image with the photometric pose estimate from the RGB image and obtain the robot pose Pt by minimizing the point-to-plane error and the photometric error,

Pt = [Rt, tt; 0, 1], where the rotation matrix Rt ∈ SO(3) and the translation vector tt ∈ R³.

Point-to-plane error:

Eicp = Σk ((vk − T·vk^t) · nk)²

where vk^t is the k-th vertex of the current depth image; vk and nk are the corresponding vertex and normal of the previous frame; T is the current pose transformation matrix.

Photometric error:

Ergb = Σu (It(u) − It−1(π(T·π⁻¹(u))))²

where It(u) is the gray value of the current RGB frame at pixel u, and It−1(π(T·π⁻¹(u))) is the gray value at the projection of pixel u into the previous RGB frame.

Joint loss function: Etrack = Eicp + 0.1·Ergb

The updated pose transformation matrix T′ is obtained by the Gauss-Newton nonlinear least squares method, so that the camera pose of the current frame after the update is Pt = T′Pt−1; the specific solving process is shown in Fig. 4. The keyframe sequence used for point cloud map construction is then determined from the robot pose relationship between adjacent image frames.
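Below is a minimal numerical Gauss-Newton sketch for the geometric (ICP) term only; the twist parameterization, the numerical Jacobian, and the matched-correspondence inputs are assumptions made for illustration, not the patent's implementation. The photometric term would be handled analogously and weighted by 0.1, as noted in the comment.

```python
import numpy as np

def twist_to_T(xi):
    """Map a small 6-vector (rx, ry, rz, tx, ty, tz) to a 4x4 transform
    using the small-angle approximation of the rotation."""
    rx, ry, rz, tx, ty, tz = xi
    T = np.eye(4)
    T[:3, :3] += np.array([[0, -rz, ry],
                           [rz, 0, -rx],
                           [-ry, rx, 0]])
    T[:3, 3] = [tx, ty, tz]
    return T

def icp_residuals(T, v_cur, v_prev, n_prev):
    """Point-to-plane residuals ((v_prev - T v_cur) . n_prev) for matched vertices."""
    v_cur_h = np.hstack([v_cur, np.ones((len(v_cur), 1))])
    v_t = (v_cur_h @ T.T)[:, :3]
    return np.einsum('ij,ij->i', v_prev - v_t, n_prev)

def gauss_newton_step(T, v_cur, v_prev, n_prev, eps=1e-6):
    """One Gauss-Newton update of T using a numerical Jacobian of the ICP residuals.
    (The photometric term Ergb would be appended to `r` and `J` in the same way,
    weighted by 0.1 as in Etrack = Eicp + 0.1*Ergb.)"""
    r = icp_residuals(T, v_cur, v_prev, n_prev)
    J = np.zeros((len(r), 6))
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        r_pert = icp_residuals(twist_to_T(d) @ T, v_cur, v_prev, n_prev)
        J[:, i] = (r_pert - r) / eps
    xi = np.linalg.solve(J.T @ J, -J.T @ r)      # normal equations
    return twist_to_T(xi) @ T                    # left-compose the increment
```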
Step 4: optimize the robot pose and the point cloud map with the Randomized Ferns real-time relocalization and loop closure detection algorithm; the overall procedure is shown in Fig. 5. Randomized Ferns encodes every input image frame and stores the codes in a special structure that speeds up image similarity comparison. The encoding works as follows: the code of each image frame consists of m block codes,

bF = (bF1, bF2, …, bFm),

each block code consists of n ferns,

bFi = (f1, f2, …, fn),

and each fern determines its code bit by comparing the pixel value of channel c at pixel x with a threshold θ:

f = 1 if Ic(x) ≥ θ, and f = 0 otherwise.

The block code is computed for each newly acquired image frame; the function Ferns::generateFerns() randomly initializes the positions, channels and thresholds θ of the ferns, and the function Ferns::addFrame() compares the similarity of the newly input image frame with previous image frames according to the block codes; the similarity is then used to determine whether a new keyframe and a loop closure exist.
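As an illustrative sketch (the data layout and similarity measure are assumptions, not the patent's Ferns::addFrame() implementation), the following shows how random ferns can encode a normalized RGB-D frame into binary block codes and how frame similarity can be scored by matching block codes:

```python
import numpy as np

def generate_ferns(num_blocks, ferns_per_block, height, width, channels=4, seed=0):
    """Randomly pick, for each fern, a pixel position, a channel and a threshold."""
    rng = np.random.default_rng(seed)
    return {
        "y": rng.integers(0, height, (num_blocks, ferns_per_block)),
        "x": rng.integers(0, width, (num_blocks, ferns_per_block)),
        "c": rng.integers(0, channels, (num_blocks, ferns_per_block)),
        "theta": rng.uniform(0.0, 1.0, (num_blocks, ferns_per_block)),
    }

def encode_frame(frame, ferns):
    """Encode a normalized (H, W, C) RGB-D frame into one integer code per block."""
    bits = frame[ferns["y"], ferns["x"], ferns["c"]] >= ferns["theta"]   # (blocks, ferns) booleans
    weights = 1 << np.arange(bits.shape[1])                              # pack each block's bits
    return (bits * weights).sum(axis=1)

def frame_similarity(code_a, code_b):
    """Fraction of block codes that agree between two frames (dissimilar frames score low)."""
    return np.mean(code_a == code_b)
```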
If a global loop closure exists, as shown in Fig. 6(a), the tracking algorithm described above is used to compute the pose between the current frame and the i-th frame; once the pose transformation matrix is obtained, the image is sampled uniformly and constraints are established to optimize the node parameters, i.e. the robot poses. The node parameter optimization equation under the global loop closure constraint is built from the similarity transformation matrix H, the robot pose Pt corresponding to the current image frame, the projection of the current depth map into the camera coordinate system, and the initial robot pose at the initial time.

If no global loop closure exists, as shown in Fig. 6(b), pose estimation is performed on the local loop closure and constraints are established to optimize the node parameters. The node parameter optimization equation under the local loop closure constraint expresses the constraint on the deformation between the map constructed in the most recent period and the map model constructed in the preceding period.
Step 5: the fusion and updating of the point cloud is performed with OpenGL; the specific procedure is shown in Fig. 7. First, the 3D coordinates of each input vertex are converted into 2D coordinates, the color value of each vertex is computed from the lighting equation, and texture coordinates are generated. Then the vertices processed in the first stage are organized into primitives together with the vertices stored by the geometry shader, and clipping and rasterization are performed. Finally, a fragment shader computes the final color and depth value of each independent fragment produced by rasterization, and the fragments are stitched into the global point cloud map.
Step 6: while the global three-dimensional point cloud map is being generated, semantic labeling is carried out by mapping the pixel-level semantic labeling results of the keyframes onto the corresponding point cloud map. Using the robot pose transformation matrix TWC, the camera coordinates of each pixel are converted into world coordinates; finally, according to the three-dimensional spatial coordinates corresponding to each pixel, the two-dimensional semantic segmentation result of the keyframe image is mapped onto the corresponding three-dimensional point cloud map, completing the semantic labeling of the three-dimensional point cloud map.
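The back-projection step can be sketched as follows (the camera intrinsics fx, fy, cx, cy and the label image layout are assumptions made for illustration): each labeled pixel is lifted to a 3D camera-frame point using its depth, transformed into world coordinates with TWC, and the pixel's class label is attached to the resulting map point.

```python
import numpy as np

def label_point_cloud(depth, labels, K, T_wc):
    """Back-project a labeled keyframe into a labeled world-frame point cloud.

    depth  : (H, W) depth in meters
    labels : (H, W) integer class label per pixel (2D semantic segmentation result)
    K      : (3, 3) camera intrinsic matrix
    T_wc   : (4, 4) camera-to-world pose transformation matrix (TWC)
    Returns (N, 3) world points and (N,) labels for pixels with valid depth.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]          # (u - cx) * z / fx
    y = (v[valid] - K[1, 2]) * z / K[1, 1]          # (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # camera-frame homogeneous points
    pts_world = (pts_cam @ T_wc.T)[:, :3]                    # apply TWC
    return pts_world, labels[valid]
```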
Step 7: because newly acquired image frames may assign different labels to the point cloud, the semantic information of the constructed point cloud map must be refined according to the semantic labels of the newly acquired keyframes. The point cloud label probability distribution is initialized with the semantic segmentation results of the keyframes, and the label distribution probability of the point cloud is updated by recursive Bayesian filtering:

P(ct | K) = (1/Z) · P(ct | Kt) · P(ct−1 | K0:t−1)

where ct is the class probability distribution of the point cloud at time t, K denotes the keyframe set {K0, K1, …, Kt}, Z is a normalization constant, and Kt is the keyframe at time t. The final semantic label of each point of the point cloud is obtained by maximizing this probability distribution function:

L(P) = argmax P(c | K)

where P(c | K) denotes the label probability distribution of the point cloud in the keyframes, and L(P) denotes the final semantic category of the point cloud.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910408713.4A CN110243370A (en) | 2019-05-16 | 2019-05-16 | A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910408713.4A CN110243370A (en) | 2019-05-16 | 2019-05-16 | A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110243370A true CN110243370A (en) | 2019-09-17 |
Family
ID=67884125
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910408713.4A Pending CN110243370A (en) | 2019-05-16 | 2019-05-16 | A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110243370A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105856230A (en) * | 2016-05-06 | 2016-08-17 | 简燕梅 | ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot |
CN107958482A (en) * | 2016-10-17 | 2018-04-24 | 杭州海康威视数字技术股份有限公司 | A kind of three-dimensional scene models construction method and device |
CN107063258A (en) * | 2017-03-07 | 2017-08-18 | 重庆邮电大学 | A kind of mobile robot indoor navigation method based on semantic information |
CN107680133A (en) * | 2017-09-15 | 2018-02-09 | 重庆邮电大学 | A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm |
CN108415032A (en) * | 2018-03-05 | 2018-08-17 | 中山大学 | A kind of point cloud semanteme map constructing method based on deep learning and laser radar |
CN108615244A (en) * | 2018-03-27 | 2018-10-02 | 中国地质大学(武汉) | A kind of image depth estimation method and system based on CNN and depth filter |
CN109636905A (en) * | 2018-12-07 | 2019-04-16 | 东北大学 | Environment semanteme based on depth convolutional neural networks builds drawing method |
Non-Patent Citations (2)
Title |
---|
单吉超等: "室内场景下实时地三维语义地图构建", 《仪器仪表学报》 * |
辛菁等: "基于Kinect的移动机器人大视角3维V-SLAM", 《机器人》 * |
Cited By (137)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110781262A (en) * | 2019-10-21 | 2020-02-11 | 中国科学院计算技术研究所 | Semantic map construction method based on visual SLAM |
CN110781262B (en) * | 2019-10-21 | 2023-06-02 | 中国科学院计算技术研究所 | Construction method of semantic map based on visual SLAM |
CN110751220B (en) * | 2019-10-24 | 2022-02-11 | 江西应用技术职业学院 | A Machine Vision Indoor Localization Method Based on Improved Convolutional Neural Network Structure |
CN110751220A (en) * | 2019-10-24 | 2020-02-04 | 江西应用技术职业学院 | A Machine Vision Indoor Localization Method Based on Improved Convolutional Neural Network Structure |
CN110807782A (en) * | 2019-10-25 | 2020-02-18 | 中山大学 | A map representation system for visual robot and its construction method |
CN110826448A (en) * | 2019-10-29 | 2020-02-21 | 中山大学 | Indoor positioning method with automatic updating function |
CN110826448B (en) * | 2019-10-29 | 2023-04-07 | 中山大学 | Indoor positioning method with automatic updating function |
CN110986945A (en) * | 2019-11-14 | 2020-04-10 | 上海交通大学 | Local Navigation Method and System Based on Semantic Heightmap |
CN110986945B (en) * | 2019-11-14 | 2023-06-27 | 上海交通大学 | Local navigation method and system based on semantic height map |
CN110900560A (en) * | 2019-11-27 | 2020-03-24 | 佛山科学技术学院 | A multi-legged wheeled mobile robot system with scene understanding ability |
CN110969568A (en) * | 2019-11-29 | 2020-04-07 | 广联达科技股份有限公司 | BIM model double-sided display accelerated rendering method, system, product and storage medium |
CN111080659A (en) * | 2019-12-19 | 2020-04-28 | 哈尔滨工业大学 | Environmental semantic perception method based on visual information |
CN111125283A (en) * | 2019-12-23 | 2020-05-08 | 苏州智加科技有限公司 | Electronic map construction method and device, computer equipment and storage medium |
CN110738200A (en) * | 2019-12-23 | 2020-01-31 | 广州赛特智能科技有限公司 | Lane line 3D point cloud map construction method, electronic device and storage medium |
CN111125283B (en) * | 2019-12-23 | 2022-11-15 | 苏州智加科技有限公司 | Electronic map construction method and device, computer equipment and storage medium |
CN111190981A (en) * | 2019-12-25 | 2020-05-22 | 中国科学院上海微系统与信息技术研究所 | Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium |
CN113052761B (en) * | 2019-12-26 | 2024-01-30 | 炬星科技(深圳)有限公司 | Laser point cloud map fusion method, device and computer readable storage medium |
CN111179344A (en) * | 2019-12-26 | 2020-05-19 | 广东工业大学 | Efficient mobile robot SLAM system for repairing semantic information |
CN111179344B (en) * | 2019-12-26 | 2023-05-23 | 广东工业大学 | Efficient mobile robot SLAM system for repairing semantic information |
WO2021129349A1 (en) * | 2019-12-26 | 2021-07-01 | 炬星科技(深圳)有限公司 | Laser point cloud map merging method, apparatus, and computer readable storage medium |
CN113052761A (en) * | 2019-12-26 | 2021-06-29 | 炬星科技(深圳)有限公司 | Laser point cloud map fusion method, device and computer readable storage medium |
CN111105695A (en) * | 2019-12-31 | 2020-05-05 | 智车优行科技(上海)有限公司 | Map making method and device, electronic equipment and computer readable storage medium |
CN111242994B (en) * | 2019-12-31 | 2024-01-09 | 深圳优地科技有限公司 | Semantic map construction method, semantic map construction device, robot and storage medium |
CN111242994A (en) * | 2019-12-31 | 2020-06-05 | 深圳优地科技有限公司 | Semantic map construction method and device, robot and storage medium |
CN111161334A (en) * | 2019-12-31 | 2020-05-15 | 南通大学 | A deep learning-based semantic map construction method |
CN111223136A (en) * | 2020-01-03 | 2020-06-02 | 三星(中国)半导体有限公司 | Depth feature extraction method and device for sparse 2D point set |
CN111223136B (en) * | 2020-01-03 | 2024-04-23 | 三星(中国)半导体有限公司 | Depth feature extraction method and device for sparse 2D point set |
CN111210518B (en) * | 2020-01-15 | 2022-04-05 | 西安交通大学 | Topological map generation method based on visual fusion landmarks |
CN111210518A (en) * | 2020-01-15 | 2020-05-29 | 西安交通大学 | Topological map generation method based on visual fusion landmarks |
CN111248815A (en) * | 2020-01-16 | 2020-06-09 | 珠海格力电器股份有限公司 | Method, device and equipment for generating working map and storage medium |
CN111223101A (en) * | 2020-01-17 | 2020-06-02 | 湖南视比特机器人有限公司 | Point cloud processing method, point cloud processing system, and storage medium |
CN111223101B (en) * | 2020-01-17 | 2023-08-11 | 湖南视比特机器人有限公司 | Point cloud processing method, point cloud processing system and storage medium |
CN111239761A (en) * | 2020-01-20 | 2020-06-05 | 西安交通大学 | Method for indoor real-time establishment of two-dimensional map |
CN111551167B (en) * | 2020-02-10 | 2022-09-27 | 江苏盖亚环境科技股份有限公司 | Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation |
CN111551167A (en) * | 2020-02-10 | 2020-08-18 | 江苏盖亚环境科技股份有限公司 | Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation |
CN111340939B (en) * | 2020-02-21 | 2023-04-18 | 广东工业大学 | Indoor three-dimensional semantic map construction method |
CN111340939A (en) * | 2020-02-21 | 2020-06-26 | 广东工业大学 | An Indoor 3D Semantic Map Construction Method |
CN111325842B (en) * | 2020-03-04 | 2023-07-28 | Oppo广东移动通信有限公司 | Map construction method, repositioning method and device, storage medium and electronic equipment |
CN111325842A (en) * | 2020-03-04 | 2020-06-23 | Oppo广东移动通信有限公司 | Map construction method, repositioning method and device, storage medium and electronic equipment |
CN111325843A (en) * | 2020-03-09 | 2020-06-23 | 北京航空航天大学 | A real-time semantic map construction method based on semantic inverse depth filtering |
CN111367282A (en) * | 2020-03-09 | 2020-07-03 | 山东大学 | A robot navigation method and system based on multimodal perception and reinforcement learning |
CN111429517A (en) * | 2020-03-23 | 2020-07-17 | Oppo广东移动通信有限公司 | Relocation method, relocation device, storage medium and electronic device |
CN111178342A (en) * | 2020-04-10 | 2020-05-19 | 浙江欣奕华智能科技有限公司 | Pose graph optimization method, device, equipment and medium |
CN111553945B (en) * | 2020-04-13 | 2023-08-11 | 东风柳州汽车有限公司 | Vehicle positioning method |
CN111553945A (en) * | 2020-04-13 | 2020-08-18 | 东风柳州汽车有限公司 | Vehicle positioning method |
CN113534786A (en) * | 2020-04-20 | 2021-10-22 | 深圳市奇虎智能科技有限公司 | SLAM method-based environment reconstruction method and system and mobile robot |
CN111612886B (en) * | 2020-04-21 | 2022-07-19 | 厦门大学 | Indoor 3D model generation method and system |
CN111612886A (en) * | 2020-04-21 | 2020-09-01 | 厦门大学 | Indoor 3D model generation method and system |
CN111563442B (en) * | 2020-04-29 | 2023-05-02 | 上海交通大学 | Slam method and system for fusing point cloud and camera image data based on laser radar |
CN111563442A (en) * | 2020-04-29 | 2020-08-21 | 上海交通大学 | Slam method and system for fusing point cloud and camera image data based on laser radar |
CN113592875B (en) * | 2020-04-30 | 2024-01-23 | 阿里巴巴集团控股有限公司 | Data processing method, image processing method, storage medium, and computing device |
CN113592875A (en) * | 2020-04-30 | 2021-11-02 | 阿里巴巴集团控股有限公司 | Data processing method, image processing method, storage medium and computing device |
CN111667545B (en) * | 2020-05-07 | 2024-02-27 | 东软睿驰汽车技术(沈阳)有限公司 | High-precision map generation method and device, electronic equipment and storage medium |
CN111667545A (en) * | 2020-05-07 | 2020-09-15 | 东软睿驰汽车技术(沈阳)有限公司 | High-precision map generation method and device, electronic equipment and storage medium |
CN111476894A (en) * | 2020-05-14 | 2020-07-31 | 小狗电器互联网科技(北京)股份有限公司 | Three-dimensional semantic map construction method and device, storage medium and electronic equipment |
CN111462324A (en) * | 2020-05-18 | 2020-07-28 | 南京大学 | Online spatiotemporal semantic fusion method and system |
CN111462324B (en) * | 2020-05-18 | 2022-05-17 | 南京大学 | An online spatiotemporal semantic fusion method and system |
CN111783838A (en) * | 2020-06-05 | 2020-10-16 | 东南大学 | A point cloud feature space representation method for laser SLAM |
CN111652174A (en) * | 2020-06-10 | 2020-09-11 | 北京云迹科技有限公司 | Semantic calibration method and device based on laser data |
CN111652174B (en) * | 2020-06-10 | 2024-01-23 | 北京云迹科技股份有限公司 | Semantical calibration method and device based on laser data |
CN111882590A (en) * | 2020-06-24 | 2020-11-03 | 广州万维创新科技有限公司 | AR scene application method based on single picture positioning |
CN113759369B (en) * | 2020-06-28 | 2023-12-05 | 北京京东乾石科技有限公司 | Graph construction method and device based on double multi-line radar |
CN113759369A (en) * | 2020-06-28 | 2021-12-07 | 北京京东乾石科技有限公司 | Image establishing method and device based on double multiline radar |
CN111664860A (en) * | 2020-07-01 | 2020-09-15 | 北京三快在线科技有限公司 | Positioning method and device, intelligent equipment and storage medium |
CN111664860B (en) * | 2020-07-01 | 2022-03-11 | 北京三快在线科技有限公司 | Positioning method and device, intelligent equipment and storage medium |
CN113888691A (en) * | 2020-07-03 | 2022-01-04 | 上海大界机器人科技有限公司 | Method, device and storage medium for building scene semantic map construction |
CN111882663A (en) * | 2020-07-03 | 2020-11-03 | 广州万维创新科技有限公司 | Visual SLAM closed-loop detection method achieved by fusing semantic information |
CN111899277A (en) * | 2020-07-09 | 2020-11-06 | 浙江大华技术股份有限公司 | Moving object detection method and device, storage medium and electronic device |
CN111899277B (en) * | 2020-07-09 | 2024-07-12 | 浙江大华技术股份有限公司 | Moving object detection method and device, storage medium and electronic device |
CN111958592B (en) * | 2020-07-30 | 2021-08-20 | 国网智能科技股份有限公司 | Image semantic analysis system and method for transformer substation inspection robot |
CN111958592A (en) * | 2020-07-30 | 2020-11-20 | 国网智能科技股份有限公司 | Image semantic analysis system and method for transformer substation inspection robot |
CN111737278B (en) * | 2020-08-05 | 2020-12-04 | 鹏城实验室 | Method, system, equipment and storage medium for simultaneous positioning and mapping |
CN111737278A (en) * | 2020-08-05 | 2020-10-02 | 鹏城实验室 | Simultaneous positioning and mapping method, system, device and storage medium |
CN111951397A (en) * | 2020-08-07 | 2020-11-17 | 清华大学 | A method, device and storage medium for multi-machine cooperative construction of three-dimensional point cloud map |
CN111966110A (en) * | 2020-09-08 | 2020-11-20 | 天津海运职业学院 | Automatic navigation method and system for port unmanned transport vehicle |
CN112017188A (en) * | 2020-09-09 | 2020-12-01 | 上海航天控制技术研究所 | Space non-cooperative target semantic identification and reconstruction method |
CN112017188B (en) * | 2020-09-09 | 2024-04-09 | 上海航天控制技术研究所 | Space non-cooperative target semantic recognition and reconstruction method |
CN112347550B (en) * | 2020-12-07 | 2022-07-15 | 厦门大学 | Coupled indoor 3D semantic mapping and modeling method |
CN112347550A (en) * | 2020-12-07 | 2021-02-09 | 厦门大学 | Coupled indoor 3D semantic mapping and modeling method |
CN114625815B (en) * | 2020-12-11 | 2024-12-13 | 广东博智林机器人有限公司 | A method and system for generating semantic maps of construction robots |
CN114625815A (en) * | 2020-12-11 | 2022-06-14 | 广东博智林机器人有限公司 | A method and system for generating semantic map of construction robot |
CN116134488A (en) * | 2020-12-23 | 2023-05-16 | 深圳元戎启行科技有限公司 | Point cloud labeling method, point cloud labeling device, computer equipment and storage medium |
WO2022134057A1 (en) * | 2020-12-25 | 2022-06-30 | Intel Corporation | Re-localization of robot |
CN112904437A (en) * | 2021-01-14 | 2021-06-04 | 支付宝(杭州)信息技术有限公司 | Detection method and detection device of hidden component based on privacy protection |
CN112767485A (en) * | 2021-01-26 | 2021-05-07 | 哈尔滨工业大学(深圳) | Point cloud map creating and scene identifying method based on static semantic information |
CN112767485B (en) * | 2021-01-26 | 2023-07-07 | 哈尔滨工业大学(深圳) | Point cloud map creation and scene identification method based on static semantic information |
CN112802204A (en) * | 2021-01-26 | 2021-05-14 | 山东大学 | Target semantic navigation method and system for three-dimensional space scene prior in unknown environment |
CN112836734A (en) * | 2021-01-27 | 2021-05-25 | 深圳市华汉伟业科技有限公司 | Heterogeneous data fusion method and device and storage medium |
CN114913224A (en) * | 2021-02-07 | 2022-08-16 | 浙江舜宇智能光学技术有限公司 | Map building method for mobile robot based on visual SLAM |
CN112819893A (en) * | 2021-02-08 | 2021-05-18 | 北京航空航天大学 | Method and device for constructing three-dimensional semantic map |
CN113009501A (en) * | 2021-02-25 | 2021-06-22 | 重庆交通大学 | Image and laser data fused robot navigation three-dimensional semantic map generation method |
CN113052369B (en) * | 2021-03-15 | 2024-05-10 | 北京农业智能装备技术研究中心 | Intelligent agricultural machinery operation management method and system |
CN113052369A (en) * | 2021-03-15 | 2021-06-29 | 北京农业智能装备技术研究中心 | Intelligent agricultural machinery operation management method and system |
CN113077512B (en) * | 2021-03-24 | 2022-06-28 | 浙江中体文化集团有限公司 | RGB-D pose recognition model training method and system |
CN113077512A (en) * | 2021-03-24 | 2021-07-06 | 浙江中体文化集团有限公司 | RGB-D pose recognition model training method and system |
CN112862894B (en) * | 2021-04-12 | 2022-09-06 | 中国科学技术大学 | Robot three-dimensional point cloud map construction and expansion method |
CN112862894A (en) * | 2021-04-12 | 2021-05-28 | 中国科学技术大学 | Robot three-dimensional point cloud map construction and expansion method |
CN113238554A (en) * | 2021-05-08 | 2021-08-10 | 武汉科技大学 | Indoor navigation method and system based on SLAM technology integrating laser and vision |
CN113160420A (en) * | 2021-05-17 | 2021-07-23 | 上海商汤临港智能科技有限公司 | Three-dimensional point cloud reconstruction method and device, electronic equipment and storage medium |
CN113405547A (en) * | 2021-05-21 | 2021-09-17 | 杭州电子科技大学 | Unmanned aerial vehicle navigation method based on semantic VSLAM |
CN113405547B (en) * | 2021-05-21 | 2022-03-22 | 杭州电子科技大学 | Unmanned aerial vehicle navigation method based on semantic VSLAM |
CN113256711A (en) * | 2021-05-27 | 2021-08-13 | 南京航空航天大学 | Pose estimation method and system of monocular camera |
CN113256711B (en) * | 2021-05-27 | 2024-03-12 | 南京航空航天大学 | Pose estimation method and system of monocular camera |
CN113576780A (en) * | 2021-08-04 | 2021-11-02 | 北京化工大学 | Intelligent wheelchair based on semantic vision SLAM |
CN113674416B (en) * | 2021-08-26 | 2024-04-26 | 中国电子科技集团公司信息科学研究院 | Three-dimensional map construction method and device, electronic equipment and storage medium |
CN113674416A (en) * | 2021-08-26 | 2021-11-19 | 中国电子科技集团公司信息科学研究院 | Three-dimensional map construction method and device, electronic equipment and storage medium |
CN113763551A (en) * | 2021-09-08 | 2021-12-07 | 北京易航远智科技有限公司 | Point cloud-based rapid repositioning method for large-scale mapping scene |
CN113763551B (en) * | 2021-09-08 | 2023-10-27 | 北京易航远智科技有限公司 | Rapid repositioning method for large-scale map building scene based on point cloud |
CN113916245A (en) * | 2021-10-09 | 2022-01-11 | 上海大学 | Semantic map construction method based on instance segmentation and VSLAM |
CN113935428A (en) * | 2021-10-25 | 2022-01-14 | 山东大学 | Three-dimensional point cloud clustering identification method and system based on image identification |
CN114092530A (en) * | 2021-10-27 | 2022-02-25 | 江西省通讯终端产业技术研究院有限公司 | Ladle visual alignment method, device and equipment based on deep learning semantic segmentation and point cloud registration |
CN114092655B (en) * | 2021-11-08 | 2025-02-25 | 北京三快在线科技有限公司 | Method, device, equipment and storage medium for constructing a map |
CN114092655A (en) * | 2021-11-08 | 2022-02-25 | 北京三快在线科技有限公司 | Map construction method, device, equipment and storage medium |
CN114359493A (en) * | 2021-12-20 | 2022-04-15 | 中国船舶重工集团公司第七0九研究所 | Method and system for generating three-dimensional semantic map for unmanned ship |
CN114782530A (en) * | 2022-03-28 | 2022-07-22 | 杭州国辰机器人科技有限公司 | Three-dimensional semantic map construction method, device, equipment and medium under indoor scene |
CN114863096B (en) * | 2022-04-02 | 2024-04-16 | 合众新能源汽车股份有限公司 | Semantic map construction and positioning method and device for indoor parking lot |
WO2023184869A1 (en) * | 2022-04-02 | 2023-10-05 | 合众新能源汽车股份有限公司 | Semantic map construction and localization method and apparatus for indoor parking lot |
CN114863096A (en) * | 2022-04-02 | 2022-08-05 | 合众新能源汽车有限公司 | Semantic map construction and positioning method and device for indoor parking lot |
CN115035260B (en) * | 2022-05-27 | 2024-11-05 | 哈尔滨工程大学 | A method for constructing three-dimensional semantic maps for indoor mobile robots |
CN115035260A (en) * | 2022-05-27 | 2022-09-09 | 哈尔滨工程大学 | Indoor mobile robot three-dimensional semantic map construction method |
CN114952847B (en) * | 2022-05-31 | 2025-01-03 | 中国电信股份有限公司 | A method and device for constructing a cognitive map |
CN114952847A (en) * | 2022-05-31 | 2022-08-30 | 中国电信股份有限公司 | Cognitive map construction method and device |
CN114742893A (en) * | 2022-06-09 | 2022-07-12 | 浙江大学湖州研究院 | 3D laser data training and rapid positioning method based on deep learning |
CN114742893B (en) * | 2022-06-09 | 2022-10-21 | 浙江大学湖州研究院 | 3D laser data training and rapid positioning method based on deep learning |
CN115408544A (en) * | 2022-08-19 | 2022-11-29 | 梅卡曼德(北京)机器人科技有限公司 | Image database construction method, device, equipment, storage medium and product |
CN115719363A (en) * | 2022-10-31 | 2023-02-28 | 重庆理工大学 | Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction |
CN115719363B (en) * | 2022-10-31 | 2024-02-02 | 重庆理工大学 | Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction |
CN115880690B (en) * | 2022-11-23 | 2023-08-11 | 郑州大学 | Method for quickly labeling objects in point cloud under assistance of three-dimensional reconstruction |
CN115880690A (en) * | 2022-11-23 | 2023-03-31 | 郑州大学 | Method for quickly marking object in point cloud under assistance of three-dimensional reconstruction |
CN115638788A (en) * | 2022-12-23 | 2023-01-24 | 安徽蔚来智驾科技有限公司 | Semantic vector map construction method, computer equipment and storage medium |
CN115638788B (en) * | 2022-12-23 | 2023-03-21 | 安徽蔚来智驾科技有限公司 | Semantic vector map construction method, computer equipment and storage medium |
CN115655262A (en) * | 2022-12-26 | 2023-01-31 | 广东省科学院智能制造研究所 | Deep learning perception-based multi-level semantic map construction method and device |
CN116664681B (en) * | 2023-07-26 | 2023-10-10 | 长春工程学院 | Semantic perception-based intelligent collaborative augmented reality system and method for electric power operation |
CN116664681A (en) * | 2023-07-26 | 2023-08-29 | 长春工程学院 | Semantic perception-based intelligent collaborative augmented reality system and method for electric power operation |
CN117315092B (en) * | 2023-10-08 | 2024-05-14 | 玩出梦想(上海)科技有限公司 | Automatic labeling method and data processing equipment |
CN117315092A (en) * | 2023-10-08 | 2023-12-29 | 玩出梦想(上海)科技有限公司 | Automatic labeling method and data processing equipment |
CN118031976A (en) * | 2024-04-15 | 2024-05-14 | 中国科学院国家空间科学中心 | Man-machine cooperative system for exploring unknown environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110243370A (en) | A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning | |
US20210142095A1 (en) | Image disparity estimation | |
Yin et al. | Automated semantic segmentation of industrial point clouds using ResPointNet++ | |
Pu et al. | Visual SLAM integration with semantic segmentation and deep learning: A review | |
Wang et al. | 3d lidar and stereo fusion using stereo matching network with conditional cost volume normalization | |
CN110738673A (en) | Visual SLAM method based on example segmentation | |
CN110458939A (en) | Indoor scene modeling method based on perspective generation | |
Matzen et al. | Nyc3dcars: A dataset of 3d vehicles in geographic context | |
Chen et al. | 3d point cloud processing and learning for autonomous driving | |
CN108416840A (en) | A Dense Reconstruction Method of 3D Scene Based on Monocular Camera | |
CN115900710A (en) | Navigation method for dynamic environments based on visual information | |
Yang et al. | Progress and perspectives of point cloud intelligence | |
CN111476089A (en) | Pedestrian detection method, system and terminal based on multi-mode information fusion in image | |
US20220351463A1 (en) | Method, computer device and storage medium for real-time urban scene reconstruction | |
CN112562001A (en) | Object 6D pose estimation method, device, equipment and medium | |
Huang et al. | Overview of LiDAR point cloud target detection methods based on deep learning | |
Li et al. | DNS-SLAM: Dense Neural Semantic-Informed SLAM | |
Zhang et al. | Exploring semantic information extraction from different data forms in 3D point cloud semantic segmentation | |
Zhuang et al. | A survey of point cloud completion | |
CN115965970A (en) | Method and system for realizing bird's-eye view semantic segmentation based on implicit set prediction | |
Lai et al. | 3D semantic map construction system based on visual SLAM and CNNs | |
Rong et al. | Active learning based 3D semantic labeling from images and videos | |
CN113487741B (en) | Dense three-dimensional map updating method and device | |
CN118822906A (en) | Indoor dynamic environment map construction method and system based on image restoration and completion | |
Li et al. | Monocular 3-D Object Detection Based on Depth-Guided Local Convolution for Smart Payment in D2D Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190917 |