CN110243370A - Deep-learning-based three-dimensional semantic map construction method for indoor environments - Google Patents

Deep-learning-based three-dimensional semantic map construction method for indoor environments

Info

Publication number
CN110243370A
CN110243370A (Application CN201910408713.4A)
Authority
CN
China
Prior art keywords
image
semantic
point cloud
map
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910408713.4A
Other languages
Chinese (zh)
Inventor
辛菁
杜柯楠
刘丁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Technology
Original Assignee
Xi'an University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Technology
Priority to CN201910408713.4A priority Critical patent/CN110243370A/en
Publication of CN110243370A publication Critical patent/CN110243370A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

A deep-learning-based method for constructing a three-dimensional semantic map of an indoor environment, with the following steps: 1) acquire an RGB-D image sequence of the indoor target scene with the Kinect depth camera carried by the mobile robot; 2) perform feature extraction and processing on each newly acquired RGB-D image with a trained semantic segmentation network based on RGB-D images; 3) estimate the corresponding robot pose information Pt from each input image frame; 4) optimize the robot pose with the Randomized Ferns real-time relocalization and closed-loop detection algorithm; 5) build a point cloud map from the key frames and fuse the point cloud of each newly acquired image frame with the map built so far; 6) map the pixel-level semantic annotation result of each key frame onto the corresponding point cloud map; 7) use the semantic labels of the key frames to optimize the semantic annotation of the constructed three-dimensional point cloud map, yielding the three-dimensional semantic map of the indoor environment. The method builds the indoor semantic map in real time and improves the intelligence of the mobile robot's environment perception.

Description

Indoor environment three-dimensional semantic map construction method based on deep learning
Technical Field
The invention belongs to the technical field of indoor navigation of mobile robots, and particularly relates to a deep learning-based indoor environment three-dimensional semantic map construction method.
Background
Constructing a map of the target scene is a core research topic in autonomous navigation of mobile robots. Semantically annotating the point cloud of the constructed map yields a high-precision point cloud map with semantic information, which has important application value for intelligent navigation of mobile robots in unknown environments. Through the semantic map, the mobile robot can communicate naturally with a user and thereby complete human-computer interaction tasks such as automated driving and home service.
When the environment of the mobile robot is completely unknown, the robot has no prior information about either the environment or its own position. It must therefore gather information about the environment with its on-board sensors while moving, build the environment map, and localize itself within that map; this is the problem of Simultaneous Localization and Mapping (SLAM). Existing scene-mapping methods extract and match feature points over the target-scene image sequence to obtain a sparse point cloud or landmark map of the target scene, but human-computer interaction tasks such as automated driving and home service are difficult to accomplish with a sparse point cloud map alone.
A scene map with high-level semantic information allows the robot to recognize and model the objects in space and to understand an unknown scene more fully, laying the foundation for higher-level human-computer interaction and more complex tasks. Traditional point cloud labeling relies on environmental geometric information or user-provided marks, is not accurate enough, and must be performed offline. With the rapid development of deep learning in image perception, and in particular the success of convolutional neural networks (CNNs) in image classification, many researchers have applied deep learning to image semantic segmentation, which can provide accurate pixel-level semantic labels for the point cloud map and makes real-time construction of a high-precision indoor point cloud map possible. Research on deep-learning-based indoor scene semantic mapping therefore has important theoretical significance and broad application prospects.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a deep-learning-based method for constructing a three-dimensional semantic map of an indoor environment. The invention applies deep learning to the three-dimensional semantic map construction of indoor environments and can perform real-time, accurate pixel-level semantic annotation of the three-dimensional point cloud, thereby constructing the three-dimensional semantic map of the indoor environment in real time. The method is real-time and accurate and requires no offline processing.
In order to achieve the purpose, the invention adopts the technical scheme that:
a deep learning-based indoor environment semantic map construction method comprises the following specific steps:
step 1, acquiring an RGB-D image sequence of an indoor environment target scene by using a Kinect depth camera carried by a mobile robot;
step 2, performing feature extraction and processing on each frame of the acquired RGB-D image by adopting a semantic segmentation network based on the RGB-D image;
step 3, estimating the corresponding robot pose information Pt according to each input frame image;
Step 4, optimizing the pose of the robot according to a Randomized ferns real-time repositioning and closed-loop detection algorithm;
step 5, constructing a point cloud map by utilizing the key frame, and fusing the point cloud corresponding to the newly acquired image frame with the constructed point cloud map;
step 6, mapping the pixel level semantic labeling result of the key frame to a corresponding point cloud map to obtain a semantic label of the key frame;
step 7, optimizing the semantic annotation information of the constructed three-dimensional point cloud map with the semantic labels of the newly acquired key frames.
In step 2, feature extraction and processing are performed on each acquired RGB-D frame with the RGB-D-based semantic segmentation network as follows: the image cascade network ICNet is used, with the depth information of the image added as a fourth input channel of the network, to perform pixel-level semantic prediction on each input frame.
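By way of illustration only (not part of the original disclosure), a minimal Python/NumPy sketch of stacking the depth map onto the RGB image as a fourth input channel, as described above; the normalization and all names are our assumptions:

```python
import numpy as np

def make_rgbd_input(rgb, depth):
    """Stack an RGB image (H, W, 3) and a depth map (H, W) into a
    4-channel array, mirroring the Concat-layer input described above."""
    # Normalize both modalities to [0, 1] so the extra channel has a
    # numeric range comparable to the color channels.
    rgb = rgb.astype(np.float32) / 255.0
    depth = depth.astype(np.float32)
    depth = depth / (depth.max() + 1e-6)
    return np.concatenate([rgb, depth[..., None]], axis=-1)  # (H, W, 4)

# Example with synthetic data: one 480x640 RGB-D frame.
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.rand(480, 640).astype(np.float32)
x = make_rgbd_input(rgb, depth)
print(x.shape)  # (480, 640, 4)
```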
In step 3, the robot pose information Pt corresponding to each input image frame is estimated as follows: the geometric pose estimation of the depth image and the photometric pose estimation of the RGB image are used together, and the robot pose Pt is obtained by minimizing the point-to-plane error and the photometric error.
Point-to-plane error: Eicp = Σk ((vk − T·vk^c) · nk)²,
where vk^c is the kth vertex of the current-frame depth image, vk and nk are the corresponding vertex and normal of the previous frame image, and T is the current pose transformation matrix;
Photometric error: Ergb = Σu (It(u) − It−1(u′))²,
where It(u) is the gray value of the current-frame RGB image at point u and It−1(u′) is the gray value of point u of the current frame projected onto the previous-frame RGB image;
Joint loss function: Etrack = Eicp + 0.1·Ergb.
An updated robot pose transformation matrix T′ is solved by Gauss-Newton nonlinear least squares; the updated pose of the current frame is then Pt = T′·Pt−1, and the key frame sequence used to build the point cloud map is determined from the robot pose relationship between adjacent image frames.
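As an illustrative sketch only (not the patent's implementation), evaluating the joint tracking error Etrack = Eicp + 0.1·Ergb for a candidate pose T in NumPy, assuming the vertex/normal correspondences and the sampled gray values have already been gathered; all function and variable names are ours:

```python
import numpy as np

def point_plane_error(T, src_vertices, dst_vertices, dst_normals):
    """Eicp: sum of squared point-to-plane distances after applying
    the candidate 4x4 pose T to the current-frame vertices."""
    src_h = np.hstack([src_vertices, np.ones((len(src_vertices), 1))])
    warped = (T @ src_h.T).T[:, :3]
    residuals = np.einsum('ij,ij->i', dst_vertices - warped, dst_normals)
    return np.sum(residuals ** 2)

def photometric_error(gray_cur, gray_prev_at_proj):
    """Ergb: sum of squared intensity differences between current-frame
    pixels and their projections in the previous frame."""
    return np.sum((gray_cur - gray_prev_at_proj) ** 2)

def joint_tracking_error(T, src_v, dst_v, dst_n, i_cur, i_prev):
    """Etrack = Eicp + 0.1 * Ergb for one candidate pose T."""
    return point_plane_error(T, src_v, dst_v, dst_n) + 0.1 * photometric_error(i_cur, i_prev)

# Example with synthetic correspondences.
T = np.eye(4)
src = np.random.rand(100, 3); dst = src + 0.01
n = np.tile([0.0, 0.0, 1.0], (100, 1))
ic = np.random.rand(100); ip = ic + 0.05
print(joint_tracking_error(T, src, dst, n, ic, ip))
```

In a full tracker this error would be linearized and minimized iteratively (e.g. by Gauss-Newton) rather than just evaluated.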
In step 4, the robot pose is optimized with the Randomized Ferns real-time relocalization and closed-loop detection algorithm as follows: each input frame is encoded, the inter-frame similarity is computed from the code values, the similarity determines whether a new key frame is added, and a similarity transformation matrix is solved for each new key frame to perform closed-loop detection.
In step 5, the point cloud map is built from the key frames and the point cloud of each newly acquired image frame is fused with the constructed map as follows: the point clouds of all depth images are transformed so that subsequent point clouds share the coordinate system of the first frame's point cloud; an optimal transformation is found between every two consecutive overlapping point clouds, and these transformations are accumulated over all point clouds so that the current point cloud is gradually fused into the constructed point cloud map.
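An illustrative sketch of the transform-accumulation idea described above, assuming the pairwise transforms between consecutive frames are already available (e.g. from the tracking step); the function and variable names are ours, not the patent's:

```python
import numpy as np

def fuse_clouds(clouds, pairwise_T):
    """clouds: list of (N_i, 3) point arrays, one per depth frame.
    pairwise_T: list of 4x4 transforms; pairwise_T[i] maps frame i+1
    into frame i. Returns one merged cloud in frame-0 coordinates."""
    global_map = [clouds[0]]
    T_acc = np.eye(4)                      # frame 0 -> frame 0
    for cloud, T in zip(clouds[1:], pairwise_T):
        T_acc = T_acc @ T                  # accumulate frame i -> frame 0
        homog = np.hstack([cloud, np.ones((len(cloud), 1))])
        global_map.append((T_acc @ homog.T).T[:, :3])
    return np.vstack(global_map)
```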
In step 6, the pixel-level semantic labeling result of each key frame is mapped onto the corresponding point cloud map as follows: according to the robot pose transformation matrix TWC, the camera coordinates of each pixel are converted into world coordinates, and the two-dimensional semantic segmentation result of the key-frame image is then mapped onto the corresponding three-dimensional point cloud map according to the three-dimensional spatial coordinates of each pixel, completing the semantic annotation task of the three-dimensional point cloud map.
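A minimal sketch of this back-projection step, assuming a pinhole camera model; the intrinsics fx, fy, cx, cy and all names are illustrative assumptions and not values from the patent:

```python
import numpy as np

def label_point_cloud(depth, labels, T_wc, fx, fy, cx, cy):
    """Back-project every labelled pixel of a key frame to a world-frame
    3D point, attaching the pixel's semantic label to that point.
    depth: (H, W) metric depth; labels: (H, W) class ids;
    T_wc: 4x4 camera-to-world pose (the TWC above)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (us.ravel() - cx) / fx * z          # pinhole back-projection
    y = (vs.ravel() - cy) / fy * z
    cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]  # camera coords
    world = (T_wc @ cam)[:3].T                                    # world coords
    return world, labels.ravel()[valid]
```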
The closed-loop detection comprises global closed-loop detection and local closed-loop detection. In the node-parameter optimization equation under the global closed-loop constraint, H denotes the similarity transformation matrix, Pt the robot pose corresponding to the current frame image, and the remaining terms denote, respectively, the projection of point u in the current-frame depth map, the initial robot pose, and the initial time.
The node-parameter optimization equation under the local closed-loop constraint additionally contains a constraint on the deformation between the map built in the recent period and the map model built in the previous period.
The specific method of step 7 is as follows: the point cloud label probability distribution is initialized from the semantic segmentation result of the key frame and updated with recursive Bayes, P(ct | K0, K1, ..., Kt) = (1/Z)·P(ct | Kt)·P(ct−1 | K0, K1, ..., Kt−1), where ct denotes the class probability distribution of the point cloud at time t, {K0, K1, ..., Kt} denotes the set of key frames, Z denotes a normalization constant, and Kt denotes the key frame at time t.
The final semantic label of each point cloud is obtained by maximizing the probability distribution function:
L(P) = argmax P(c|K).
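A minimal sketch of this recursive Bayesian label fusion, assuming each map point stores a class probability vector and each new key frame supplies a per-point class likelihood from the segmentation network; this is a simplified reading of the update above, and all names are ours:

```python
import numpy as np

def update_label_distribution(prior, likelihood):
    """Recursive Bayesian update of one point's class distribution:
    posterior is proportional to the new key frame's likelihood
    times the distribution accumulated from earlier key frames."""
    post = prior * likelihood
    return post / post.sum()          # division by the sum plays the role of Z

def final_label(prob):
    """Maximize the fused distribution to get the point's semantic label."""
    return int(np.argmax(prob))

# Example: 5 classes, uniform prior, two key-frame observations.
p = np.full(5, 0.2)
p = update_label_distribution(p, np.array([0.1, 0.6, 0.1, 0.1, 0.1]))
p = update_label_distribution(p, np.array([0.2, 0.5, 0.1, 0.1, 0.1]))
print(final_label(p))  # 1
```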
the invention has the beneficial effects that:
1) The invention makes combined use of the color features of the RGB image and the geometric features of the depth image, which improves the performance of the image semantic segmentation network; network parameters are reasonably pruned through model compression, so accurate semantic segmentation results can be obtained quickly in indoor environments with many objects and severe occlusion.
2) The method maps the indoor environment directly with the Kinect depth camera carried by the robot, annotates the point cloud semantically in real time, and builds the indoor spatial semantic map incrementally; the mobile robot can then navigate intelligently within the indoor global semantic map, laying a foundation for completing human-computer interaction tasks such as automated driving and home service.
Drawings
FIG. 1 is a general block diagram of the system of the present invention.
FIG. 2 is a block diagram of an ICNet semantic segmentation network structure based on RGB-D images.
FIG. 3 is a schematic diagram of semantic segmentation network model compression.
Fig. 4 is a schematic diagram of a robot pose solving process.
FIG. 5 is a block diagram of a Randomized ferns real-time relocation and closed-loop detection algorithm flow.
Fig. 6(a) is a schematic diagram of robot pose optimization in the presence of a global closed loop situation.
Fig. 6(b) is a schematic diagram of robot pose optimization without global closed loop situation.
FIG. 7 is a block diagram of a point cloud fusion process.
Fig. 8 is a flow chart of the present invention.
Detailed Description
Embodiments of the invention will be described in further detail below with reference to the following figures and specific implementation details:
referring to fig. 1 and 8, a deep learning-based indoor environment semantic map construction method includes the following steps:
step 1, collecting an RGB-D image sequence of an indoor environment target scene by using a Kinect depth camera carried by a mobile robot;
step 2, performing feature extraction and processing on each acquired RGB-D frame with the RGB-D-based semantic segmentation network; the overall block diagram of the semantic segmentation network based on the RGB-D image is shown in FIG. 2, and the design steps are as follows:
1) in a Linux operating system, the Caffe deep learning framework is used to build an ICNet semantic segmentation network based on RGB-D images; the RGB image and the depth image are concatenated by a Concat layer to form the four input channels of the segmentation network;
2) the image semantic segmentation network is trained on the NYUD V2 indoor image standard dataset; each input RGB-D image pair is resized to 1/4, 1/2, and full resolution, and the three versions are used respectively as the inputs of the three branch networks;
3) the convolutional feature maps produced by the three input resolutions are fused by Eltwise layers in two successive feature-map fusion stages;
4) the final fused feature map is upsampled several times to restore the original input resolution, yielding an accurate semantic segmentation result;
the feature extraction and processing of the depth image increase the number of parameters of a network model, and model compression is needed to ensure the rapidity of the semantic segmentation network. During the network performance test, according to L of each convolution kernel1The norm reasonably subtracts network parameters to achieve the purpose of quickly obtaining the semantic segmentation result of the input image, and the specific flow is shown in fig. 3.
Step 3, the geometric pose estimation of the depth image and the photometric pose estimation of the RGB image are used together, and the robot pose Pt is obtained by minimizing the point-to-plane error and the photometric error, where the rotation matrix Rt ∈ SO(3) and the translation vector tt ∈ R³.
Point-to-plane error: Eicp = Σk ((vk − T·vk^c) · nk)²,
where vk^c is the kth vertex of the current-frame depth image, vk and nk are the corresponding vertex and normal of the previous frame image, and T is the current pose transformation matrix;
Photometric error: Ergb = Σu (It(u) − It−1(u′))²,
where It(u) is the gray value of the current-frame RGB image at point u and It−1(u′) is the gray value of point u of the current frame projected onto the previous-frame RGB image;
Joint loss function: Etrack = Eicp + 0.1·Ergb.
An updated pose transformation matrix T′ is obtained with the Gauss-Newton nonlinear least-squares method, and the updated camera pose of the current frame is Pt = T′·Pt−1; the specific solving process is shown in fig. 4. The key frame sequence used to build the point cloud map is then determined from the robot pose relationship between adjacent image frames;
Step 4, the robot pose and the point cloud map are optimized with the Randomized Ferns real-time relocalization and closed-loop detection algorithm; the overall flow is shown in fig. 5. Randomized Ferns encodes every input frame and uses a dedicated encoding and storage scheme to speed up image-similarity comparison. The encoding is organized as follows: the code of each frame consists of m block codes; each block code consists of n Ferns; and each Fern produces its code by comparing the pixel value of point x in channel c against a threshold θ.
Block codes are computed for every newly acquired image frame: the positions, channels, and threshold θ of the Ferns are randomly initialized with the function Ferns::generateFerns(), the similarity of the new input frame to previous image frames is compared via the block codes with the function Ferns::addFrame(), and the similarity determines whether a new key frame is added and whether a closed loop exists;
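For illustration only, a simplified sketch of the randomized-fern encoding described above (it is not the Ferns::generateFerns()/Ferns::addFrame() implementation; m, n, the frame size, and all names are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_ferns(m, n, h, w, channels=4):
    """m block codes, each made of n ferns; every fern is a random
    (row, col, channel, threshold) test, as described above."""
    return [
        [(rng.integers(h), rng.integers(w), rng.integers(channels), rng.random())
         for _ in range(n)]
        for _ in range(m)
    ]

def encode_frame(frame, ferns):
    """frame: (H, W, C) image normalized to [0, 1]. Each block code packs
    its n binary comparisons into one integer."""
    codes = []
    for block in ferns:
        code = 0
        for bit, (r, c, ch, theta) in enumerate(block):
            code |= int(frame[r, c, ch] > theta) << bit
        codes.append(code)
    return np.array(codes)

def similarity(codes_a, codes_b):
    """Fraction of block codes that agree between two encoded frames."""
    return np.mean(codes_a == codes_b)

# Example: compare two random 4-channel RGB-D frames.
ferns = generate_ferns(m=32, n=4, h=60, w=80)
f1 = rng.random((60, 80, 4)); f2 = rng.random((60, 80, 4))
print(similarity(encode_frame(f1, ferns), encode_frame(f2, ferns)))
```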
If a global closed loop exists, as shown in fig. 6(a), the pose between the current frame and the matched ith frame is computed with the tracking algorithm described above to obtain a pose transformation matrix; the images are uniformly sampled to establish constraints, and the node parameters, i.e. the robot poses, are optimized;
In the node-parameter optimization equation under the global closed-loop constraint, H denotes the similarity transformation matrix, Pt the robot pose corresponding to the current frame image, and the remaining terms denote, respectively, the projected point of the current-frame depth map in the camera coordinate system, the initial robot pose, and the initial time;
If no global closed loop exists, as shown in fig. 6(b), pose estimation is performed for the local closed loop and constraints are established to optimize the node parameters;
The node-parameter optimization equation under the local closed-loop constraint additionally contains a constraint on the deformation between the map constructed in the recent period and the map model constructed in the previous period;
Step 5, point cloud fusion and updating are performed with OpenGL; the specific flow is shown in FIG. 7. First, the 3D coordinates of each input vertex are converted to 2D coordinates, the color value of each vertex is computed from an illumination formula, and texture coordinates are generated; second, the processed vertices and the primitives composed of several vertices stored by the geometry shader are assembled, then clipped and rasterized; finally, the fragment shader computes the final color and depth values of the individual fragments generated by rasterization, and the fragments are stitched into the global point cloud map;
Step 6, the global three-dimensional point cloud map is generated and semantically annotated: the pixel-level semantic labeling results of the key frames are mapped onto the corresponding point cloud map. According to the robot pose transformation matrix TWC, the camera coordinates of each pixel are converted into world coordinates, and the two-dimensional semantic segmentation result of the key-frame image is then mapped onto the corresponding three-dimensional point cloud map according to the three-dimensional spatial coordinates of each pixel, completing the semantic annotation task of the three-dimensional point cloud map;
Step 7, because newly acquired image frames may assign different labels to the same point cloud, the semantic information of the constructed point cloud map is optimized according to the semantic labels of the newly acquired key frames: the point cloud label probability distribution is initialized from the semantic segmentation result of the key frame and updated with recursive Bayes, P(ct | K0, K1, ..., Kt) = (1/Z)·P(ct | Kt)·P(ct−1 | K0, K1, ..., Kt−1), where ct denotes the class probability distribution of the point cloud at time t, {K0, K1, ..., Kt} denotes the set of key frames, Z denotes a normalization constant, and Kt denotes the key frame at time t. The final semantic label of each point cloud is obtained by maximizing the probability distribution function:
L(P) = argmax P(c|K)
where P(c|K) denotes the label probability distribution of the point cloud given the key frames and L(P) denotes the final semantic category of the point cloud.

Claims (10)

1. A deep learning-based indoor environment three-dimensional semantic map construction method is characterized by comprising the following steps:
step 1, acquiring an RGB-D image sequence of an indoor environment target scene by using a Kinect depth camera carried by a mobile robot;
step 2, performing feature extraction and processing on each frame of the acquired RGB-D image by adopting a semantic segmentation network based on the RGB-D image;
step 3, estimating the corresponding robot pose information Pt according to each input frame image;
Step 4, optimizing the pose of the robot according to a Randomized ferns real-time repositioning and closed-loop detection algorithm;
step 5, constructing a point cloud map by utilizing the key frame, and fusing the point cloud corresponding to the newly acquired image frame with the constructed point cloud map;
step 6, mapping the pixel level semantic labeling result of the key frame to a corresponding point cloud map to obtain a semantic label of the key frame;
and 7, optimizing semantic labeling information of the constructed three-dimensional point cloud map by utilizing the newly acquired semantic labels of the key frames.
2. The method for building the indoor environment semantic map based on deep learning as claimed in claim 1, wherein in step 2, the feature extraction and processing are performed on each frame of the RGB-D image obtained by using the RGB-D image based semantic segmentation network, and the specific method is as follows: and performing pixel-level semantic prediction on each frame of input image by adopting an image cascade network ICNet and taking the depth information of the image as a fourth input channel of the network.
3. The method for building the indoor environment semantic map based on the deep learning as claimed in claim 1, wherein in step 3, the corresponding robot pose information Pt is estimated according to each frame of the input image by the following specific method: the geometric pose estimation of the depth image and the photometric pose estimation of the RGB image are comprehensively utilized to obtain the robot pose Pt by minimizing point-to-plane errors and photometric errors, and a key frame sequence for constructing the point cloud map is determined according to the robot pose relationship between adjacent image frames.
4. The method for building the indoor environment semantic map based on the deep learning of claim 1, wherein in the step 4, the specific method for optimizing the pose of the robot according to the Randomized ferns real-time relocation and closed-loop detection algorithm is as follows: and coding each frame of input image, calculating the similarity between frames of the image according to the coded value, judging whether a new key frame is added or not according to the similarity, and solving a similarity transformation matrix for the new key frame to carry out closed-loop detection.
5. The method for constructing the indoor environment semantic map based on the deep learning of claim 1, wherein in the step 5, the point cloud map construction is performed by using the key frame, and a specific method for fusing the point cloud corresponding to the newly acquired image frame with the constructed point cloud map is as follows: performing coordinate transformation on the point clouds corresponding to all the depth images to enable the subsequent point clouds and the first frame of point cloud to be in the same coordinate system; the optimal transformation relation is found between every two consecutive point clouds with overlapping, and the transformation relations are accumulated to all the point clouds, so that the current point clouds can be gradually fused into the reconstructed point cloud map.
6. The method for building the indoor environment semantic map based on the deep learning of claim 1, wherein in step 6, the specific method for mapping the pixel-level semantic labeling result of the key frame to the corresponding point cloud map is as follows: according to the robot pose transformation matrix TWC, the camera coordinates of each pixel point are converted into world coordinates, and finally the two-dimensional semantic segmentation result of the key frame image is mapped onto the corresponding three-dimensional point cloud map according to the three-dimensional space coordinates corresponding to each pixel point, completing the semantic annotation task of the three-dimensional point cloud map.
7. The method for building the indoor environment semantic map based on the deep learning of claim 1, wherein in step 7, the specific method for optimizing the semantic annotation information of the built three-dimensional point cloud map by using the semantic tags of the newly acquired key frames is as follows: initializing the probability distribution of point cloud labels according to the semantic segmentation result of the key frame, updating the probability distribution of the point cloud labels by adopting recursive Bayes, and obtaining the final semantic label of each point cloud by maximizing the probability distribution function.
8. The deep learning-based indoor environment semantic map construction method as claimed in claim 3, characterized in that, in step 3, the robot pose Pt is obtained by minimizing point-to-plane errors and photometric errors by the following specific method:
Point-to-plane error: Eicp = Σk ((vk − T·vk^c) · nk)², where vk^c is the kth vertex of the current-frame depth image, vk and nk are the corresponding vertex and normal of the previous frame image, and T is the current pose transformation matrix;
Photometric error: Ergb = Σu (It(u) − It−1(u′))², where It(u) is the gray value of the current-frame RGB image at point u and It−1(u′) is the gray value of point u of the current frame projected onto the previous-frame RGB image;
Joint loss function: Etrack = Eicp + 0.1·Ergb;
an updated robot pose transformation matrix T′ is solved by Gauss-Newton nonlinear least squares, and the updated pose of the current frame is Pt = T′·Pt−1.
9. The deep learning-based indoor environment semantic map construction method according to claim 4, characterized in that the closed-loop detection comprises global closed-loop detection and local closed-loop detection; in the node-parameter optimization equation under the global closed-loop constraint, H denotes the similarity transformation matrix, Pt the robot pose corresponding to the current frame image, and the remaining terms denote, respectively, the projection of point u in the current-frame depth map, the initial robot pose, and the initial time;
the node-parameter optimization equation under the local closed-loop constraint additionally contains a constraint on the deformation between the map built in a recent period and the map model built in a previous period.
10. The method for building an indoor environment semantic map based on deep learning according to claim 7, wherein the point cloud label distribution probability in step 7 is updated by the recursive Bayes rule P(ct | K0, K1, ..., Kt) = (1/Z)·P(ct | Kt)·P(ct−1 | K0, K1, ..., Kt−1), where ct denotes the class probability distribution of the point cloud at time t, {K0, K1, ..., Kt} denotes the set of key frames, Z denotes a normalization constant, and Kt denotes the key frame at time t.
CN201910408713.4A 2019-05-16 2019-05-16 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning Pending CN110243370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910408713.4A CN110243370A (en) 2019-05-16 2019-05-16 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning

Publications (1)

Publication Number Publication Date
CN110243370A true CN110243370A (en) 2019-09-17

Family

ID=67884125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910408713.4A Pending CN110243370A (en) 2019-05-16 2019-05-16 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning

Country Status (1)

Country Link
CN (1) CN110243370A (en)

CN115719363A (en) * 2022-10-31 2023-02-28 重庆理工大学 Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction
CN115719363B (en) * 2022-10-31 2024-02-02 重庆理工大学 Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction
CN115880690A (en) * 2022-11-23 2023-03-31 郑州大学 Method for quickly marking object in point cloud under assistance of three-dimensional reconstruction
CN115880690B (en) * 2022-11-23 2023-08-11 郑州大学 Method for quickly labeling objects in point cloud under assistance of three-dimensional reconstruction
CN115638788A (en) * 2022-12-23 2023-01-24 安徽蔚来智驾科技有限公司 Semantic vector map construction method, computer equipment and storage medium
CN115638788B (en) * 2022-12-23 2023-03-21 安徽蔚来智驾科技有限公司 Semantic vector map construction method, computer equipment and storage medium
CN115655262A (en) * 2022-12-26 2023-01-31 广东省科学院智能制造研究所 Deep learning perception-based multi-level semantic map construction method and device
CN116664681B (en) * 2023-07-26 2023-10-10 长春工程学院 Semantic perception-based intelligent collaborative augmented reality system and method for electric power operation
CN116664681A (en) * 2023-07-26 2023-08-29 长春工程学院 Semantic perception-based intelligent collaborative augmented reality system and method for electric power operation
CN117315092A (en) * 2023-10-08 2023-12-29 玩出梦想(上海)科技有限公司 Automatic labeling method and data processing equipment
CN117315092B (en) * 2023-10-08 2024-05-14 玩出梦想(上海)科技有限公司 Automatic labeling method and data processing equipment
CN118031976A (en) * 2024-04-15 2024-05-14 中国科学院国家空间科学中心 Man-machine cooperative system for exploring unknown environment

Similar Documents

Publication Publication Date Title
CN110243370A (en) A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning
CN109816725B (en) Monocular camera object pose estimation method and device based on deep learning
CN108416840B (en) Three-dimensional scene dense reconstruction method based on monocular camera
Wu et al. Mars: An instance-aware, modular and realistic simulator for autonomous driving
CN110363816B (en) Mobile robot environment semantic mapping method based on deep learning
CN110570429B (en) Lightweight real-time semantic segmentation method based on three-dimensional point cloud
CN111563442A (en) SLAM method and system for fusing point cloud and camera image data based on laser radar
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
CN110298884A (en) A kind of position and orientation estimation method suitable for monocular vision camera in dynamic environment
GB2581808A (en) Scene representation using image processing
CN113673425A (en) Multi-view target detection method and system based on Transformer
CN113392584B (en) Visual navigation method based on deep reinforcement learning and direction estimation
CN113850900B (en) Method and system for recovering depth map based on image and geometric clues in three-dimensional reconstruction
CN113554039B (en) Method and system for generating optical flow maps of dynamic images based on a multi-attention mechanism
CN116453121B (en) Training method and device for lane line recognition model
CN112184780A (en) Moving object instance segmentation method
CN116912404A (en) Laser radar point cloud mapping method for scanning distribution lines in dynamic environment
CN116229247A (en) Indoor scene semantic segmentation method, device, equipment and medium
Huang et al. Overview of LiDAR point cloud target detection methods based on deep learning
Akagic et al. Computer vision with 3d point cloud data: Methods, datasets and challenges
CN115050010B (en) Transfer learning method for three-dimensional object detector
Brynte et al. Pose proposal critic: Robust pose refinement by learning reprojection errors
CN113487741B (en) Dense three-dimensional map updating method and device
CN115272666A (en) Online point cloud semantic segmentation method and device, storage medium and electronic equipment
Hazarika et al. Multi-camera 3D object detection for autonomous driving using deep learning and self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190917)