CN110706280A: Lightweight semantic-driven sparse reconstruction method based on 2D-SLAM

Info

Publication number: CN110706280A
Application number: CN201910929063.8A
Authority: CN (China)
Prior art keywords: SLAM, semantic, image, target object, target
Other languages: Chinese (zh)
Inventors: Zhang Kejia (张珂嘉), Chen Ning (陈宁)
Assignee (current and original): Chengdu Jiaweili Robot Technology Co Ltd
Application filed by Chengdu Jiaweili Robot Technology Co Ltd
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features


Abstract

The invention provides a lightweight semantic-driven sparse reconstruction method based on 2D-SLAM that can accurately identify special target objects. The method comprises the following steps: (1) presetting a semantic target type, and triggering a 3D reconstruction algorithm module when an image semantic recognition algorithm judges that an object of the preset semantic target type appears in a picture captured by the camera; (2) providing the camera's position and orientation on the 2D map to the 3D reconstruction algorithm module through a 2D-SLAM module, and performing 3D sparse reconstruction of the target object from step (1) with an image feature extraction and matching algorithm to obtain the 3D space coordinates of the target object; (3) placing the 3D space coordinates from step (2) on the 2D-SLAM map to obtain the coordinates of the target object on the 2D-SLAM map together with its type.

Description

Lightweight semantic driven sparse reconstruction method based on 2D-SLAM
Technical Field
The invention relates to cleaning robots, and in particular to a lightweight semantic-driven sparse reconstruction method based on 2D-SLAM.
Background
On top of its 2D-SLAM capability, a cleaning robot needs to understand special landmarks such as a pool of water on the ground, a door, or a vase, and bypass them or handle them appropriately. Two functions must therefore be supported: semantic understanding and spatial localization of target objects.
How to obtain both semantic understanding (object type) and spatial understanding (3D or 2D extent) of a special object on top of the basic 2D-SLAM capability, without requiring large computing power, is a major problem.
For example, a pool of water on the ground and an upright picture of water are indistinguishable by image recognition alone; the spatial location of the "water" must be used to aid identification and planning. If it is a picture, its position should coincide with a boundary detected by the lidar; if it is water lying flat on the ground, a single image cannot establish this. Moreover, 3D reconstruction, which gives the image a spatial structure, is very computationally intensive: the required computing power is essentially PC-level (as in autonomous cars and the like).
Disclosure of Invention
In view of the above, an object of the present invention is to provide a lightweight semantic-driven sparse reconstruction method based on 2D-SLAM that can accurately identify special target objects.
The lightweight semantic-driven sparse reconstruction method based on 2D-SLAM disclosed by the invention comprises the following steps:
(1) presetting a semantic target type, and triggering a 3D reconstruction algorithm module when an image semantic recognition algorithm judges that an object of the preset semantic target type appears in a picture captured by the camera;
(2) providing the camera's position and orientation on the 2D map to the 3D reconstruction algorithm module through a 2D-SLAM module, and performing 3D sparse reconstruction of the target object from step (1) with an image feature extraction and matching algorithm to obtain the 3D space coordinates of the target object;
(3) placing the 3D space coordinates from step (2) on the 2D-SLAM map to obtain the coordinates of the target object on the 2D-SLAM map together with its type.
According to the invention, 2D-SLAM is used to construct the map, and 3D reconstruction is used to give the image a spatial structure. In particular, 3D reconstruction is not performed in real time; it is performed only when a target object is recognized in the image. Moreover, 3D reconstruction is not applied to all feature points of the target object, but only to the feature points that most accurately constrain the 3D structure, which reduces the amount of computation.
Preferably, the cleaning robot adopts a corresponding cleaning strategy according to the coordinates of the target object on the 2D-SLAM map obtained in step (3) and the type of the target object.
According to the invention, combining image recognition with the cleaning-robot field has the beneficial effect that the object to be handled can be accurately understood, so that a corresponding cleaning strategy can be adopted.
Preferably, the cleaning strategy comprises the cleaning robot bypassing the object, sweeping only without mopping, increasing the dust-collection wind speed, and/or raising an alarm.
Preferably, in step (1), the camera performs image recognition on the scene in front of it; a semantic recognition algorithm divides the image of the scene into image blocks according to the object type in each block, each image block corresponding to a type and a confidence; and when the confidence of an image block is higher than a threshold, the object type in that block is judged to be consistent with a preset semantic target type.
Preferably, in step (2), feature points in the image are extracted by an image feature extraction and matching algorithm, the feature points related to the semantic target are matched to obtain the motion relationship of the target object across different images, and the 3D space coordinates of the target object are obtained from this motion relationship and the imaging model.
Preferably, the boundary of the reconstructed object is optimized when the position and orientation information provided by the 2D-SLAM module is inaccurate.
Preferably, SBA nonlinear optimization may be employed for this.
The invention uses a camera and image recognition technology to perform semantic-level detection of special target objects. When a special object is detected with sufficient confidence, 3D reconstruction of that object is initiated.
The camera is used for 3D reconstruction of the target object: for example, 5-10 frames of images can be collected at different points around the target object, and the 3D spatial structure of the target object is obtained using 3D reconstruction and nonlinear optimization.
The 3D structure of the target object is then mapped onto the 2D map to obtain the working range of the cleaning robot.
Advantageous effects:
The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method uses 2D-SLAM to construct the map and 3D reconstruction to give the image a spatial structure. In particular, 3D reconstruction is not performed in real time but only when a target object is recognized in the image; and it is not applied to all feature points of the target object but only to the feature points that most accurately constrain the 3D structure, which reduces the amount of computation.
Drawings
FIG. 1 is a general algorithm framework diagram of the lightweight semantic driven sparse reconstruction method based on 2D-SLAM of the present invention;
FIG. 2 is a schematic diagram of the image semantic recognition algorithm shown in FIG. 1;
FIG. 3 is a schematic diagram of object recognition in an image semantic recognition algorithm;
FIG. 4 is a schematic diagram of semantic segmentation in an image semantic recognition algorithm;
FIG. 5 is a schematic diagram of the image feature extraction and matching algorithm shown in FIG. 1;
FIG. 6 is a schematic diagram of a pinhole camera model of the 3D sparse reconstruction algorithm shown in FIG. 1;
FIG. 7 is a schematic diagram of the epipolar constraint of the 3D sparse reconstruction algorithm shown in FIG. 1;
FIG. 8 is a schematic of SBA non-linear optimization;
FIG. 9 is a flow chart of the lightweight semantic driven sparse reconstruction method based on 2D-SLAM of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings.
The lightweight semantic-driven sparse reconstruction method based on 2D-SLAM is, more precisely, a lightweight semantic-driven 3D sparse reconstruction method based on 2D-SLAM.
Aiming at problems in the prior art such as the large amount of computation required for 3D reconstruction, the invention provides a lightweight semantic-driven sparse reconstruction method based on 2D-SLAM, which comprises the following steps.
1. Preset a semantic target type, judge through an image semantic recognition algorithm whether an object of the same type as the preset semantic target appears in the picture captured by the camera, and if so, start 3D reconstruction.
2. Provide the position and orientation of the camera on the 2D map to the 3D reconstruction algorithm module through the 2D-SLAM module, and perform 3D sparse reconstruction of the target object from step 1 with an image feature extraction and matching algorithm to obtain the 3D space coordinates of the target object.
3. Place the 3D space coordinates from step 2 on the 2D-SLAM map to obtain the coordinates of the target object on the 2D-SLAM map together with its type.
4. The cleaning robot adopts a corresponding cleaning strategy according to the coordinates of the target object on the 2D-SLAM map obtained in step 3 and the type of the target object.
In one embodiment, the target objects in the above steps include feces, water, nails, people, flames, and the like.
In one embodiment, the cleaning strategy in step 4 includes having the cleaning robot bypass the object, sweep only without mopping, increase the dust-collection wind speed, raise an alarm, and the like.
FIG. 1 is the general algorithm framework diagram of the 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of the present invention. As shown in FIG. 1, the image semantic recognition algorithm is the trigger condition of the whole method. Various existing algorithms can be used for image semantic recognition; when the recognition confidence of a semantic target is higher than a threshold, the 3D reconstruction process is triggered and the position of the semantic target's region in the image is provided to the 3D reconstruction algorithm. That is, the invention does not perform 3D reconstruction in real time: 3D reconstruction is performed only if the target object is recognized in the image.
Specifically, semantic target types are preset; these can be the types of objects that influence the cleaning strategy, such as water, pet excrement, nails, people, and flames. The camera then performs image recognition on the scene in front of it: a semantic recognition algorithm divides the image of the scene into image blocks according to the object type in each block, with each image block corresponding to a type and a confidence. When the confidence of an image block is higher than a threshold, the object type in that block is judged to be consistent with a preset semantic target type, and 3D reconstruction is started.
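As a minimal illustration of this trigger logic, the following Python sketch checks detections against the preset semantic target types; the Detection structure, the label set, and the 0.8 threshold are illustrative assumptions and are not fixed by the invention:

    from dataclasses import dataclass

    # Assumed label set and threshold; the invention only requires "a preset
    # semantic target type" and "a confidence higher than a threshold".
    SEMANTIC_TARGETS = {"water", "pet excrement", "nail", "person", "flame"}
    CONFIDENCE_THRESHOLD = 0.8

    @dataclass
    class Detection:
        label: str         # object type reported by the recognizer
        confidence: float  # recognition confidence in [0, 1]
        box: tuple         # (x, y, w, h) region of the object in the image

    def targets_triggering_reconstruction(detections):
        """Return the detections that should start (non-real-time) 3D
        reconstruction: preset semantic targets recognized with
        sufficient confidence."""
        return [d for d in detections
                if d.label in SEMANTIC_TARGETS
                and d.confidence > CONFIDENCE_THRESHOLD]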
The image feature extraction and matching algorithm extracts feature points in the image (various prior-art algorithms can be used for the extraction) and matches the feature points related to the semantic target to obtain the motion relationship of the target object across different images. This is described in detail below in connection with the mathematical models of FIG. 6 and FIG. 7, which may be existing 3D reconstruction models.
The 2D-SLAM module provides the 3D reconstruction algorithm with the position and orientation of the camera on the 2D map, i.e. it provides the initial value of the motion relationship between images. 2D-SLAM is a class of algorithms that can be based on lidar, sonar, or vision; its output is 2D real-time positioning and map information. The positioning information describes the position and attitude (x, y, theta) of the camera, and the map information is the camera's reference frame; theta is the heading angle of the camera, i.e. the direction of its optical axis. The 2D-SLAM module may use various existing means to obtain this position and orientation information.
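To illustrate how such a pose can seed the 3D reconstruction, the sketch below lifts (x, y, theta) to a 4x4 transformation matrix T(R, t). Placing the camera in the z = 0 map plane with theta as the yaw of the optical axis is a modelling assumption, not something the invention prescribes:

    import numpy as np

    def pose_2d_to_transform(x, y, theta):
        """Lift a 2D-SLAM pose (x, y, theta) to a 4x4 transform T(R, t),
        assuming the camera moves in the z = 0 map plane and theta is the
        heading of the optical axis."""
        c, s = np.cos(theta), np.sin(theta)
        T = np.eye(4)
        T[:3, :3] = np.array([[c,  -s,  0.0],
                              [s,   c,  0.0],
                              [0.0, 0.0, 1.0]])  # rotation about the vertical axis
        T[:3, 3] = [x, y, 0.0]                   # translation in the map plane
        return T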
For the feature points of the semantic target, the 3D sparse reconstruction algorithm takes the camera motion relationship (from the 2D-SLAM module) and the camera imaging model (based on the mathematical models of FIG. 6 and FIG. 7) as input, and computes the 3D coordinates of those feature points.
Furthermore, if the camera pose provided by the 2D-SLAM module is sufficiently accurate, the 3D sparse reconstruction algorithm outputs a sufficiently accurate 3D spatial distribution of the semantic target. If the camera pose is not accurate enough, SBA nonlinear optimization can be performed on the epipolar-constraint error of the semantic target to obtain a more accurate spatial distribution of the target.
Further, FIG. 2 is a schematic diagram of the image semantic recognition algorithm shown in FIG. 1. The image semantic recognition algorithm comprises object recognition (object detection) and semantic segmentation. In recent years, deep learning methods have surpassed traditional algorithms on these tasks, and sometimes even human judgment. The invention builds on deep learning methods but is not limited to deep-learning image processing techniques. As shown in FIG. 2, the two processes are similar.
FIG. 3 is a schematic diagram of object recognition in the image semantic recognition algorithm. The output of object recognition is shown in FIG. 3; the key information for each semantic target is:
Type: a subset of the object types supported by the recognition algorithm, e.g. "person", "car", "horse", "dog" in FIG. 3;
Position, i.e. the location of each semantic target in the whole picture: in FIG. 3 this is a rectangular area; some object recognition models (e.g. Mask R-CNN) can also output the actual boundary of the object;
Confidence: e.g. 0.991 in FIG. 3 indicates that the probability of a "person" inside the rectangle reaches 99.1%.
FIG. 4 is a schematic diagram of semantic segmentation in the image semantic recognition algorithm. The semantic segmentation algorithm outputs an understanding of the "meaning" of different regions of the image, i.e. a classification at the pixel level. As shown in FIG. 4, cars, people, sidewalks, and motorways play different "roles", and this difference in "meaning" determines the "type", and hence the spatial division, of each pixel in the picture. That is, the picture is divided according to semantics into picture blocks, each of which carries a semantic label.
FIG. 5 is a schematic diagram of the image feature extraction and matching algorithm shown in FIG. 1. Existing image feature extraction algorithms include SURF, SIFT, ORB, and the like; their output is the feature points in the image and their descriptors. Images captured by the camera during motion share repeated objects between adjacent frames (provided the camera does not move too fast for the shutter). The repeated objects have the same feature points, and the correspondences between them are found by a matching algorithm. As shown in FIG. 5, the camera photographs the cart from different viewing angles, and the algorithm associates the feature points of the cart across the different pictures.
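A minimal sketch of this stage using OpenCV's ORB implementation; the feature count, the brute-force cross-check matcher, and the cap on retained matches are illustrative choices, not requirements of the invention:

    import cv2

    def match_orb_features(img1, img2, max_matches=200):
        """Extract ORB feature points and descriptors in two adjacent
        frames and find their correspondences, as in the matching stage
        of FIG. 5."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        # Hamming distance suits ORB's binary descriptors; cross-checking
        # keeps only mutually consistent matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        return kp1, kp2, matches[:max_matches]  # keep the strongest matches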
FIG. 6 is a schematic diagram of the pinhole camera model of the 3D sparse reconstruction algorithm shown in FIG. 1. The 3D sparse reconstruction algorithm computes the spatial position (x, y, z) of a target point or a group of target points from the pinhole camera model, the epipolar constraint, and the pose relationship between two adjacent frames. The mathematical description of the model is:

s * Puv = K * T(R, t) * Pw

where:
s is the scale factor (the depth of the target point along the optical axis);
Puv is the position (u, v, 1) of the target point T on the normalized imaging plane;
K is the intrinsic parameter matrix of the camera, a constant;
Pw is the 3D coordinate (x, y, z) of the target point T; note that this is not the world-frame 3D coordinate of the point but its 3D coordinate in the camera (imaging-plane) reference frame;
T(R, t) is the camera transformation matrix;
R is the rotation matrix;
t is the translation vector.
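The projection equation can be evaluated directly; the following sketch applies it to one point, using the usual homogeneous-coordinate conventions (assumed here rather than stated in the text):

    import numpy as np

    def project_point(K, T, Pw_h):
        """Evaluate s * Puv = K * T(R, t) * Pw for one homogeneous point
        Pw_h = (x, y, z, 1); K is the 3x3 intrinsic matrix and T the 4x4
        camera transform."""
        p = K @ (T[:3, :] @ Pw_h)  # 3-vector (s*u, s*v, s)
        s = p[2]                   # scale factor: depth of the point
        return p / s, s            # Puv = (u, v, 1) and the depth s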
FIG. 7 is a schematic diagram of the epipolar constraint of the 3D sparse reconstruction algorithm shown in FIG. 1. As shown in FIG. 7, a system of equations can be constructed from the "matching points" (i.e. the same physical points) seen in the two pictures:

1) s * Puv = K * Pw
2) s' * Puv' = K * [T(R, t)](rows 1-3) * Pw

where Puv is the coordinate of the target point in the first picture, Puv' its coordinate in the second picture, and [T(R, t)](rows 1-3) denotes the first three rows of the transformation matrix. Solving this system yields R, t, and Pw; Pw is the 3D coordinate of the target point.
Computing the target point directly from the 2D-SLAM module's data:
When the 2D-SLAM module can provide an accurate camera pose and relative motion (R, t), Pw, i.e. the (x, y, z) of the target point, can be computed directly from equation 2).
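Solving the two projection equations for Pw under a known relative motion is exactly a triangulation; below is a sketch using OpenCV's linear triangulation as a stand-in, where pts1 and pts2 are assumed to be matched Nx2 float pixel arrays:

    import cv2
    import numpy as np

    def triangulate_with_slam_pose(K, R, t, pts1, pts2):
        """Compute Pw for matched points when the 2D-SLAM module supplies
        an accurate relative motion (R, t) between the two frames."""
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first view:  [I | 0]
        P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second view: [R | t]
        Pw_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
        return (Pw_h[:3] / Pw_h[3]).T                      # Nx3 points (x, y, z)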
Using the 2D-SLAM module's data to improve accuracy:
When the variance of the (R, t) provided by the 2D-SLAM module and the variance of the (R', t') computed from the epipolar constraint are both stable, a filtering algorithm can be used to correct one estimate with the other, yielding more accurate data from which to compute the coordinate Pw of the target object.
Optionally, when the distribution of the object's boundary does not follow a normal distribution, which indicates that the reconstruction is not accurate enough, the invention can also apply SBA nonlinear optimization to optimize the boundary of the reconstructed object. When the R, t and pose computed by the above algorithm are still not accurate enough, several adjacent frames can be used for nonlinear optimization to obtain more accurate target coordinates, as shown in FIG. 8. The method constructs a graph whose nodes are the camera poses (x, y, theta)^T and the target point coordinates (x, y, z)^T.
The constraint edges are the motion (R, t) between neighboring camera poses and the camera projection model between the camera and the target point, i.e. s * Puv = K * T * Pw. The graph is finally solved with a nonlinear optimization library such as g2o or ceres.
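The quantity such a library minimizes is the reprojection error of the graph's edges. The sketch below refines a single target point over several frames with SciPy's least-squares solver, as an illustrative stand-in for g2o or ceres; the poses and pixel observations are assumed given:

    import numpy as np
    from scipy.optimize import least_squares

    def refine_point(K, poses, observations, Pw0):
        """Refine one target point Pw by minimizing reprojection error over
        several frames; `poses` is a list of (R, t) and `observations` the
        matching (u, v) pixel measurements, one per frame."""
        def residual(Pw):
            errs = []
            for (R, t), (u, v) in zip(poses, observations):
                p = K @ (R @ Pw + t)  # s * Puv = K * T(R, t) * Pw
                errs.extend([p[0] / p[2] - u, p[1] / p[2] - v])
            return errs
        return least_squares(residual, Pw0).x  # optimized (x, y, z)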
FIG. 9 is a flow chart of the lightweight semantic driven sparse reconstruction method based on 2D-SLAM of the present invention.
The invention aims to provide semantic characterization and 3D coordinate reconstruction of target objects under the limited computational resources of a 2D-SLAM system. It is characterized in that:
1. 3D reconstruction is performed only when a target object is detected and the recognition confidence is sufficiently high; i.e. non-real-time 3D reconstruction, reducing computation and energy consumption;
2. 3D reconstruction is performed only on the feature points of the target object in the image, not on the whole image; i.e. sparse reconstruction: only feature points are reconstructed, not all points;
3. the feature points can further be down-selected (choosing points with high matching scores), so it is a sparse 3D reconstruction; i.e. points with high matching values are selected, and the 3D model is constructed from as few points as possible;
4. no large computing power is needed;
5. it is an inexpensive semantic SLAM scheme based on 2D-SLAM.
More specifically:
1. Object recognition in image processing typically yields the object type, location, and confidence. The location information is usually a box, a mask, or the border of the object on the picture.
2. When the confidence is above a certain threshold, the robot enters 3D reconstruction mode (steps c and d are sketched in code after this list):
a) control the robot to walk transversely or obliquely;
b) select 5-10 frames of pictures in the area where the target object is observed, and perform ORB feature extraction and matching on each pair of adjacent pictures (OpenCV features2d module);
c) if a matched point falls within the object recognition area, it is used for 3D reconstruction; other points are deleted;
d) solve the motion (R, t) between two adjacent frames using the epipolar constraint (based on the OpenCV calib3d module); odometry data can be used here as an observation correction;
e) perform Bundle Adjustment on all 5-10 frames of data to obtain the optimized camera motion (R, t), camera poses (x, y, theta), and the 3D coordinate values of the target feature points (if step d is accurate after odometry correction, this step can be skipped).
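A sketch of steps c) and d) with OpenCV's calib3d module; the box filter and the RANSAC setting are illustrative, and the recovered t is only determined up to scale, which is why odometry data is useful as a correction:

    import cv2
    import numpy as np

    def motion_from_matches(K, pts1, pts2, box):
        """Keep only matches whose first-frame point falls inside the object
        recognition box (step c), then recover the relative motion (R, t)
        from the essential matrix / epipolar constraint (step d)."""
        x, y, w, h = box
        inside = [(p1, p2) for p1, p2 in zip(pts1, pts2)
                  if x <= p1[0] <= x + w and y <= p1[1] <= y + h]
        p1 = np.asarray([p for p, _ in inside], dtype=np.float64)
        p2 = np.asarray([p for _, p in inside], dtype=np.float64)
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
        return R, t  # t is up to scale; odometry can fix the scale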
3. Zeroing the z value of the target object's 3D coordinates yields a region on the 2D map. This provides both the semantic understanding (the object type from image recognition) and the spatial position (the target object's region) of the relevant area.
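As a minimal sketch of this projection step, the convex hull below is one illustrative way to turn the flattened points into a region on the map; the invention only requires zeroing z and taking the resulting area:

    import cv2
    import numpy as np

    def target_region_on_2d_map(points_3d):
        """Zero (ignore) the z value of the reconstructed target points and
        return a polygon enclosing the remaining (x, y) map coordinates."""
        xy = np.asarray(points_3d, dtype=np.float32)[:, :2]  # drop z
        hull = cv2.convexHull(xy)     # boundary polygon of the region
        return hull.reshape(-1, 2)    # Nx2 coordinates on the 2D-SLAM map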
Furthermore, the following situations may arise:
The image feature points of the object may not be abundant. Proposal: abandon the 3D reconstruction.
Feature points on the boundary of the object may be missing, resulting in an incomplete boundary (a narrowed range). Proposal: an enhanced 3D reconstruction (circling the object to acquire more image data) can be performed on the basis of the previous 3D reconstruction result.
The foregoing describes preferred embodiments of the invention in detail. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, all technical solutions obtainable by those skilled in the art through logical analysis, reasoning, or limited experiments based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (7)

1. A lightweight semantic-driven sparse reconstruction method based on 2D-SLAM, characterized by comprising the following steps:
(1) presetting a semantic target type, and triggering a 3D reconstruction algorithm module when an image semantic recognition algorithm judges that an object of the preset semantic target type appears in a picture captured by the camera;
(2) providing the camera's position and orientation on the 2D map to the 3D reconstruction algorithm module through a 2D-SLAM module, and performing 3D sparse reconstruction of the target object from step (1) with an image feature extraction and matching algorithm to obtain the 3D space coordinates of the target object;
(3) placing the 3D space coordinates from step (2) on the 2D-SLAM map to obtain the coordinates of the target object on the 2D-SLAM map together with its type.
2. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 1, further comprising:
(4) the cleaning robot adopting a corresponding cleaning strategy according to the coordinates of the target object on the 2D-SLAM map obtained in step (3) and the type of the target object.
3. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 2, wherein the cleaning strategy comprises the cleaning robot bypassing the object, sweeping only without mopping, increasing the dust-collection wind speed, and/or raising an alarm.
4. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 2, wherein in step (1) the camera performs image recognition on the scene in front of it, a semantic recognition algorithm divides the image of the scene into image blocks according to the object type in each block, each image block corresponding to a type and a confidence, and when the confidence of an image block is higher than a threshold, the object type in that block is judged to be consistent with a preset semantic target type.
5. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 2, wherein in step (2) feature points in the image are extracted by an image feature extraction and matching algorithm, the feature points related to the semantic target are matched to obtain the motion relationship of the target object across different images, and the 3D space coordinates of the target object are obtained from this motion relationship and the imaging model.
6. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 1, wherein the boundary of the reconstructed object is optimized when the position and orientation information provided by the 2D-SLAM module is inaccurate.
7. The 2D-SLAM-based lightweight semantic-driven sparse reconstruction method of claim 6, wherein SBA nonlinear optimization is employed.
Application CN201910929063.8A (priority date 2018-09-28, filed 2019-09-28): Lightweight semantic-driven sparse reconstruction method based on 2D-SLAM; status: Pending; published as CN110706280A (en).

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018111378873 2018-09-28
CN201811137887 2018-09-28

Publications (1)

Publication Number: CN110706280A (en)
Publication Date: 2020-01-17

Family

ID=69197894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910929063.8A Pending CN110706280A (en) 2018-09-28 2019-09-28 Lightweight semantic driven sparse reconstruction method based on 2D-SLAM

Country Status (1)

Country Link
CN (1) CN110706280A (en)



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20200117)