CN108873908B - Robot city navigation system based on combination of visual SLAM and network map


Info

Publication number
CN108873908B
Authority
CN
China
Prior art keywords: wheeled robot, path, map, robot, dimensional
Legal status
Active
Application number
CN201810764916.2A
Other languages
Chinese (zh)
Other versions
CN108873908A (en)
Inventor
仲元红
张钊源
丁睿
张顺
黄关
张静
成欣雨
周昭坤
Current Assignee
Seven Teng Robot Co., Ltd.
Original Assignee
Chongqing University
Priority date: 2018-07-12
Filing date: 2018-07-12
Publication date: 2020-01-24
Application filed by Chongqing University
Priority to CN201810764916.2A
Publication of CN108873908A
Application granted
Publication of CN108873908B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a robot city navigation system based on the combination of visual SLAM and a network map. On the basis of using global path planning to determine, in the network map, the urban road navigation path by which the wheeled robot reaches the target location, the system further optimizes that navigation path using the spatial occupation of environmental objects in the robot's current area, obtained through visual SLAM processing. By combining visual SLAM with the network map in this way, the system overcomes the technical limitations and deficiencies of navigation control that relies on global path planning alone: it provides the wheeled robot with more detailed positioning and path navigation, ensures that the robot can pass smoothly along the urban road navigation path after local obstacle-avoidance optimization, and thus better guarantees the accuracy and traffic effectiveness of the robot's path planning and navigation.

Description

Robot city navigation system based on combination of visual SLAM and network map
Technical Field
The invention relates to the technical fields of robot navigation path planning and visual SLAM processing, and in particular to a robot city navigation system based on the combination of visual SLAM and a network map.
Background
Navigation has long been a key research topic in the field of mobile robots. It is closely related to robot positioning, 3D map reconstruction, path planning and similar technologies. In today's widely pursued automatic driving applications, path planning and navigation of the wheeled robot are important technical components. Path planning means planning an optimal collision-free path from a starting point to a target point for the wheeled robot in an environment containing obstacles, according to certain performance indexes or requirements. In actual operation, problems such as the large amount of environmental information, rapidly changing obstacles and inaccurate robot positioning often arise and affect the accuracy and effectiveness of path planning and navigation.
Global path planning based on prior environmental information searches for an optimal or suboptimal traffic path from a starting position node to a target position node according to an existing environment map, and is the navigation path planning method most commonly used at present. Under current technical conditions, the environment map most often used is a network map. A network map is a map stored and consulted digitally using computer technology; it generally stores information as vector images, so the map can be enlarged, reduced or rotated without affecting the display effect. The presentation precision of network maps has reached the level of individual urban roads: the differences between different urban roads can be clearly displayed, the traffic width of each road can be recorded, and the degree of road congestion can be presented by acquiring real-time traffic information over the network. Traffic path planning methods that combine a network map with the Global Positioning System (GPS) for global path planning have also matured into widely applied schemes, such as shortest-path planning, fastest-route planning, and planning that avoids or prioritizes specific routes.
However, global path planning can generally only be applied in environments where obstacles are static or the environmental information is known. When obstacles in the environment keep moving, or an accurate prior map cannot be obtained in advance, global path planning is no longer effective. In addition, the GPS positioning used in global path planning is poorly suited to indoor environments or environments with complex obstacles: it can neither locate the robot in real time at the precision of its passable clearance nor provide information about obstacles in the immediate surroundings. These limitations restrict the accuracy and effectiveness of path planning and navigation that rely on global path planning alone, and often mean that the wheeled robot cannot actually and effectively follow the navigation path.
In recent years, Simultaneous Localization And Mapping (SLAM) systems based on laser radar or visual images have developed rapidly. SLAM technology can build an environment map from sensor measurements while estimating the robot's own position and attitude. Visual SLAM in particular, using a binocular vision sensor or camera, can acquire as rich a quantity of directional feature information as possible at a lower cost than laser radar, and use it to perform localization and to construct a path-feature map of the local area environment. At the same time, by matching the visual depth images obtained from the images collected by the binocular sensor or camera, it can effectively provide three-dimensional occupancy information of obstacles and reconstruct a local 3D scene map.
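As an illustrative, non-limiting sketch of the depth-from-stereo idea mentioned above (not taken from the patent), the following Python fragment shows how a per-pixel depth estimate might be recovered from a rectified binocular image pair with OpenCV's semi-global block matcher; the focal length, baseline, matcher parameters and synthetic test images are placeholder assumptions.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0     # assumed focal length in pixels (placeholder)
BASELINE_M = 0.12    # assumed camera baseline in metres (placeholder)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Return a per-pixel depth estimate (metres) from rectified grey images."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,   # must be divisible by 16
                                    blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mask invalid matches
    return FOCAL_PX * BASELINE_M / disparity    # Z = f * B / d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (240, 320), dtype=np.uint8)   # synthetic texture
    right = np.roll(left, -8, axis=1)                         # simulated 8-px disparity
    depth = depth_from_stereo(left, right)
    print("median estimated depth:", np.nanmedian(depth), "m")
```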
Therefore, how to make full use of these technologies to provide a more accurate and effective solution to the urban navigation problem of wheeled robots has become a hot research direction in the industry.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a robot city navigation system based on the combination of visual SLAM and a network map, which provides more detailed positioning and path navigation for a wheeled robot, so as to better ensure the accuracy and traffic effectiveness of its path planning and navigation.
In order to achieve the purpose, the robot city navigation system scheme based on the combination of the visual SLAM and the network map adopted by the invention is as follows:
the robot city navigation system based on the combination of the visual SLAM and the network map comprises a data communication processing module, a visual SLAM processing module, a three-dimensional scene reconstruction module, a global path planning module, a local path planning module and a robot control instruction generation module;
the data communication processing module is used for acquiring a visual depth image acquired by a binocular camera on the wheeled robot and geographical position information acquired by a positioning sensor on the wheeled robot;
the visual SLAM processing module is used for carrying out real-time positioning and map construction processing on the acquired visual depth image in real time, so that a regional path characteristic map is incrementally established according to the change condition of a region corresponding to the visual depth image, and the position state information and the posture information of the wheeled robot in the regional path characteristic map are determined in real time;
the three-dimensional scene reconstruction module is used for extracting three-dimensional occupation information of an environmental object from the acquired visual depth image in real time, incrementally reconstructing and establishing an area three-dimensional scene map corresponding to the area path feature map according to the change condition of the area corresponding to the visual depth image based on the area path feature map obtained by the processing of the visual SLAM processing module and the position state information and the posture information of the wheeled robot in the area path feature map, presenting the three-dimensional occupation condition of the environmental object in the position area where the wheeled robot is located, and determining the position state information and the posture information of the wheeled robot in the area three-dimensional scene map in real time;
the global path planning module is used for determining the position coordinates of the wheeled robot in a network map in real time according to the set target location and the acquired geographic position information of the wheeled robot, and planning and determining the urban road navigation path of the wheeled robot reaching the target location in the network map;
the local path planning module is used for carrying out planar two-dimensional processing on the regional three-dimensional scene map obtained by the three-dimensional scene reconstruction module to obtain a corresponding regional two-dimensional scene map used for presenting the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located, further determining the position state information of the wheeled robot in the regional two-dimensional scene map, and establishing the corresponding relation of the position coordinates of the wheeled robot in the network map and the regional two-dimensional scene map according to the position coordinates of the wheeled robot in the network map, so that the local obstacle avoidance path optimization processing is carried out on the urban road navigation path of the wheeled robot in the network map reaching the target location position according to the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located in the two-dimensional scene map;
and the robot control instruction generating module is used for generating a corresponding robot operation control instruction according to the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map and the urban road navigation path of the wheeled robot reaching the target location position in the network map after the optimization processing of the local obstacle avoidance path.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as a preferred scheme, the processing process of the visual SLAM processing module is as follows:
reading the visual depth image, calculating and determining the displacement and posture variation of the wheeled robot through instant positioning and map construction processing according to the motion variation condition of the area corresponding to each two adjacent frames of images in the visual depth image, determining the relative displacement of the area path feature presented by each two adjacent frames of images by combining the displacement and posture variation of the wheeled robot corresponding to each two adjacent frames of images based on the area path feature presented by each two adjacent frames of images, thereby incrementally and continuously updating and establishing an area path feature map according to each frame of image in the visual depth image, and determining the position state information and the posture information of the wheeled robot in the area path feature map in real time.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as a preferred scheme, the processing process of the three-dimensional scene reconstruction module is as follows:
reading the visual depth image, respectively carrying out three-dimensional feature point identification processing on each frame image in the visual depth image in real time, marking out the three-dimensional feature point coordinates of the environmental object presented in each frame image, obtaining the corresponding three-dimensional occupation feature point coordinate set of the environmental object, thereby establishing a regional three-dimensional scene map corresponding to the regional path feature map according to the respective three-dimensional occupation feature point coordinate set of the environmental object corresponding to each frame image based on the regional path feature map processed by the visual SLAM processing module and the position state information and posture information of the robot in the regional path feature map, and presenting the three-dimensional occupation condition of the environmental object in the position region where the wheeled robot is located through the three-dimensional feature point coordinates of the environmental object in the regional three-dimensional scene map, and determining the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map in real time.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as an optimal scheme, when the global path planning module plans and determines the city road navigation path of the wheeled robot reaching the target location position in the network map, the city road navigation path reaching the target location position, which can be smoothly passed by the wheeled robot and has the shortest path under the condition that the average speed per hour can reach the preset average speed per hour threshold value, is planned by acquiring the city road traffic width information and the real-time traffic road condition information in the network map and combining the three-dimensional size information of the wheeled robot.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as a preferred scheme, when the local path planning module carries out local obstacle avoidance path optimization processing on the urban road navigation path of the wheeled robot reaching the target location position in the network map according to the two-dimensional plane occupation condition of the environmental object in the position area where the wheeled robot is located in the two-dimensional scene map, the local path planning module calculates the passing size among environment objects according to the two-dimensional plane occupation condition of the environment objects in the position area where the wheeled robot is located in the two-dimensional scene map, plans a local obstacle avoidance passing path which can be smoothly passed by the wheeled robot and has a smooth transition route by combining the two-dimensional plane size information of the wheeled robot and the urban road navigation path where the wheeled robot reaches the target position in the network map, and further carries out local obstacle avoidance path optimization processing on the urban road navigation path.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as a preferred scheme, the local path planning module is further configured to perform correction processing of accumulated errors on the position coordinates of the wheeled robot in the network map and the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map according to the corresponding relationship between the position coordinates of the wheeled robot in the network map and the regional two-dimensional scene map.
In the robot city navigation system based on the combination of the visual SLAM and the network map, as a preferred scheme, the data communication processing module is further configured to acquire attitude parameter information acquired by an attitude sensor on the wheeled robot; the visual SLAM processing module is further used for correcting the position state information or/and the posture information of the wheeled robot in the regional path characteristic map according to the acquired posture parameter information of the wheeled robot.
Compared with the prior art, the invention has the following beneficial effects:
1. The robot city navigation system based on the combination of visual SLAM and a network map according to the invention is developed on the basis of a wheeled robot equipped with a binocular camera for acquiring visual depth images and a positioning sensor for acquiring geographic position information. By combining visual SLAM with the network map, and on the basis of using global path planning to determine in the network map the urban road navigation path by which the wheeled robot reaches the target location, the system further optimizes that navigation path using the spatial occupation of environmental objects in the robot's current area obtained through visual SLAM processing. This compensates for the navigation failures that global path planning suffers when the prior map information is not accurate enough or obstacles in the environment keep moving, and for the limited navigation capability of GPS-based geographic positioning, which cannot provide positioning at the precision of the robot's passable clearance or information about nearby obstacles in indoor or obstacle-dense environments. The system can therefore provide the wheeled robot with more detailed positioning and path navigation, better ensure that it passes smoothly along the urban road navigation path optimized by local obstacle avoidance, and thus better guarantee the accuracy and traffic effectiveness of its path planning and navigation.
2. In the robot city navigation system based on the combination of visual SLAM and a network map, when the global path planning module plans and determines the urban road navigation path by which the wheeled robot reaches the target location in the network map, an urban path-cost planning mode can be adopted, so that the robot travels a route that is not necessarily the shortest but is the most convenient and most favourable for passage, which better ensures the effectiveness of the robot's path planning and navigation.
3. In the robot city navigation system based on the combination of visual SLAM and a network map, the local obstacle-avoidance path optimization of the urban road navigation path produced by global path planning is performed on a regional two-dimensional scene map obtained by planar two-dimensional compression of the regional three-dimensional scene map, which reduces the computational load of the path optimization and better ensures the real-time performance of the robot's navigation path planning.
Drawings
Fig. 1 is a schematic diagram of the system architecture of the robot city navigation system based on the combination of visual SLAM and a network map.
Fig. 2 is a schematic diagram of a hardware design framework for a specific application implementation of the robot city navigation system based on the combination of visual SLAM and a network map.
Detailed Description
The robot city navigation system based on the combination of the visual SLAM and the network map is further described with reference to the accompanying drawings and the detailed description.
As shown in fig. 1, the invention provides a robot city navigation system based on combination of a visual SLAM and a network map, which comprises a data communication processing module, a visual SLAM processing module, a three-dimensional scene reconstruction module, a global path planning module, a local path planning module and a robot control instruction generation module from the aspect of an overall system architecture.
The data communication processing module is used for acquiring visual depth images acquired through a binocular camera on the wheeled robot and geographic position information acquired through a positioning sensor on the wheeled robot.
The visual SLAM processing module is used for carrying out real-time positioning and map construction processing (namely visual SLAM operation processing) on the acquired visual depth image in real time, so that a regional path characteristic map is incrementally established according to the change condition of a region corresponding to the visual depth image, and the position state information and the posture information of the wheeled robot in the regional path characteristic map are determined in real time.
The three-dimensional scene reconstruction module is used for extracting three-dimensional occupation information of the environmental object from the acquired visual depth image in real time, incrementally reconstructing and establishing a regional three-dimensional scene map corresponding to the regional path feature map according to the change condition of the region corresponding to the visual depth image based on the regional path feature map obtained by the processing of the visual SLAM processing module and the position state information and the posture information of the wheeled robot in the regional path feature map, presenting the three-dimensional occupation condition of the environmental object in the position area where the wheeled robot is located, and determining the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map in real time.
The global path planning module is used for determining the position coordinates of the wheeled robot in a network map in real time according to the set target location and the acquired geographic position information of the wheeled robot, and planning and determining the urban road navigation path of the wheeled robot reaching the target location in the network map.
The local path planning module is used for carrying out planar two-dimensional processing on the regional three-dimensional scene map obtained by the three-dimensional scene reconstruction module to obtain a corresponding regional two-dimensional scene map used for presenting the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located, further determining the position state information of the wheeled robot in the regional two-dimensional scene map, and establishing the corresponding relation of the position coordinates of the wheeled robot in the network map and the regional two-dimensional scene map according to the position coordinates of the wheeled robot in the network map, so that the local obstacle avoidance path optimization processing is carried out on the urban road navigation path of the wheeled robot in the network map reaching the target place position according to the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located in the two-dimensional scene map.
And the robot control instruction generating module is used for generating a corresponding robot operation control instruction according to the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map and the urban road navigation path of the wheeled robot reaching the target location position in the network map after the optimization processing of the local obstacle avoidance path.
The robot city navigation system based on the combination of visual SLAM and a network map according to the invention is developed on the basis of a wheeled robot equipped with a binocular camera for acquiring visual depth images and a positioning sensor for acquiring geographic position information. The overall design idea of the system is as follows. The visual depth images acquired by the binocular camera on the wheeled robot are processed by visual SLAM to incrementally build a regional path feature map in real time, so that the robot localizes itself autonomously from position estimates while moving and its position state information and posture information in the regional path feature map are determined in real time. Three-dimensional occupation information of environmental objects is then extracted from the visual depth images and incrementally reconstructed into a regional three-dimensional scene map corresponding to the regional path feature map, presenting the three-dimensional occupation of environmental objects in the area where the robot is located. Using the geographic position information acquired by the positioning sensor on the robot, the urban road navigation path from the robot's position coordinates to the target location is planned and determined in the network map. To reduce the computational load of path optimization, the regional three-dimensional scene map is flattened into a regional two-dimensional scene map that presents the two-dimensional plane occupation of environmental objects in the robot's area, and this map is used, through the established coordinate correspondence, to perform local obstacle-avoidance optimization of the robot's urban road navigation path in the network map. Finally, according to the urban road navigation path after local obstacle-avoidance optimization, combined with the robot's position state information and posture information in the regional three-dimensional scene map, corresponding robot operation control instructions are generated to perform navigation control of the wheeled robot.
Therefore, by combining visual SLAM with the network map, and on the basis of using global path planning to determine in the network map the urban road navigation path by which the wheeled robot reaches the target location, the system further optimizes that path using the spatial occupation of environmental objects in the robot's area obtained through visual SLAM processing. This makes up for the shortcomings of global path planning, whose prior map information is not accurate enough and which fails when obstacles in the environment keep moving, and of GPS-based geographic positioning, which cannot provide positioning at the precision of the robot's passable clearance or information about nearby obstacles in indoor or obstacle-dense environments and therefore limits navigation capability. The wheeled robot can thus be given more detailed positioning and path navigation and be better ensured to pass smoothly along the urban road navigation path after local obstacle-avoidance optimization; the accuracy and traffic effectiveness of its path planning and navigation are better guaranteed, and a new navigation control solution is provided for applications such as urban street navigation of wheeled robots and wheeled-robot automatic driving.
In the robot city navigation system based on the combination of visual SLAM and a network map, as shown in Fig. 2, the hardware can be implemented as a three-layer structure consisting of an upper computer, a data processing board and a mechanical driving board. The data processing board and the mechanical driving board are mounted on the wheeled robot, and a wireless communication connection can be established between the upper computer and the robot through a mobile cellular network (for example 4G-LTE), a Wi-Fi network or a similar wireless link. The data processing board mainly comprises an embedded development board and a binocular camera; for example, an NVIDIA Jetson TX2 embedded development board and a ZED binocular camera can be used. Its main tasks are to collect visual depth images with the core processing unit of the embedded board, perform the trajectory tracking and three-dimensional reconstruction of visual SLAM, and carry out operations such as local obstacle-avoidance path optimization according to the surrounding environment. The mechanical driving board can be built around a microprocessor development board, for example an STM32F4 microcontroller development board from STMicroelectronics. It exchanges data with the operating mechanism of the wheeled robot, which can be designed to include a driving motor, a GPS geographic positioning module and an attitude detection module (for example a 9-axis combination of accelerometer, gyroscope and geomagnetic sensor); it communicates with the data processing board through Bluetooth or a similar link, uploads the collected operating state information, obtains the data resources the system needs, and generates operation control instructions to drive the robot and handle tasks such as power management. The upper computer mainly provides a convenient operating interface for the user: it displays the robot's position coordinates on the network map in real time, presents the city street map, shows the images captured in real time, and provides global path planning based on the network map. It can be connected to the data processing board through a wired or wireless communication network; for example, the upper-computer server can use a fixed IP address, and the data processing board establishes the connection by requesting access to that IP. From the software point of view, the system can be divided into a client side and a server side. The client side is the local software installed on the wheeled robot; it handles real-time image processing, sensor data processing, the visual SLAM process, three-dimensional scene reconstruction, local obstacle-avoidance path optimization, robot motion control and similar tasks.
The server side is the software installed on the upper computer. It mainly provides the upper computer's remote operating interface, so that a user at the far end can set a destination for the robot and plan a global path based on the network map; it can also be designed to share in real time the visual depth image field of view of the wheeled robot and the corresponding GPS geographic position information, and to mark the position coordinates and present the urban road navigation path on the network map.
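As a purely illustrative sketch (not part of the patent) of the client-to-server link described above, the fragment below shows how the data processing board might push a status message to the fixed-IP upper-computer server over TCP; the IP address, port number and JSON message layout are all assumptions chosen for the example.

```python
import json
import socket

SERVER_IP = "203.0.113.10"   # hypothetical fixed IP of the upper computer
SERVER_PORT = 9000           # hypothetical service port

def send_status(latitude: float, longitude: float, heading_deg: float) -> None:
    """Send one newline-delimited JSON status frame to the upper computer."""
    message = json.dumps({"lat": latitude,
                          "lon": longitude,
                          "heading": heading_deg}).encode("utf-8")
    with socket.create_connection((SERVER_IP, SERVER_PORT), timeout=5.0) as sock:
        sock.sendall(message + b"\n")

if __name__ == "__main__":
    send_status(29.5630, 106.5516, 87.4)   # example coordinates for illustration
```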
In the robot city navigation system based on the combination of visual SLAM and a network map, the visual SLAM processing module works as follows. It reads the visual depth images and, from the motion change of the area seen between each pair of adjacent frames, calculates the displacement and posture change of the wheeled robot through simultaneous localization and mapping processing. Based on the regional path features presented by each pair of adjacent frames, combined with the robot's displacement and posture change between those frames, it determines the relative displacement of those path features, so that the regional path feature map is incrementally and continuously updated from each frame of the visual depth image while the robot's position state information and posture information in that map are determined in real time. While determining this position and posture information, the visual SLAM processing module can additionally perform loop detection, i.e. recognise scenes of the environment that the wheeled robot has already visited; this is used to trigger the elimination and optimization of the accumulated error of global path planning, removing large-scale accumulated error and further improving navigation accuracy.
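The incremental pose bookkeeping described above can be illustrated with the following minimal sketch (an assumption for exposition, not the patent's algorithm): each frame-to-frame estimate of displacement and heading change produced by the visual front end is composed onto the previous pose, so that the robot's position and attitude in the regional path feature map stay current. The increments in the usage example are hard-coded placeholders.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float = 0.0        # metres, map frame
    y: float = 0.0
    theta: float = 0.0    # radians, heading

def compose(pose: Pose2D, dx: float, dy: float, dtheta: float) -> Pose2D:
    """Apply a robot-frame increment (dx forward, dy left, dtheta turn) to a pose."""
    c, s = math.cos(pose.theta), math.sin(pose.theta)
    new_theta = pose.theta + dtheta
    return Pose2D(x=pose.x + c * dx - s * dy,
                  y=pose.y + s * dx + c * dy,
                  theta=math.atan2(math.sin(new_theta), math.cos(new_theta)))

# Usage: feed in per-frame increments produced by the visual front end.
pose = Pose2D()
for dx, dy, dth in [(0.10, 0.0, 0.00), (0.10, 0.0, 0.05), (0.12, 0.0, 0.05)]:
    pose = compose(pose, dx, dy, dth)
print(pose)
```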
In the robot city navigation system based on the combination of visual SLAM and a network map, the three-dimensional scene reconstruction module builds on the output of the visual SLAM processing module as follows. It reads the visual depth images and performs three-dimensional feature point identification on each frame in real time, marking the three-dimensional feature point coordinates of the environmental objects presented in that frame and obtaining the corresponding set of three-dimensional occupancy feature point coordinates. Based on the regional path feature map produced by the visual SLAM processing module and the robot's position state information and posture information in that map, it then builds, from the per-frame occupancy feature point sets, a regional three-dimensional scene map corresponding to the regional path feature map. The three-dimensional feature point coordinates of the environmental objects in this map present their three-dimensional occupation of the area where the wheeled robot is located, and the robot's position state information and posture information in the regional three-dimensional scene map are determined in real time.
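The accumulation of per-frame feature points into the regional three-dimensional scene map can be illustrated with the following sketch (an assumption for exposition, not the patent's implementation): the 3D feature points detected in the camera frame are transformed by the current camera pose and appended to the map's point set. The pose and points in the usage example are dummy values.

```python
import numpy as np

class Scene3DMap:
    def __init__(self) -> None:
        self.points = np.empty((0, 3))          # occupied feature points, map frame

    def integrate(self, pts_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> None:
        """pts_cam: Nx3 points in the camera frame; R (3x3), t (3,) give the camera pose."""
        pts_map = pts_cam @ R.T + t             # rigid transform into the map frame
        self.points = np.vstack([self.points, pts_map])

# Usage with a dummy frame: identity pose, three feature points about 2 m ahead.
scene = Scene3DMap()
scene.integrate(np.array([[2.0, -0.3, 0.5], [2.0, 0.0, 0.5], [2.0, 0.3, 0.5]]),
                R=np.eye(3), t=np.zeros(3))
print(scene.points.shape)   # (3, 3)
```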
In the robot city navigation system based on the combination of visual SLAM and a network map, the global path planning module mainly performs global planning of the urban road navigation path using the network map and the robot's geographic position information. In practical application, any mature planning scheme from the prior art may be used, such as shortest-path planning, fastest-route planning, or planning that avoids or prioritizes specific routes. As a preferred option, however, when the global path planning module plans the urban road navigation path by which the wheeled robot reaches the target location in the network map, it can adopt the urban path-cost planning mode proposed by the invention: by acquiring the road traffic-width information and real-time traffic road condition information in the network map and combining them with the robot's three-dimensional size, it plans the navigation path to the target location that the robot can pass smoothly and that is shortest among the routes whose average speed can reach a preset average-speed threshold. A path determined by this urban path-cost planning mode is not necessarily the shortest, but it is the most convenient for the robot to travel and the most favourable for passage.
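The urban path-cost idea can be illustrated with the following sketch, which is an assumption for exposition rather than the patent's planner: road segments narrower than the robot are excluded, each remaining segment is costed, and a standard Dijkstra search returns the cheapest passable route. The road data, robot width and the cost formula (traversal time, a simplification of the patent's average-speed criterion) are all placeholders.

```python
import heapq

ROBOT_WIDTH_M = 0.8   # hypothetical robot width

# (from, to, length_m, road_width_m, avg_speed_mps) - illustrative road segments
ROADS = [("A", "B", 300, 3.0, 2.0), ("B", "D", 400, 3.5, 2.5),
         ("A", "C", 250, 0.6, 2.0),   # too narrow for the robot: excluded
         ("C", "D", 200, 3.0, 0.5)]   # congested: low average speed

def plan(start: str, goal: str) -> list[str]:
    graph: dict[str, list[tuple[str, float]]] = {}
    for u, v, length, width, speed in ROADS:
        if width < ROBOT_WIDTH_M:
            continue                                  # robot cannot pass this road
        cost = length / speed                         # traversal time in seconds
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    frontier = [(0.0, start, [start])]
    seen: set[str] = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return []

print(plan("A", "D"))   # -> ['A', 'B', 'D']
```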
In the robot city navigation system based on the combination of visual SLAM and a network map, the processing task of the local path planning module falls into two parts. The first part is the planar flattening of the complex regional three-dimensional scene map. Because a wheeled robot cannot move vertically while travelling on an urban road, local obstacle-avoidance optimization does not need the longitudinal height of obstacle objects, only their two-dimensional plane occupation; processing local path planning against the full three-dimensional environment information would waste computing resources. Through planar flattening, the regional three-dimensional scene map is in effect compressed into a regional two-dimensional scene map, reducing the computational load of path optimization. If the load is to be reduced further, the two-dimensional scene map can additionally be compressed by rasterization: the two-dimensional plane occupation of the environmental objects is gridded, and each grid cell serves as the basic unit of measure of an object's plan-view footprint and as the quantization basis for obstacle avoidance in the subsequent local path optimization, which reduces the processing load even more. Reducing the computation in this way better ensures the real-time performance of the robot's navigation path planning. The second part is the local obstacle-avoidance optimization itself, performed on the urban road navigation path by which the robot reaches the target location in the network map, according to the two-dimensional plane occupation of environmental objects in the robot's area of the two-dimensional scene map. Specifically, the scheme of the invention calculates the passable clearance between environmental objects from their plane occupation in the two-dimensional scene map and, combining the robot's two-dimensional plane size with the urban road navigation path in the network map, plans a local obstacle-avoidance path that the robot can pass smoothly and whose transitions are smooth, and then uses it to optimize the urban road navigation path. Optimizing for a smoothly passable local path with gentle transitions matters for the robot's mechanical operation: a smoother transition path means fewer running-state change instructions need to be sent to the robot's motion control, which means higher control efficiency and fewer problems such as delay errors caused by issuing many instructions.
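The planar flattening and rasterization described above can be illustrated with the following sketch (assumptions only, not the patent's code): the map-frame 3D feature points are projected onto the ground plane and rasterised into an occupancy grid, and a simple clearance check decides whether the widest free gap in a grid row fits the robot's width. The grid resolution, window size, robot width and test points are placeholder values.

```python
import numpy as np

CELL_M = 0.1          # hypothetical grid resolution (metres per cell)
GRID_SIZE = 100       # 10 m x 10 m local window centred on the robot

def to_occupancy_grid(points_3d: np.ndarray) -> np.ndarray:
    """points_3d: Nx3 map-frame points; returns a GRID_SIZE x GRID_SIZE bool grid."""
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
    ij = np.floor(points_3d[:, :2] / CELL_M).astype(int) + GRID_SIZE // 2
    keep = (ij >= 0).all(axis=1) & (ij < GRID_SIZE).all(axis=1)
    grid[ij[keep, 0], ij[keep, 1]] = True        # drop height, keep plan-view footprint
    return grid

def gap_is_passable(grid_row: np.ndarray, robot_width_m: float) -> bool:
    """Check whether the widest free run of cells in one grid row fits the robot."""
    free, widest, run = ~grid_row, 0, 0
    for cell in free:
        run = run + 1 if cell else 0
        widest = max(widest, run)
    return widest * CELL_M >= robot_width_m

grid = to_occupancy_grid(np.array([[1.0, -0.6, 0.4], [1.0, 0.7, 1.2]]))
print(gap_is_passable(grid[GRID_SIZE // 2 + 10], robot_width_m=0.8))
```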
In addition, in the robot city navigation system based on the combination of visual SLAM and a network map, a further optimization concerns error correction. The local path planning module establishes the correspondence between the robot's position coordinates in the network map and the regional two-dimensional scene map, while the robot's position state information and posture information in that scene map are determined through the real-time localization of visual SLAM. Since the robot's displacement and posture change are computed from the motion change between each pair of adjacent frames of the visual depth image, these estimates can be corrected continuously and repeatedly as the images keep changing, which reduces but does not eliminate accumulated error. To further improve the accuracy of this correction, the local path planning module can therefore also be configured to use the correspondence between the robot's position coordinates in the network map and the regional two-dimensional scene map to apply accumulated-error correction to the robot's position coordinates in the network map and to its position state information and posture information in the regional three-dimensional scene map, so that short-term accumulated error in the robot's pose is eliminated more accurately. Furthermore, if the wheeled robot carries an attitude sensor for acquiring attitude parameter information (for example an Inertial Measurement Unit, IMU), the data communication processing module can also be designed to acquire that information, and the visual SLAM processing module can correspondingly be designed to correct the robot's position state information or/and posture information in the regional path feature map according to it, further eliminating short-term accumulated attitude error. If, in addition, the visual SLAM processing module performs the loop-detection-based elimination and optimization of accumulated error in the global path planning described above, the path optimization precision and navigation control precision of the robot's urban navigation can be improved still further.
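The accumulated-error correction described above can be illustrated with the following minimal sketch (an assumption, not the patent's correction scheme): the visually estimated position is nudged toward the network-map (GPS) coordinate and the heading is blended with the attitude-sensor reading using simple complementary-filter gains; the gain values and the example numbers are illustrative only.

```python
def correct_pose(vis_x: float, vis_y: float, vis_heading: float,
                 gps_x: float, gps_y: float, imu_heading: float,
                 pos_gain: float = 0.05, heading_gain: float = 0.1):
    """Return a corrected (x, y, heading); gains are illustrative tuning values."""
    x = vis_x + pos_gain * (gps_x - vis_x)            # pull position toward the map fix
    y = vis_y + pos_gain * (gps_y - vis_y)
    heading = vis_heading + heading_gain * (imu_heading - vis_heading)  # blend headings
    return x, y, heading

# Usage: the visual estimate has drifted 0.5 m and about 3 degrees from the references.
print(correct_pose(10.5, 4.0, 1.00, 10.0, 4.0, 1.05))
```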
In summary, the robot city navigation system based on the combination of visual SLAM and a network map, on the basis of using global path planning to determine in the network map the urban road navigation path by which the wheeled robot reaches the target location, performs further local obstacle-avoidance optimization of that path using the spatial occupation of environmental objects in the robot's area obtained through visual SLAM processing. The combination of visual SLAM and the network map thus makes up for the many technical limitations and deficiencies of navigation control that relies on global path planning alone; it provides the wheeled robot with more detailed positioning and path navigation, better ensures that the robot can pass smoothly along the urban road navigation path after local obstacle-avoidance optimization, and thereby better guarantees the accuracy and traffic effectiveness of its path planning and navigation, offering a new navigation control solution for applications such as urban street navigation of wheeled robots and wheeled-robot automatic driving.
Finally, the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope, and such modifications should be covered by the claims of the present invention.

Claims (7)

1. The robot city navigation system based on the combination of the visual SLAM and the network map is characterized by comprising a data communication processing module, a visual SLAM processing module, a three-dimensional scene reconstruction module, a global path planning module, a local path planning module and a robot control instruction generation module;
the data communication processing module is used for acquiring a visual depth image acquired by a binocular camera on the wheeled robot and geographical position information acquired by a positioning sensor on the wheeled robot;
the visual SLAM processing module is used for carrying out real-time positioning and map construction processing on the acquired visual depth image in real time, so that a regional path characteristic map is incrementally established according to the change condition of a region corresponding to the visual depth image, and the position state information and the posture information of the wheeled robot in the regional path characteristic map are determined in real time;
the three-dimensional scene reconstruction module is used for extracting three-dimensional occupation information of an environmental object from the acquired visual depth image in real time, incrementally reconstructing and establishing an area three-dimensional scene map corresponding to the area path feature map according to the change condition of the area corresponding to the visual depth image based on the area path feature map obtained by the processing of the visual SLAM processing module and the position state information and the posture information of the wheeled robot in the area path feature map, presenting the three-dimensional occupation condition of the environmental object in the position area where the wheeled robot is located, and determining the position state information and the posture information of the wheeled robot in the area three-dimensional scene map in real time;
the global path planning module is used for determining the position coordinates of the wheeled robot in a network map in real time according to the set target location and the acquired geographic position information of the wheeled robot, and planning and determining the urban road navigation path of the wheeled robot reaching the target location in the network map;
the local path planning module is used for carrying out planar two-dimensional processing on the regional three-dimensional scene map obtained by the three-dimensional scene reconstruction module to obtain a corresponding regional two-dimensional scene map used for presenting the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located, further determining the position state information of the wheeled robot in the regional two-dimensional scene map, and establishing the corresponding relation of the position coordinates of the wheeled robot in the network map and the regional two-dimensional scene map according to the position coordinates of the wheeled robot in the network map, so that the local obstacle avoidance path optimization processing is carried out on the urban road navigation path of the wheeled robot in the network map reaching the target location position according to the occupation condition of the two-dimensional plane of the environmental object in the position region where the wheeled robot is located in the two-dimensional scene map;
and the robot control instruction generating module is used for generating a corresponding robot operation control instruction according to the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map and the urban road navigation path of the wheeled robot reaching the target location position in the network map after the optimization processing of the local obstacle avoidance path.
2. The robot city navigation system based on the combination of the visual SLAM and the network map as claimed in claim 1, wherein the processing procedure of the visual SLAM processing module is as follows:
reading the visual depth image, calculating and determining the displacement and posture variation of the wheeled robot through instant positioning and map construction processing according to the motion variation condition of the area corresponding to each two adjacent frames of images in the visual depth image, determining the relative displacement of the area path feature presented by each two adjacent frames of images by combining the displacement and posture variation of the wheeled robot corresponding to each two adjacent frames of images based on the area path feature presented by each two adjacent frames of images, thereby incrementally and continuously updating and establishing an area path feature map according to each frame of image in the visual depth image, and determining the position state information and the posture information of the wheeled robot in the area path feature map in real time.
3. The robot city navigation system based on the combination of the visual SLAM and the network map as claimed in claim 1, wherein the processing procedure of the three-dimensional scene reconstruction module is as follows:
reading the visual depth image, respectively carrying out three-dimensional feature point identification processing on each frame image in the visual depth image in real time, marking out the three-dimensional feature point coordinates of the environmental object presented in each frame image, obtaining the corresponding three-dimensional occupation feature point coordinate set of the environmental object, thereby establishing a regional three-dimensional scene map corresponding to the regional path feature map according to the respective three-dimensional occupation feature point coordinate set of the environmental object corresponding to each frame image based on the regional path feature map processed by the visual SLAM processing module and the position state information and posture information of the robot in the regional path feature map, and presenting the three-dimensional occupation condition of the environmental object in the position region where the wheeled robot is located through the three-dimensional feature point coordinates of the environmental object in the regional three-dimensional scene map, and determining the position state information and the posture information of the wheeled robot in the regional three-dimensional scene map in real time.
4. The robot city navigation system based on the combination of the visual SLAM and the network map as claimed in claim 1, wherein when the global path planning module plans and determines the city road navigation path of the wheeled robot reaching the target location position in the network map, it plans, by acquiring the city road passing width information and the real-time traffic road condition information in the network map and combining the three-dimensional size information of the wheeled robot, the city road navigation path reaching the target location position which can be smoothly passed by the wheeled robot and which has the shortest path under the condition that the average speed per hour can reach the predetermined average speed per hour threshold value.
5. The robot city navigation system based on the combination of the visual SLAM and the network map as claimed in claim 1, wherein when the local path planning module carries out local obstacle avoidance path optimization processing on the urban road navigation path of the wheeled robot reaching the target location position in the network map according to the two-dimensional plane occupation condition of the environmental object in the position area where the wheeled robot is located in the two-dimensional scene map, it calculates the passing size among environment objects according to the two-dimensional plane occupation condition of the environment objects in the position area where the wheeled robot is located in the two-dimensional scene map, plans a local obstacle avoidance passing path which can be smoothly passed by the wheeled robot and has a smooth transition route by combining the two-dimensional plane size information of the wheeled robot and the urban road navigation path where the wheeled robot reaches the target position in the network map, and further carries out local obstacle avoidance path optimization processing on the urban road navigation path.
6. The robot city navigation system based on the combination of the visual SLAM and the network map as claimed in claim 1, wherein the local path planning module is further configured to correct accumulated errors in the position coordinates of the wheeled robot in the network map and in the position state information and posture information of the wheeled robot in the regional three-dimensional scene map, according to the correspondence between the position coordinates of the wheeled robot in the network map and the regional two-dimensional scene map.
7. The robot city navigation system based on the combination of visual SLAM and network map as claimed in claim 1, wherein the data communication processing module is further configured to obtain attitude parameter information collected by an attitude sensor on the wheeled robot;
the visual SLAM processing module is further configured to correct the position state information and/or posture information of the wheeled robot in the regional path feature map according to the acquired attitude parameter information of the wheeled robot.
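
The sketches that follow are editorial illustrations added for clarity and are not part of the claims. This first one, in Python, loosely mirrors the frame-to-frame processing attributed to the visual SLAM processing module in claim 2: a body-frame motion estimate between adjacent depth frames is composed onto the robot pose, and the path features observed in each frame are transformed into the map frame and appended to the regional path feature map. The planar (x, y, theta) pose and the callbacks estimate_motion and extract_features are simplifying assumptions, not elements of the patent.

```python
import math

def update_pose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a frame-to-frame motion
    estimate (dx, dy, dtheta) expressed in the robot body frame."""
    x, y, theta = pose
    dx, dy, dtheta = delta
    # Rotate the body-frame displacement into the map frame, then translate.
    x += dx * math.cos(theta) - dy * math.sin(theta)
    y += dx * math.sin(theta) + dy * math.cos(theta)
    theta = (theta + dtheta + math.pi) % (2 * math.pi) - math.pi
    return (x, y, theta)

def build_path_feature_map(frames, estimate_motion, extract_features):
    """Incrementally build a regional path feature map.

    frames           : iterable of depth frames
    estimate_motion  : hypothetical callback returning (dx, dy, dtheta)
                       between two adjacent frames
    extract_features : hypothetical callback returning body-frame (x, y)
                       path feature points observed in one frame
    """
    pose = (0.0, 0.0, 0.0)          # robot pose in the regional map
    feature_map = []                # accumulated path feature points
    prev = None
    for frame in frames:
        if prev is not None:
            pose = update_pose(pose, estimate_motion(prev, frame))
        x, y, theta = pose
        for fx, fy in extract_features(frame):
            # Transform each feature from the robot frame into the map frame.
            mx = x + fx * math.cos(theta) - fy * math.sin(theta)
            my = y + fx * math.sin(theta) + fy * math.cos(theta)
            feature_map.append((mx, my))
        prev = frame
    return feature_map, pose
```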
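A minimal sketch of the per-frame three-dimensional feature extraction described in claim 3, assuming a pinhole depth camera with intrinsics (fx, fy, cx, cy) and a 4x4 camera-to-map pose supplied by the SLAM module; the patent does not specify the camera model or the pixel subsampling used here.

```python
import numpy as np

def depth_frame_to_points(depth, fx, fy, cx, cy, stride=8):
    """Back-project a depth image (metres) into camera-frame 3-D points
    using a pinhole model; stride subsamples pixels to keep the set small."""
    v, u = np.mgrid[0:depth.shape[0]:stride, 0:depth.shape[1]:stride]
    z = depth[v, u]
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)

def accumulate_scene_map(points_cam, T_map_cam):
    """Transform camera-frame points into the regional 3-D scene map frame
    with a 4x4 homogeneous pose T_map_cam reported by the SLAM module."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_map_cam @ homo.T).T[:, :3]
```

Feeding each frame's points through accumulate_scene_map with the pose reported by the SLAM module yields a growing point set that stands in, in this sketch, for the regional three-dimensional scene map.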
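For claim 4, a hedged sketch of global planning as a shortest-path search over a road graph whose edges carry length, passable width, and average traffic speed; edges narrower than the robot or slower than the required average speed are pruned. The graph encoding and the Dijkstra search are illustrative choices, not the claimed method.

```python
import heapq

def plan_global_path(graph, start, goal, robot_width, min_avg_speed):
    """Dijkstra over a road graph: graph[node][next] = (length_m,
    passable_width_m, avg_speed_kmh). Roads too narrow for the robot or
    slower than the required average speed are skipped."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, (length, width, speed) in graph.get(node, {}).items():
            if width < robot_width or speed < min_avg_speed:
                continue                      # road not usable by the robot
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if goal not in dist:
        return None                           # no admissible road path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

A call such as plan_global_path(graph, "A", "D", robot_width=0.8, min_avg_speed=10) would return the node sequence of the shortest admissible route, or None if every route is blocked by a width or speed constraint.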
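For claim 5, a toy illustration of computing passable clearances between environmental objects from their two-dimensional occupancy: obstacles are reduced here to lateral intervals across the roadway, and gaps at least as wide as the robot plus a safety margin are returned as candidate local detours.

```python
def passable_gaps(obstacle_intervals, road_width, robot_width, margin=0.1):
    """From the 2-D occupancy of environmental objects, reduced here to
    lateral (left, right) intervals in metres from the left road edge,
    return the gaps wide enough for the robot to pass with a margin."""
    gaps, cursor = [], 0.0
    for left, right in sorted(obstacle_intervals):
        if left - cursor >= robot_width + margin:
            gaps.append((cursor, left))
        cursor = max(cursor, right)
    if road_width - cursor >= robot_width + margin:
        gaps.append((cursor, road_width))
    return gaps
```

Choosing the gap closest to the planned lane and routing an intermediate waypoint through its centre would then stand in for the smooth local detour that the claim describes; the smoothing of the transition route is omitted from this sketch.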
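For claim 6, a deliberately simple first-order correction of accumulated drift: the robot position kept in the regional map is mapped into network-map coordinates and nudged toward the position reported by the network map. The claim does not specify the correction scheme; map_to_network and the blend factor are assumptions.

```python
def correct_drift(slam_position, network_position, map_to_network, blend=0.2):
    """Nudge the regional-map position (converted into network-map
    coordinates by map_to_network) toward the network-map position to bound
    accumulated error; blend in (0, 1] sets the correction rate."""
    px, py = map_to_network(slam_position)
    nx, ny = network_position
    return (px + blend * (nx - px), py + blend * (ny - py))
```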
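For claim 7, a complementary-filter style blend of the SLAM heading with the heading reported by the attitude sensor, as one plausible way to apply the correction; the claim itself does not prescribe a fusion method.

```python
import math

def fuse_heading(slam_heading, imu_heading, gain=0.98):
    """Blend the SLAM heading with the attitude-sensor heading (radians);
    a gain close to 1 trusts the SLAM estimate more."""
    # Wrap the innovation to (-pi, pi] before applying the correction.
    err = math.atan2(math.sin(imu_heading - slam_heading),
                     math.cos(imu_heading - slam_heading))
    return slam_heading + (1.0 - gain) * err
```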
CN201810764916.2A 2018-07-12 2018-07-12 Robot city navigation system based on combination of visual SLAM and network map Active CN108873908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810764916.2A CN108873908B (en) 2018-07-12 2018-07-12 Robot city navigation system based on combination of visual SLAM and network map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810764916.2A CN108873908B (en) 2018-07-12 2018-07-12 Robot city navigation system based on combination of visual SLAM and network map

Publications (2)

Publication Number Publication Date
CN108873908A CN108873908A (en) 2018-11-23
CN108873908B true CN108873908B (en) 2020-01-24

Family

ID=64301596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810764916.2A Active CN108873908B (en) 2018-07-12 2018-07-12 Robot city navigation system based on combination of visual SLAM and network map

Country Status (1)

Country Link
CN (1) CN108873908B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109375211B (en) * 2018-12-10 2023-03-10 西安电子科技大学 Radar and multi-optical equipment-based mobile unmanned platform target searching method
CN109782766B (en) * 2019-01-25 2023-01-03 北京百度网讯科技有限公司 Method and device for controlling vehicle driving
CN109782768A (en) * 2019-01-26 2019-05-21 哈尔滨玄智科技有限公司 A kind of autonomous navigation system adapting to expert's planetary compound gear train transfer robot
CN111489393B (en) * 2019-01-28 2023-06-02 速感科技(北京)有限公司 VSLAM method, controller and mobile device
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN111735433B (en) * 2019-03-25 2022-05-20 杭州海康威视数字技术股份有限公司 Method and device for establishing two-dimensional map
CN110083158B (en) * 2019-04-28 2022-08-16 深兰科技(上海)有限公司 Method and equipment for determining local planning path
CN110260857A (en) * 2019-07-02 2019-09-20 北京百度网讯科技有限公司 Calibration method, device and the storage medium of vision map
CN110220517A (en) * 2019-07-08 2019-09-10 紫光云技术有限公司 A kind of Indoor Robot robust slam method of the combining environmental meaning of one's words
CN111369688B (en) * 2020-03-11 2023-05-09 暗物智能科技(广州)有限公司 Cognitive navigation method and system for structured scene expression
CN111429791B (en) * 2020-04-09 2022-11-18 浙江大华技术股份有限公司 Identity determination method, identity determination device, storage medium and electronic device
EP4154946A4 (en) * 2020-06-30 2024-02-28 Siemens Aktiengesellschaft Fire extinguishing system, server, fire-fighting robot, and fire extinguishing method
CN111762049A (en) * 2020-07-01 2020-10-13 国网智能科技股份有限公司 Shared electric automobile self-induction dynamic control method and system
CN112629528A (en) * 2020-11-28 2021-04-09 北京瞪羚云智科技有限公司 System for realizing four-legged robot positioning navigation by using visual camera and working method thereof
CN112797991A (en) * 2021-02-09 2021-05-14 中科大路(青岛)科技有限公司 Method and system for generating driving path of unmanned vehicle
CN112998606B (en) * 2021-03-01 2022-04-22 深圳市无限动力发展有限公司 Cooperative sweeping method and device for intelligent equipment and cleaning machine and computer equipment
CN113110466B (en) * 2021-04-22 2021-12-21 深圳市井智高科机器人有限公司 High-sensitivity obstacle avoidance system and method for AGV robot
CN113408784B (en) * 2021-05-18 2024-01-12 华中科技大学 Multi-robot transfer cooperative assembly method and equipment
CN113820697B (en) * 2021-09-09 2024-03-26 中国电子科技集团公司第五十四研究所 Visual positioning method based on city building features and three-dimensional map
CN114137955B (en) * 2021-10-26 2023-04-28 中国人民解放军军事科学院国防科技创新研究院 Multi-robot rapid collaborative mapping method based on improved market method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413313A (en) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on power robot
CN107167139A (en) * 2017-05-24 2017-09-15 广东工业大学 A kind of Intelligent Mobile Robot vision positioning air navigation aid and system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009026177A (en) * 2007-07-23 2009-02-05 Clarion Co Ltd Display control device, display control method and control program
US8473187B2 (en) * 2009-06-01 2013-06-25 Robert Bosch Gmbh Method and apparatus for combining three-dimensional position and two-dimensional intensity mapping for localization
US9179133B2 (en) * 2010-10-19 2015-11-03 Mitsubishi Electric Corporation 3Dimension stereoscopic display device
CN103885443B (en) * 2012-12-20 2017-02-08 联想(北京)有限公司 Device, system and method for simultaneous localization and mapping unit
CN103389104B (en) * 2013-07-17 2015-12-02 北京龙图通信息技术有限公司 A kind of three-dimensional air navigation aid synchronous with two dimensional navigation and device thereof
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
JP6559535B2 (en) * 2015-10-22 2019-08-14 株式会社東芝 Obstacle map generation device, method thereof, and program thereof
CN105843223B (en) * 2016-03-23 2018-11-20 东南大学 A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN107402569B (en) * 2016-05-19 2020-01-21 科沃斯机器人股份有限公司 Self-moving robot, map construction method and combined robot map calling method
CN106767853B (en) * 2016-12-30 2020-01-21 中国科学院合肥物质科学研究院 Unmanned vehicle high-precision positioning method based on multi-information fusion
CN107063256A (en) * 2017-01-23 2017-08-18 斑马信息科技有限公司 Vehicle synchronous builds figure and localization method
CN107203214B (en) * 2017-07-31 2018-03-27 中南大学 A kind of cooperative self-adapted Intelligent planning method in carrying robot COMPLEX MIXED path

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413313A (en) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on power robot
CN107167139A (en) * 2017-05-24 2017-09-15 广东工业大学 A kind of Intelligent Mobile Robot vision positioning air navigation aid and system

Also Published As

Publication number Publication date
CN108873908A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108873908B (en) Robot city navigation system based on combination of visual SLAM and network map
CN106461402B (en) For determining the method and system of the position relative to numerical map
KR102630740B1 (en) Method and system for generating and using location reference data
KR20180079428A (en) Apparatus and method for automatic localization
CN102147260B (en) Electronic map matching method and device
CN103412565B (en) A kind of robot localization method with the quick estimated capacity of global position
CN110617821B (en) Positioning method, positioning device and storage medium
JP6456405B2 (en) Three-dimensional information calculation device, three-dimensional information calculation method, and autonomous mobile device
CN110361027A (en) Robot path planning method based on single line laser radar Yu binocular camera data fusion
CN108917758B (en) Navigation method and system based on AR
US9098088B2 (en) Method for building outdoor map for moving object and apparatus thereof
CN104850134A (en) High-precision autonomous obstacle-avoiding flying method for unmanned plane
JP2022542289A (en) Mapping method, mapping device, electronic device, storage medium and computer program product
CN109541535A (en) A method of AGV indoor positioning and navigation based on UWB and vision SLAM
CN103207634A (en) Data fusion system and method of differential GPS (Global Position System) and inertial navigation in intelligent vehicle
CN103747207A (en) Positioning and tracking method based on video monitor network
JP7245084B2 (en) Autonomous driving system
JP7431320B2 (en) Methods and systems for using digital map data
CN110031880B (en) High-precision augmented reality method and equipment based on geographical position positioning
JP5852645B2 (en) Trajectory correction method, trajectory correction device, and moving body device
CN115049910A (en) Foot type robot mapping and navigation method based on binocular vision odometer
CN112651991B (en) Visual positioning method, device and computer system
CN114719830B (en) Backpack type mobile mapping system and mapping instrument with same
CN113776515B (en) Robot navigation method and device, computer equipment and storage medium
CN111856537B (en) Navigation method and device for automatically driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210511

Address after: 401120 No.1-4, 16th floor, no.6, Yangliu North Road, Yubei District, Chongqing

Patentee after: Chongqing QiTeng Technology Co.,Ltd.

Address before: 400044 No. 174 Sha Jie street, Shapingba District, Chongqing

Patentee before: Chongqing University

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Robot City navigation system based on visual slam and network map

Effective date of registration: 20210618

Granted publication date: 20200124

Pledgee: China Minsheng Banking Corp Chongqing branch

Pledgor: Chongqing QiTeng Technology Co.,Ltd.

Registration number: Y2021500000023

CP01 Change in the name or title of a patent holder

Address after: 401120 No.1-4, 16th floor, no.6, Yangliu North Road, Yubei District, Chongqing

Patentee after: Seven Teng Robot Co.,Ltd.

Address before: 401120 No.1-4, 16th floor, no.6, Yangliu North Road, Yubei District, Chongqing

Patentee before: Chongqing QiTeng Technology Co.,Ltd.

PM01 Change of the registration of the contract for pledge of patent right

Change date: 20221009

Registration number: Y2021500000023

Pledgor after: Seven Teng Robot Co.,Ltd.

Pledgor before: Chongqing QiTeng Technology Co.,Ltd.

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230925

Granted publication date: 20200124

Pledgee: China Minsheng Banking Corp Chongqing branch

Pledgor: Seven Teng Robot Co.,Ltd.

Registration number: Y2021500000023

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Robot City Navigation System Based on the Combination of Visual SLAM and Network Map

Effective date of registration: 20231019

Granted publication date: 20200124

Pledgee: Chongqing Yuzhong Sub branch of China Construction Bank Corp.

Pledgor: Seven Teng Robot Co.,Ltd.

Registration number: Y2023980061902
