CN112784942A - Special color block coding method for positioning navigation in large-scale scene - Google Patents

Special color block coding method for positioning navigation in large-scale scene

Info

Publication number
CN112784942A
Authority
CN
China
Prior art keywords
color
blocks
block
color block
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011593672.XA
Other languages
Chinese (zh)
Other versions
CN112784942B (en)
Inventor
杨国青
冯凯
吕攀
李红
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202011593672.XA priority Critical patent/CN112784942B/en
Publication of CN112784942A publication Critical patent/CN112784942A/en
Application granted granted Critical
Publication of CN112784942B publication Critical patent/CN112784942B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06046 Constructional details
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06187 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with magnetically detectable marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K2019/06215 Aspects not covered by other subgroups
    • G06K2019/06225 Aspects not covered by other subgroups using wavelength selection, e.g. colour code

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a special color block coding method for positioning and navigation in large-scale scenes. Color blocks carrying coding information are designed and attached to markers in the scene according to a designed combination pattern, and a mobile robot identifies the color blocks with a camera to build a map and localize itself. The invention combines the advantages of the feature point method and of traditional landmarks, effectively solves the problems a mobile robot faces in positioning and navigation, and enables mapping and localization in large-scale scenes by observing and identifying the combined color blocks at long range with an ordinary camera. In addition, by introducing the concept of a 'block', color block coding information is fused with magnetometer information, which shortens the color block code length and improves on-site deployment efficiency.

Description

Special color block coding method for positioning navigation in large-scale scene
Technical Field
The invention belongs to the technical field of mobile robot positioning and navigation, and particularly relates to a special color block coding method for positioning and navigation in a large-scale scene.
Background
With the development of AI technology, mobile robots have been widely applied in fields such as unmanned logistics, intelligent factories and smart homes, and the positioning and navigation module is one of the most important and most complex modules in a mobile robot.
Common positioning and navigation technologies can be roughly divided into the feature point method and the landmark method. The feature point method mainly uses computer vision algorithms to collect a series of representative points in the scene as feature points and deduces the pose from the transformation between matched feature points; well-known systems include ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping) and PTAM (Parallel Tracking And Mapping). Because this method acquires feature points from the environment at random, its robustness is poor. For example, feature points are usually extracted by searching for gray-level changes in the image, so when illumination changes markedly, feature matching becomes highly problematic; feature extraction also requires texture information, so in places lacking texture (such as white walls) extraction and matching fail; and because thousands of feature points must be matched in every frame, the required computing power is too high.
The other approach is the landmark method: fixed landmarks in the scene are identified and a map is built from their positions. Because the landmarks are arranged in the scene in advance, the mobile robot has prior information about them, which makes recognition easier and improves recognition accuracy. Each landmark carries ID information, and landmark matching only requires comparing IDs, so the matching accuracy of the landmark method is very high. This greatly improves mapping accuracy and reduces the computational load, but how to design and arrange the landmarks becomes the key problem of the method. The Chinese patent with application number 201510398506.7 proposes a scheme that positions using a combination of two long strip-shaped color blocks; it solves the positioning problem but uses only a two-color-block combination and does not propose a method for distinguishing combinations of multiple color blocks. The method proposed in [Jiménez Serrata, Albert A., Yang S., Li R. An intelligible implementation of FastSLAM2.0 on a low-power embedded architecture [J]. EURASIP Journal on Embedded Systems, 2017(1)], although it uses color blocks for mapping, only builds maps over a small range and does not solve the large-scale scene problem.
Disclosure of Invention
In view of the above, the invention provides a special color block coding method for positioning and navigation in large-scale scenes; by comprehensively considering the advantages of the feature point method and of traditional landmarks, the method can effectively solve the problems the mobile robot faces in positioning and navigation.
A special color block coding method for positioning and navigation in large-scale scenes comprises the following steps:
(1) designing a color block with coding information;
(2) designing a combination mode of color blocks, and pasting the color blocks into a scene according to the designed combination mode;
(3) the mobile robot observes and identifies the color blocks in the scene with its camera, thereby completing mapping and positioning.
Further, the color block designed in step (1) is composed of several squares placed side by side in two colors: a square of one color represents 1 and a square of the other color represents 0, so the color block carries multi-bit binary coding information.
Furthermore, the color block is composed of 7 red and blue squares, each 10 cm long and 10 cm wide; a red square represents 1 and a blue square represents 0, so the whole color block is 70 cm long and 10 cm wide and contains 7 bits of binary coding information.
Red and blue are chosen because they are easy to identify in computer vision and rarely appear on objects in the environment, which reduces noise interference. The mobile robot needs to recognize unique landmarks to achieve mapping and positioning, so unique color block codes are needed to represent different landmarks; a 7-bit binary code can form 128 different color block codes.
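As an illustrative sketch only (not part of the patented scheme's specified implementation), the 7-bit red/blue code described above can be encoded and decoded as follows; the color names, bit order (most significant square first) and function names are assumptions.

```python
# Minimal sketch of the 7-bit red/blue color block code described above.
# Red = 1, blue = 0, most significant square first (the bit order is an assumption).

def encode_color_block(code: int, bits: int = 7) -> list[str]:
    """Turn an integer ID (0..127) into a left-to-right square color sequence."""
    if not 0 <= code < (1 << bits):
        raise ValueError(f"code must fit in {bits} bits")
    return ["red" if (code >> (bits - 1 - i)) & 1 else "blue" for i in range(bits)]

def decode_color_block(colors: list[str]) -> int:
    """Recover the integer ID from a detected left-to-right color sequence."""
    value = 0
    for c in colors:
        value = (value << 1) | (1 if c == "red" else 0)
    return value

if __name__ == "__main__":
    pattern = encode_color_block(0b1000111)   # the code of Fig. 1(a)
    print(pattern)                            # ['red','blue','blue','blue','red','red','red']
    print(bin(decode_color_block(pattern)))   # 0b1000111
```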
Further, step (2) is implemented as follows: color blocks are first attached to each marker in the scene (for example, the support columns in an underground garage), such that four surrounding color blocks can be observed from any position the mobile robot may occupy in the scene; the mobile robot observes the four color blocks nearest to it at the same time, these four color blocks form a block, and the mobile robot then builds the map by identifying blocks rather than individual color blocks.
Furthermore, the color blocks are attached to the markers in the scene in an array arrangement, and four cameras are mounted on the mobile robot facing front, back, left and right, so that the robot can observe its surroundings in all four directions simultaneously.
Furthermore, blocks observed by the mobile robot at different positions in the scene may contain the same four color blocks but in different orientation orders. The mobile robot can use a magnetometer to detect and distinguish these orientation orders and thus tell the blocks apart, which greatly increases the number of possible color block combinations and increases the number of distinct blocks without increasing the code length.
Furthermore, in step (3) the mobile robot first uses its cameras to observe the four nearest surrounding color blocks, then uses the magnetometer to determine their orientation order, thereby identifying the unique block corresponding to its current position; it then builds a landmark map from the blocks and completes positioning.
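The following minimal sketch illustrates one way such a block signature could be formed from four observed color blocks and their magnetometer-derived compass bearings; the ordering convention (increasing bearing, clockwise from north) and all names are illustrative assumptions rather than the patent's prescribed implementation.

```python
# Minimal sketch: form a "block" signature from the four nearest color blocks,
# ordered by their magnetometer-derived compass bearing. The north-referenced,
# increasing-bearing ordering is an assumed convention.

def block_signature(observations: list[tuple[int, float]]) -> tuple[int, ...]:
    """observations: [(color_block_code, bearing_deg), ...] for the 4 nearest blocks.
    Returns the codes ordered by increasing compass bearing, so the same four
    codes in a different spatial arrangement give a different signature."""
    if len(observations) != 4:
        raise ValueError("a block is made of exactly four color blocks")
    ordered = sorted(observations, key=lambda ob: ob[1] % 360.0)
    return tuple(code for code, _ in ordered)

# The same four codes {1, 2, 3, 4} seen in two different arrangements
# (as in Fig. 3(a)/3(b)) yield two different block signatures.
print(block_signature([(1, 10.0), (2, 95.0), (3, 190.0), (4, 280.0)]))   # (1, 2, 3, 4)
print(block_signature([(2, 12.0), (1, 100.0), (4, 185.0), (3, 275.0)]))  # (2, 1, 4, 3)
```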
Based on the above technical solution, the invention has the following beneficial technical effects:
1. In a large-scale scene, the invention can achieve mapping and positioning by observing and identifying the combined color blocks at long range with an ordinary camera.
2. By introducing the concept of a 'block', the invention fuses color block coding information with magnetometer information, which reduces the color block code length and improves on-site deployment efficiency.
Drawings
Fig. 1(a) is a schematic diagram of a color block corresponding to the code 1000111.
Fig. 1(b) is a schematic diagram of a color block corresponding to the code 0110011.
Fig. 1(c) is a schematic diagram of a color block corresponding to the code 0011011.
FIG. 2 is a schematic diagram of the arrangement of color blocks in a scene according to the present invention.
Fig. 3(a) and Fig. 3(b) are schematic diagrams of two blocks that contain the same color blocks but in different orientation orders.
Fig. 4 is a schematic view of a scene in which the present invention is applied to an underground parking lot.
Fig. 5 is a schematic diagram of camera model coordinates.
Detailed Description
To describe the present invention more specifically, its technical solution is described in detail below with reference to the accompanying drawings and specific embodiments.
The mobile robot in this embodiment carries four cameras facing front, back, left and right, as well as wheel speed encoders, an IMU and other sensors.
The color block coding design in this embodiment is as follows:
step 1: the color blocks are composed of two colors of dark and light, the light color represents 1, the dark color represents 0, each block is 10cm long and 10cm wide, each color block is composed of seven color blocks, so that the color block is 70cm long and 10cm wide.
Step 2: for example, the color block shown in Fig. 1(a) represents code 1000111, Fig. 1(b) represents code 0110011, and Fig. 1(c) represents code 0011011. Each color block consists of seven squares, so each code is a 7-bit binary number and there are 128 codes in total; after excluding combinations that are hard to identify, such as all white or all black, about 100 usable combinations remain.
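As a small illustration under an assumption: the code space can be enumerated and the two explicitly mentioned hard-to-identify patterns (all white, all black) filtered out; any further exclusion rule would be a guess, so only those two are dropped here.

```python
from itertools import product

# Illustrative sketch: enumerate the 7-bit color block codes and drop the
# combinations the embodiment names as hard to identify (all white / all black).
# Additional exclusion rules would be assumptions, so only these two are removed.

ALL_CODES = ["".join(bits) for bits in product("01", repeat=7)]
usable = [c for c in ALL_CODES if c not in ("0000000", "1111111")]

print(len(ALL_CODES))  # 128 codes in total
print(len(usable))     # 126 remain after dropping the two uniform patterns
```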
After the color blocks are designed, the specific steps of arranging the color blocks in the scene are as follows:
step 1: as shown in fig. 4, the color blocks are attached to the columns and the white walls of the parking lot, so that the mobile robot can be surrounded by four color blocks at any place in the scene, and the mobile robot can observe at least four color blocks at any time, thereby ensuring that sufficient information is provided to help the mobile robot to position.
Step 2: as shown in Fig. 2, the black triangle and the white triangle represent two different poses of the mobile robot in the scene; the squares represent the locations where color blocks are posted, and the numbers on the squares are the color block code values. At the white triangle the four observed color blocks are 1, 2, 3 and 4, which form one block; at the black triangle the four observed color blocks are 4, 8, 6 and 1, which form another block. When the mobile robot recognizes color block 1, it can also consider the codes of the surrounding color blocks: if those codes are 2, 3 and 4, the robot is clearly at the white triangle; if they are 4, 8 and 6, it is at the black triangle. In this way the embodiment builds the map from identified blocks rather than individual color blocks, which avoids the drawback of repeated color block codes.
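A minimal sketch of this neighbor-based disambiguation follows, assuming the block map is stored as a dictionary keyed by the unordered set of four color block codes; the data structure and location labels are illustrative assumptions. Orientation order (introduced in step 3) would be added to the key when the same four codes appear in more than one block.

```python
# Minimal sketch of resolving an ambiguous color block by the codes around it,
# as in the white-triangle / black-triangle example of Fig. 2.
# The frozenset-keyed map and the location labels are illustrative assumptions.

BLOCK_MAP = {
    frozenset({1, 2, 3, 4}): "white triangle pose",
    frozenset({1, 4, 8, 6}): "black triangle pose",
}

def locate(observed_codes: set[int]) -> str:
    """Look up the block formed by the four currently observed color blocks."""
    return BLOCK_MAP.get(frozenset(observed_codes), "unknown block")

print(locate({1, 2, 3, 4}))  # white triangle pose
print(locate({4, 8, 6, 1}))  # black triangle pose
```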
Step 3: as shown in Figs. 3(a) and 3(b), similarly to step 2, the four color blocks 1, 2, 3 and 4 are observed at both the black-triangle and the white-triangle positions, so the two positions cannot be distinguished from the codes alone. This embodiment therefore introduces a magnetometer. A magnetometer, also called a magnetic sensor, measures the strength and direction of the magnetic field; its principle is similar to that of a compass, so it can measure the heading angle. With the magnetometer it can be determined that the orientations of the four color blocks differ, so the same four color blocks arranged differently represent different blocks. This greatly increases the number of possible block combinations and improves the applicability of the method.
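For illustration, a heading can be estimated from raw magnetometer readings roughly as sketched below, assuming a level sensor with x pointing forward and y to the right, and ignoring magnetic declination and tilt compensation; a real setup would fuse the IMU for tilt correction.

```python
import math

# Illustrative sketch: estimate a compass heading from raw magnetometer readings.
# Assumes a level sensor with x forward and y to the right; declination and tilt
# compensation are ignored (both are simplifying assumptions).

def heading_deg(mag_x: float, mag_y: float) -> float:
    """Heading in degrees, 0 = magnetic north, increasing clockwise."""
    return math.degrees(math.atan2(-mag_y, mag_x)) % 360.0

def bearing_of_color_block(robot_heading: float, azimuth_deg: float) -> float:
    """Absolute compass bearing of a color block seen at a given azimuth
    relative to the robot's forward axis."""
    return (robot_heading + azimuth_deg) % 360.0

print(heading_deg(20.0, 0.0))    # 0.0  -> facing magnetic north
print(heading_deg(0.0, -20.0))   # 90.0 -> facing magnetic east
```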
Step 4: build the camera model. The imaging process of a camera maps information from the three-dimensional world onto the two-dimensional pixel plane of an image. This is usually modeled with the pinhole camera model and mainly involves transformations between four coordinate systems: the world coordinate system, the camera coordinate system, the normalized coordinate system and the pixel coordinate system, as shown in Fig. 5. The specific process is as follows:
(1) Define a world coordinate system $o$-$x_w$-$y_w$-$z_w$ in the three-dimensional world with a fixed origin; the absolute coordinates of each three-dimensional map point $P$ are $P_w = [X_w, Y_w, Z_w]^T$.
(2) Define a camera coordinate system $o$-$x_c$-$y_c$-$z_c$ with the camera position as the origin; the coordinates of each three-dimensional map point $P$ relative to the camera pose are $P_c = [X_c, Y_c, Z_c]^T$.
(3) Define a normalized coordinate system $o$-$x'$-$y'$ on the imaging plane; the map point $P$ in the camera coordinate system is projected onto the normalized plane $z = 1$, giving the normalized camera coordinates of the map point $P$:

$$x' = \frac{X_c}{Z_c}, \qquad y' = \frac{Y_c}{Z_c}$$
(4) Define a pixel coordinate system $o$-$u$-$v$ on the pixel plane, where the coordinates of each pixel point $p$ are $[u, v]^T$. According to the pinhole imaging principle,

$$u = f_x x' + c_x, \qquad v = f_y y' + c_y$$

that is:

$$u = f_x \frac{X_c}{Z_c} + c_x, \qquad v = f_y \frac{Y_c}{Z_c} + c_y$$

Written in matrix form:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$
The matrix formed by the intermediate quantities is called the intrinsics matrix $K$ of the camera. The camera intrinsics are generally considered fixed and unchanging during use, and need to be calibrated in advance:
$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
The pose of the camera in the world coordinate system is also called the extrinsics of the camera, consisting mainly of a rotation matrix $R$ and a translation vector $t$. The extrinsics change as the camera moves and correspond to the coordinates of the mobile robot in the world coordinate system:

$$P_c = R P_w + t, \qquad Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K (R P_w + t)$$
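A runnable sketch of this projection chain is given below; the numeric intrinsic and extrinsic values are assumptions chosen only so the example produces concrete numbers.

```python
import numpy as np

# Illustrative sketch of the projection chain described above:
# world point -> camera frame (extrinsics R, t) -> normalized plane -> pixel (intrinsics K).
# The numeric values of K, R and t are assumptions chosen for the example.

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])   # intrinsics (fx, fy, cx, cy)
R = np.eye(3)                            # extrinsics: rotation
t = np.array([0.0, 0.0, 0.0])            # extrinsics: translation

def project(P_w: np.ndarray) -> np.ndarray:
    """Project a 3D world point P_w = [Xw, Yw, Zw] to pixel coordinates [u, v]."""
    P_c = R @ P_w + t                          # world -> camera frame
    x, y = P_c[0] / P_c[2], P_c[1] / P_c[2]    # normalized plane z = 1
    u, v, _ = K @ np.array([x, y, 1.0])        # apply intrinsics
    return np.array([u, v])

print(project(np.array([1.0, 0.5, 5.0])))  # -> [420. 290.]
```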
Step 5: before positioning, the mobile robot first builds a map. The robot observes the scene with its cameras to obtain the positions of the color blocks in the image, and then computes each color block's three-dimensional position in the world from its two-dimensional image position via the camera projection model and the projection matrix.
A monocular camera cannot obtain the depth of a pixel directly and has to estimate the depth of a map point from parallax, a process called triangulation. Suppose that in two frames $I_t$ and $I_{t+1}$ there is a pair of matched feature points $p_1$ and $p_2$ with normalized camera coordinates $x_1$ and $x_2$; according to epipolar geometry they satisfy:
$$s_1 x_1 = s_2 R x_2 + t$$
Multiplying both sides on the left by $x_1^{\wedge}$ (the skew-symmetric matrix of $x_1$) gives:

$$s_1 x_1^{\wedge} x_1 = 0 = s_2\, x_1^{\wedge} R x_2 + x_1^{\wedge} t$$
From this $s_2$, and in turn $s_1$, can be solved, which gives the depths of the map points, although triangulation is not robust. First, the triangle in epipolar geometry only exists when there is translation; under pure rotation triangulation is impossible, because with zero translation the epipolar constraint is trivially satisfied and carries no depth information. Therefore, during mapping and positioning the mobile robot should avoid purely rotational motion as far as possible.
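The following small sketch solves the triangulation relation above for the two depths in a least-squares sense; the relative pose and the matched normalized coordinates are assumed values constructed so that the true depths are known.

```python
import numpy as np

# Illustrative sketch of the triangulation step described above:
# given matched normalized coordinates x1, x2 and relative pose (R, t),
# solve s1*x1 = s2*R*x2 + t for the depths s1, s2 (least squares).
# The pose and the matched points are assumed values for the example.

def triangulate(x1, x2, R, t):
    """x1, x2: normalized homogeneous coords [x, y, 1]; returns (s1, s2)."""
    A = np.column_stack((x1, -(R @ x2)))        # 3x2 system: s1*x1 - s2*R*x2 = t
    s, *_ = np.linalg.lstsq(A, t, rcond=None)
    return s[0], s[1]

# Consistent example: a point P = [1, 0, 4] seen from frame 1,
# frame 2 shifted by 0.5 m along x (no rotation), so both depths are 4.
R = np.eye(3)
t = np.array([0.5, 0.0, 0.0])        # frame-2 origin expressed in frame 1
x1 = np.array([0.25, 0.0, 1.0])      # P observed in frame 1
x2 = np.array([0.125, 0.0, 1.0])     # P observed in frame 2

s1, s2 = triangulate(x1, x2, R, t)
print(round(s1, 3), round(s2, 3))    # 4.0 4.0 -> recovered depths
```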
The three-dimensional coordinates of a single color block are obtained by triangulation. When the map is built, an identified color block is not inserted into the map directly; only after the surrounding color blocks have also been identified and their positions computed is the map updated with the information of the four color blocks together, i.e. the information of one block. At the same time, absolute orientation information is attached to the four color blocks of each block using the magnetometer carried by the mobile robot. After driving one full loop through the scene, the robot has observed the scene sufficiently, obtained the position information of all color blocks, and thereby built the map of the scene.
Step 6: after the scene map has been built, when the mobile robot drives through the scene it observes the color blocks with its cameras and compares the position of each color block in the two-dimensional image with the previously built map. When a possibly repeated color block is recognized, the robot also identifies the other color blocks around it and localizes using the information of the four surrounding color blocks together rather than a single color block alone, while the magnetometer is used to determine orientation. This prevents mismatching of identified color blocks and completes the positioning.
The foregoing description of the embodiments is provided to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments above; improvements and modifications made by those skilled in the art based on this disclosure fall within the protection scope of the present invention.

Claims (8)

1. A special color block coding method for positioning and navigation in large-scale scenes comprises the following steps:
(1) designing a color block with coding information;
(2) designing a combination mode of color blocks, and pasting the color blocks into a scene according to the designed combination mode;
(3) the mobile robot observes and identifies the color blocks in the scene with its camera, thereby completing mapping and positioning.
2. The special color block coding method according to claim 1, wherein the color block designed in step (1) is composed of several squares placed side by side in two colors, a square of one color representing 1 and a square of the other color representing 0, so that the color block carries multi-bit binary coding information.
3. The special color block coding method according to claim 2, wherein the color block is formed by 7 red and blue squares arranged in a row, each square being 10 cm long and 10 cm wide; a red square represents 1 and a blue square represents 0, so the color block is 70 cm long and 10 cm wide and contains 7 bits of binary coding information.
4. The special color block coding method according to claim 1, wherein step (2) is implemented as follows: color blocks are first attached to each marker in the scene such that four surrounding color blocks can be observed from any position the mobile robot may occupy; the mobile robot observes the four color blocks nearest to it at the same time, these four color blocks form a block, and the mobile robot then builds the map by identifying blocks rather than individual color blocks.
5. The special color block coding method according to claim 4, wherein the color blocks are attached to the markers in the scene in an array arrangement, and four cameras are mounted on the mobile robot facing front, back, left and right, so that the mobile robot can observe its surroundings in all four directions simultaneously.
6. The special color block coding method according to claim 4, wherein blocks observed by the mobile robot at different positions in the scene may contain the same four color blocks in different orientation orders; the mobile robot uses a magnetometer to detect and distinguish the orientation orders of the color blocks and thus tell the blocks apart, which greatly increases the number of possible color block combinations and increases the number of distinct blocks without increasing the code length.
7. The special color block coding method according to claim 6, wherein in step (3) the mobile robot first uses the camera to observe the four nearest surrounding color blocks, then uses the magnetometer to determine their orientation order, thereby identifying the unique block corresponding to its current position, and then builds a landmark map from the blocks to complete positioning.
8. The special color block coding method according to claim 1, wherein the coding method designs color blocks carrying coding information, attaches them to markers in the scene according to a designed combination pattern, and has the mobile robot identify the color blocks in the scene with a camera to build a map and localize itself; it combines the advantages of the feature point method and of traditional landmarks, effectively solves the problems the mobile robot faces in positioning and navigation, and enables mapping and localization in large-scale scenes by observing and identifying the combined color blocks at long range with an ordinary camera; in addition, by introducing the concept of a 'block', color block coding information is fused with magnetometer information, which shortens the color block code length and improves on-site deployment efficiency.
CN202011593672.XA 2020-12-29 2020-12-29 Special color block coding method for positioning navigation in large-scale scene Active CN112784942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011593672.XA CN112784942B (en) 2020-12-29 2020-12-29 Special color block coding method for positioning navigation in large-scale scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011593672.XA CN112784942B (en) 2020-12-29 2020-12-29 Special color block coding method for positioning navigation in large-scale scene

Publications (2)

Publication Number Publication Date
CN112784942A true CN112784942A (en) 2021-05-11
CN112784942B CN112784942B (en) 2022-08-23

Family

ID=75753208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011593672.XA Active CN112784942B (en) 2020-12-29 2020-12-29 Special color block coding method for positioning navigation in large-scale scene

Country Status (1)

Country Link
CN (1) CN112784942B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114295202A (en) * 2021-12-29 2022-04-08 湖南汉状元教育科技有限公司 Infrared information processing method and device, electronic equipment and readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016115714A1 (en) * 2015-01-22 2016-07-28 江玉结 Color block tag-based localization and mapping method and device thereof
CN106127822A (en) * 2016-03-16 2016-11-16 上海海笑网络技术有限公司 At physical isolation terminal room based on encoding of graphs one-way data transmission method and system
CN106845491A (en) * 2017-01-18 2017-06-13 浙江大学 Automatic correction method based on unmanned plane under a kind of parking lot scene
CN107901907A (en) * 2017-09-30 2018-04-13 惠州市德赛西威汽车电子股份有限公司 A kind of method for detecting lane lines based on color lump statistics
CN108108795A (en) * 2017-12-18 2018-06-01 无锡费舍太格科技有限公司 A kind of binary system color code
CN108388244A (en) * 2018-01-16 2018-08-10 上海交通大学 Mobile-robot system, parking scheme based on artificial landmark and storage medium
US20190072394A1 (en) * 2016-06-22 2019-03-07 Ping An Technology (Shenzhen) Co., Ltd. Indoor navigation method of handheld terminal, handheld terminal, and storage medium
CN110659710A (en) * 2019-10-09 2020-01-07 陈浩能 Encoding indication label, accurate identification method and intelligent processing system
CN111767854A (en) * 2020-06-29 2020-10-13 浙江大学 SLAM loop detection method combined with scene text semantic information
CN112033408A (en) * 2020-08-27 2020-12-04 河海大学 Paper-pasted object space positioning system and positioning method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016115714A1 (en) * 2015-01-22 2016-07-28 江玉结 Color block tag-based localization and mapping method and device thereof
CN106127822A (en) * 2016-03-16 2016-11-16 上海海笑网络技术有限公司 At physical isolation terminal room based on encoding of graphs one-way data transmission method and system
US20190072394A1 (en) * 2016-06-22 2019-03-07 Ping An Technology (Shenzhen) Co., Ltd. Indoor navigation method of handheld terminal, handheld terminal, and storage medium
CN106845491A (en) * 2017-01-18 2017-06-13 浙江大学 Automatic correction method based on unmanned plane under a kind of parking lot scene
CN107901907A (en) * 2017-09-30 2018-04-13 惠州市德赛西威汽车电子股份有限公司 A kind of method for detecting lane lines based on color lump statistics
CN108108795A (en) * 2017-12-18 2018-06-01 无锡费舍太格科技有限公司 A kind of binary system color code
CN108388244A (en) * 2018-01-16 2018-08-10 上海交通大学 Mobile-robot system, parking scheme based on artificial landmark and storage medium
CN110659710A (en) * 2019-10-09 2020-01-07 陈浩能 Encoding indication label, accurate identification method and intelligent processing system
CN111767854A (en) * 2020-06-29 2020-10-13 浙江大学 SLAM loop detection method combined with scene text semantic information
CN112033408A (en) * 2020-08-27 2020-12-04 河海大学 Paper-pasted object space positioning system and positioning method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114295202A (en) * 2021-12-29 2022-04-08 湖南汉状元教育科技有限公司 Infrared information processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN112784942B (en) 2022-08-23

Similar Documents

Publication Publication Date Title
Heng et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system
CN108571971B (en) AGV visual positioning system and method
CN109509230B (en) SLAM method applied to multi-lens combined panoramic camera
Guindel et al. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups
Ji et al. Panoramic SLAM from a multiple fisheye camera rig
CN113052903B (en) Vision and radar fusion positioning method for mobile robot
US9989969B2 (en) Visual localization within LIDAR maps
Wolcott et al. Visual localization within lidar maps for automated urban driving
CN102646275B (en) The method of virtual three-dimensional superposition is realized by tracking and location algorithm
Won et al. OmniSLAM: Omnidirectional localization and dense mapping for wide-baseline multi-camera systems
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
Pandey et al. Visually bootstrapped generalized ICP
CN104794748A (en) Three-dimensional space map construction method based on Kinect vision technology
CN115272494B (en) Calibration method and device for camera and inertial measurement unit and computer equipment
CN111508026A (en) Vision and IMU integrated indoor inspection robot positioning and map construction method
CN112784942B (en) Special color block coding method for positioning navigation in large-scale scene
CN105737849A (en) Calibration method of relative position between laser scanner and camera on tunnel car
Tamas et al. Relative pose estimation and fusion of omnidirectional and lidar cameras
CN103759724A (en) Indoor navigation method based on decorative lighting characteristic and system
Zeng et al. Monocular visual odometry using template matching and IMU
CN103260008A (en) Projection converting method from image position to actual position
Wang et al. Real-time omnidirectional visual SLAM with semi-dense mapping
Dai et al. Roadside Edge Sensed and Fused Three-dimensional Localization using Camera and LiDAR
CN108986025B (en) High-precision different-time image splicing and correcting method based on incomplete attitude and orbit information
Li et al. Automatic Multi-Camera Calibration and Refinement Method in Road Scene for Self-driving Car

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant