CN115267812A - Positioning method, device, medium and robot based on highlight area - Google Patents
- Publication number
- CN115267812A (application number CN202210901692.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- highlight
- pose
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
Abstract
The invention provides a positioning method based on highlight areas: S1, load a point cloud map and a highlight-area information table; S2, acquire a coarse pose provided by the robot's wheel speed meter; S3, acquire the point cloud data scanned by the robot at the current position and convert it into the point-cloud-map coordinate system according to the coarse pose, obtaining converted point cloud data; S4, traverse all points in the converted point cloud data and, for each point p_i, search for the corresponding closest map point q_i; for each point p_i, judge whether it falls into a highlight area: if so, its weight is w_i = a, otherwise w_i = b, where a > b; obtain the N point cloud pairs (p_i, q_i) that satisfy these conditions; S5, optimize the formula loss = Σ_{i=1}^{N} w_i · ‖R·p_i + T − q_i‖² with the Gauss-Newton iteration method to obtain the optimal pose.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a positioning method, a positioning device, a positioning medium and a robot based on highlight areas.
Background
An autonomous mobile robot must be able to find and follow paths on its own, and this capability presupposes that the robot knows its own position.
A lidar works by using laser light as its signal source: pulsed laser light emitted by the laser strikes trees, roads, bridges, buildings and so on, and the reflected light waves return to the lidar's receiver. The distance from the lidar to the target point is then computed according to the laser ranging principle; by continuously scanning the target object, the pulsed laser obtains data for all target points on the object, which is finally output in laser point cloud format.
In laser positioning based on a point cloud map, a patrol robot first maps the patrol scene with laser SLAM technology in advance. With the complete point cloud map of the scene available, the patrol robot registers the current lidar scan frame against the point cloud map in real time during patrol, obtaining the robot's current pose (position and orientation) in the point cloud map.
To acquire the robot's pose reliably, the point cloud map must completely and correctly reflect the actual situation of the patrol scene, which is difficult in practice. For example, many dynamic objects (cars, people) exist in the scene during certain periods, and the scene layout itself changes, such as the position of a garbage can. These problems reduce the reliability of robot positioning, because registering the current lidar point cloud frame against the point cloud map presupposes that the scanned point cloud is consistent with the map point cloud. Therefore, when the patrol robot patrols in a scene with dynamic interference, directly registering the laser scan frame against the point cloud map may yield unreliable positioning.
The prior art typically fuses other sensor devices, such as GPS, for positioning.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
In view of the above technical problems in the related art, the present invention provides a positioning method based on a highlight region, including the following steps:
s1, loading a point cloud map and a highlight area information table;
s2, acquiring a coarse pose provided by a robot wheel speed meter;
s3, acquiring laser radar point cloud data scanned by the robot at the current position; converting the laser radar point cloud data into a point cloud map coordinate system according to the rough pose to obtain converted point cloud data;
s4, traversing all point clouds in the converted point cloud data, and obtaining each point in the converted point cloud dataCorrespondingly searching a closest pointFor each pointJudging whether the image falls into a highlight area, if so, weighting value w = a, otherwise, weighting value w = b; wherein a > b; obtaining N point cloud pairs which are found to all meet the conditions
s5, optimizing the following formula by the Gauss-Newton iteration method:

loss = Σ_{i=1}^{N} w_i · ‖R·p_i + T − q_i‖²

and obtaining the optimal pose, wherein R is the pose angle and T is the displacement.
Specifically, the highlight-area information table records each static object and the area where that static object is located.
Specifically, the coarse pose satisfies pose = pose_0 + delt_pose, where delt_pose is the pose increment output by the wheel speed meter and pose_0 is the initial pose.
Specifically, in step S4, a point p_i = (x_i, y_i, z_i) is judged to fall into a highlight area when it satisfies the following formula:

|x_i − x_c| ≤ length/2, |y_i − y_c| ≤ length/2, |z_i − z_c| ≤ length/2

where (x_c, y_c, z_c) is the center of the highlight region and length is the side length of the highlight region in the highlight-area information table.
In a second aspect, another embodiment of the present invention discloses a positioning device based on highlight areas, including the following units:
the map and highlight area loading unit is used for loading the point cloud map and the highlight area information table;
a coarse pose acquisition unit for acquiring a coarse pose provided by a robot wheel speed meter;
the system comprises a current position laser point cloud obtaining unit, a laser radar point cloud acquiring unit and a control unit, wherein the current position laser point cloud obtaining unit is used for obtaining laser radar point cloud data scanned by a robot at a current position; converting the laser radar point cloud data into a point cloud map coordinate system according to the rough pose to obtain converted point cloud data;
a weight distribution unit for traversing all point clouds in the converted point cloud data and providing each point of the converted point cloud dataCorrespondingly finding a closest pointFor each pointJudging whether the image falls into a highlight area, if so, judging that the image is in the highlight area, if not, judging that the image is in the highlight area, if so, judging that the image is in the highlight area, if so, the weight value is w = a, otherwise, judging that the image is in the highlight area, if not, judging that the image is in the highlight area, if so, judging that the image is in the highlight area, if not, the weight value is w = a, otherwise, judging that the image is in the highlight area, if not, the weight value is in the highlight area, and if the image is in the highlight area, the weight value is in the highlight area, and if the weight value is in the highlight area, the weight value is not in the highlight area, and the weight value is notw = b; wherein a > b; obtaining N point cloud pairs which are found to all meet the conditions
the optimal pose acquisition unit is used for optimizing the following formula according to the Gauss-Newton iteration method:

loss = Σ_{i=1}^{N} w_i · ‖R·p_i + T − q_i‖²

and obtaining the optimal pose, wherein R is the pose angle and T is the displacement.
Specifically, the highlight-area information table records each static object and the area where that static object is located.
Specifically, the coarse pose satisfies pose = pose_0 + delt_pose, where delt_pose is the pose increment output by the wheel speed meter and pose_0 is the initial pose.
Specifically, the weight distribution unit judges that a point p_i = (x_i, y_i, z_i) falls into a highlight area when it satisfies the following formula:

|x_i − x_c| ≤ length/2, |y_i − y_c| ≤ length/2, |z_i − z_c| ≤ length/2

where (x_c, y_c, z_c) is the center of the highlight region and length is the side length of the highlight region in the highlight-area information table.
In a third aspect, another embodiment of the present invention discloses a non-volatile memory having instructions stored thereon; when a processor executes the instructions, the above highlight-area-based positioning method is implemented.
In a fourth aspect, another embodiment of the present invention discloses a robot, which includes a central processing unit, a memory and a lidar, wherein the memory stores instructions and the central processing unit is configured to implement the above highlight-area-based positioning method when executing the instructions.
The method first selects highlight areas by framing in the point cloud map in advance, where a highlight area must satisfy the 'invariance' principle, that is, contain static objects. Then, weight distribution is performed while the robot registers the current laser point cloud frame against the point cloud map: point clouds registered inside a highlight area receive a high registration weight, while the weights of the other parts are reduced, so the influence of scanned dynamic-interference point clouds on the registration can be eliminated.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a positioning method based on highlight areas according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a positioning apparatus based on a highlight region according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a positioning apparatus based on a highlight area according to the third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
Example one
Referring to fig. 1, the present embodiment discloses a positioning method based on a highlight area, which includes the following steps:
s1, loading a point cloud map and a highlight area information table;
the robot of the present embodiment has a laser radar that maps an environment into a point cloud map by laser SLAM in advance.
For example, the robot needs to walk in a park or an indoor area, and the robot can use the laser radar carried by the robot to build a point cloud map for the park or the indoor based on the SLAM.
Specifically, this embodiment describes its scheme with a campus example. Assume the campus contains the following targets: street lamp A, street lamp B, car A, car B, electric vehicle A, electric vehicle B, sentry box A, sentry box B, building A and building B. Car A, car B, electric vehicle A and electric vehicle B are only parked temporarily: they are at a certain position at the present moment, but will not be at that position at some future moment. That is, objects that move easily, such as cars, electric vehicles and pedestrians, are referred to in this embodiment as dynamic objects. Sentry box A, sentry box B, building A and building B generally do not change in the short or long term, especially the short term; this embodiment calls them static objects.
Specifically, when the robot builds the map with laser SLAM, a dynamic object, such as car A or pedestrian A, may exist at a certain position. When the robot later uses the mapped point cloud to navigate, the dynamic object at that position has changed: it no longer exists, or pedestrian A has replaced car A. For the robot, when registering the lidar scan frame against the point cloud map to acquire its current pose in the map, the registration may then fail, and the pose information cannot be acquired.
The present embodiment uses laser SLAM to obtain a point cloud map of the scene, and highlight areas on the point cloud map are manually selected by framing: each highlight area is framed with a cube, whose center (x_c, y_c, z_c) and side length are used to express the highlight-area information, and the information of all highlight areas is collected to form the information table. The target object in each highlight region is a static object.
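As a concrete illustration, the cube-based information table and the membership test described above could be sketched as follows; the field names, example regions and the `in_highlight` helper are hypothetical, not taken from the patent.

```python
# Sketch of a highlight-area information table: each region is an
# axis-aligned cube given by its center (x_c, y_c, z_c) and side length.
# Field names and example values are illustrative assumptions.

def in_highlight(point, region):
    """Return True if a point (x, y, z) lies inside the cubic region,
    i.e. |x - x_c| <= length/2 holds on every axis."""
    half = region["length"] / 2.0
    return all(abs(p - c) <= half
               for p, c in zip(point, region["center"]))

highlight_table = [
    {"center": (10.0, 5.0, 2.0), "length": 4.0},   # e.g. a street lamp
    {"center": (-3.0, 8.0, 6.0), "length": 12.0},  # e.g. building A
]
```

A point is then weighted highly during registration exactly when `in_highlight` is true for some entry of the table.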
The highlight area needs to satisfy the 'invariance' principle: elements that do not change easily, such as tall buildings, the company water-spraying pool, company doorposts, large campus signboards and street lamps.
As another implementation, this embodiment may also use a deep convolutional network to identify the objects in the point cloud map and select the region where a static object is located with a cube. Specifically, when a cube is used for frame selection, no dynamic object may exist inside the cube. For example, building A is identified in the point cloud map with a convolutional network, but a parking space lies near building A; when framing with a cube, it is not suitable to include the area containing the parking space. Although no vehicle is parked in the space at the present moment, a vehicle will foreseeably be parked there in the future.
s2, acquiring the coarse pose pose provided by the robot's wheel speed meter;
the robot of the embodiment uses the laser radar light source as the origin of a coordinate system, X points to the advancing direction of the robot, Y points to the left direction of the robot, and Z points to the upper direction of the robot, so that a three-dimensional coordinate system is established.
When the robot navigates, it has an initial pose pose_0, which may be manually entered by a user. The patrol robot is then controlled to walk in the scene (such as a campus).
The robot of the present embodiment is equipped with a wheel speed meter for measuring the rotational speed of the robot's wheels. This embodiment first obtains the pose increment delt_pose output by the wheel speed meter and obtains the robot's coarse pose from the initial pose pose_0:
pose=pose_0+delt_pose。
s3, acquiring the lidar point cloud data scan_lidar scanned by the robot at the current position; converting the lidar point cloud data scan_lidar into the point-cloud-map coordinate system according to the coarse pose to obtain the converted point cloud data scan_map;
scan_map=pose*scan_lidar
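The transform scan_map = pose * scan_lidar can be sketched in 2D as applying the rotation R(yaw) and the translation (x, y) of the coarse pose to every scanned point; the 2D restriction and the names below are assumptions for illustration.

```python
import numpy as np

# Sketch of scan_map = pose * scan_lidar: transform each scanned point
# from the lidar frame into the map frame using the coarse pose.
# 2D case: R is built from the yaw angle, T from the x/y part.

def to_map_frame(pose, scan_lidar):
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    T = np.array([x, y])
    return scan_lidar @ R.T + T   # (N, 2) array of map-frame points

scan_map = to_map_frame((1.0, 2.0, np.pi / 2), np.array([[1.0, 0.0]]))
```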
s4, traversing all point clouds in the converted point cloud data scan _ map, and determining each point in the scan _ mapCorrespondingly searching a closest pointFor each pointJudging whether the image falls into a highlight area, if so, weighting value w = a, otherwise, weighting value w = b; wherein a is>b; obtaining N point cloud pairs which are found to all meet the conditions
When traversing all points in the converted point cloud data scan_map and searching for the closest point q_i corresponding to each point p_i in scan_map, the distance between the two must not exceed 2 meters, and q_i is a point in the point cloud map. The distance is represented by the two-norm of the difference vector of the two points: d_i = ‖p_i − q_i‖₂.
For each point p_i, judge whether it falls into a highlight area: if so, the weight is w_i = a, otherwise w_i = b. A point p_i = (x_i, y_i, z_i) is judged to fall into a highlight region when it satisfies the following formula: |x_i − x_c| ≤ length/2, |y_i − y_c| ≤ length/2, |z_i − z_c| ≤ length/2.
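The correspondence search and weight assignment of step S4 could be sketched as below (2D for brevity). Brute-force nearest-neighbor search is used for clarity where a KD-tree would be used in practice; the weight values a, b and all names are assumptions, not the patent's.

```python
import math

# Step-S4 sketch: for each transformed point, find the nearest map point
# (kept only if within 2 m) and assign weight a if the point lies in a
# highlight region, b otherwise. a=1.0, b=0.1 are illustrative values.

def assign_weights(scan_map, map_points, regions, a=1.0, b=0.1, max_d=2.0):
    def in_region(p, r):
        half = r["length"] / 2.0
        return all(abs(pi - ci) <= half for pi, ci in zip(p, r["center"]))

    pairs = []
    for p in scan_map:
        q = min(map_points, key=lambda m: math.dist(p, m))
        if math.dist(p, q) > max_d:
            continue                    # no valid correspondence
        w = a if any(in_region(p, r) for r in regions) else b
        pairs.append((p, q, w))
    return pairs

pairs = assign_weights(
    scan_map=[(0.0, 0.0)],
    map_points=[(0.5, 0.0), (5.0, 5.0)],
    regions=[{"center": (0.0, 0.0), "length": 2.0}],
)
```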
s5, optimizing the following formula by the Gauss-Newton iteration method:

loss = Σ_{i=1}^{N} w_i · ‖R·p_i + T − q_i‖²

obtaining the optimal pose (R, T), wherein R is the pose angle and T is the displacement. The above formula is a typical least-squares optimization problem: the quantities to be solved are (R, T), the iteration initial value of (R, T) is taken from the coarse pose as (R_P, T_P), an iteration end condition is set (no more than 30 iterations, or the change in loss between iterations, |loss_{i+1} − loss_i|, smaller than 0.00001), and the optimization is performed using the Gauss-Newton iteration method.
in the embodiment, a highlight area is selected in a point cloud map frame in advance, wherein the highlight area needs to meet the 'invariable' principle, and is a static object; and secondly, carrying out weight distribution in the registration process of the robot by using the current laser point cloud frame and the point cloud map, wherein if the registered point cloud is in a highlight area, the registration basis weight is high, and the weights of other parts are reduced, so that the influence of some scanned dynamic interference point clouds of points on registration can be eliminated.
Example two
Referring to fig. 2, the present embodiment discloses a positioning device based on highlight areas, which includes the following units:
the map and highlight area loading unit is used for loading the point cloud map and the highlight area information table;
the robot of the embodiment has a laser radar which figures the environment into a point cloud map in advance by means of a laser SLAM.
For example, the robot needs to walk in a park or an indoor area, and the robot can use the laser radar carried by the robot to build a point cloud map for the park or the indoor based on the SLAM.
Specifically, this embodiment describes its scheme with a campus example. Assume the campus contains the following targets: street lamp A, street lamp B, car A, car B, electric vehicle A, electric vehicle B, sentry box A, sentry box B, building A and building B. Car A, car B, electric vehicle A and electric vehicle B are only parked temporarily: they are at a certain position at the present moment, but will not be at that position at some future moment. That is, objects that move easily, such as cars, electric vehicles and pedestrians, are referred to in this embodiment as dynamic objects. Sentry box A, sentry box B, building A and building B generally do not change in the short or long term, especially the short term; this embodiment calls them static objects.
In particular, when the robot builds the map with laser SLAM, a dynamic object, such as car A or pedestrian A, may exist at a certain position. When the robot later uses the mapped point cloud to navigate, the dynamic object at that position has changed: it no longer exists, or pedestrian A has replaced car A. For the robot, when registering the lidar scan frame against the point cloud map to acquire its current pose in the map, the registration may then fail, and the pose information cannot be acquired.
In this embodiment, a point cloud map of the scene is acquired using laser SLAM, and highlight areas on the point cloud map are manually selected by framing: each highlight area is framed with a cube, whose center (x_c, y_c, z_c) and side length are used to express the highlight-area information, and the information of all highlight areas is collected to form the information table. The target object in each highlight region is a static object.
The highlight area needs to satisfy the 'invariance' principle: elements that do not change easily, such as tall buildings, the company water-spraying pool, company doorposts, large campus signboards and street lamps.
As another implementation, this embodiment may also use a deep convolutional network to identify the objects in the point cloud map and select the region where a static object is located with a cube. Specifically, when a cube is used for frame selection, no dynamic object may exist inside the cube. For example, building A is identified in the point cloud map with a convolutional network, but a parking space lies near building A; when framing with a cube, it is not suitable to include the area containing the parking space. Although no vehicle is parked in the space at the present moment, a vehicle will foreseeably be parked there in the future.
the coarse pose acquisition unit is used for acquiring the coarse pose pose provided by the robot's wheel speed meter;
the robot of the embodiment uses the laser radar light source as the origin of a coordinate system, X points to the advancing direction of the robot, Y points to the left direction of the robot, and Z points to the upper direction of the robot, so that a three-dimensional coordinate system is established.
When the robot navigates, it has an initial pose pose_0, which may be manually entered by a user. The patrol robot is then controlled to walk in the scene (such as a campus).
The robot of the present embodiment is provided with a wheel speed meter for measuring the rotational speed of the robot's wheels. This embodiment first obtains the pose increment delt_pose output by the wheel speed meter and obtains the robot's coarse pose from the initial pose pose_0:
pose=pose_0+delt_pose。
the system comprises a current position laser point cloud obtaining unit, a current position laser point cloud obtaining unit and a control unit, wherein the current position laser point cloud obtaining unit is used for obtaining laser radar point cloud data scan _ lidar scanned by a robot at a current position; converting the laser radar point cloud data scan _ lidar into a point cloud map coordinate system according to the coarse pose to obtain converted point cloud data scan _ map;
scan_map=pose*scan_lidar
a weight distribution unit for traversing all the point clouds in the converted point cloud data scan _ map and for each point in the scan _ mapCorrespondingly searching a closest pointFor each pointJudging whether the image falls into a highlight area, if so, weighting value w = a, otherwise, weighting value w = b; wherein a is>b; obtaining N point cloud pairs which are found to all meet the conditions
When traversing all points in the converted point cloud data scan_map and searching for the closest point q_i corresponding to each point p_i in scan_map, the distance between the two must not exceed 2 meters, and q_i is a point in the point cloud map. The distance is represented by the two-norm of the difference vector of the two points: d_i = ‖p_i − q_i‖₂.
For each point p_i, judge whether it falls into a highlight area: if so, the weight is w_i = a, otherwise w_i = b. A point p_i = (x_i, y_i, z_i) is judged to fall into a highlight region when it satisfies the following formula: |x_i − x_c| ≤ length/2, |y_i − y_c| ≤ length/2, |z_i − z_c| ≤ length/2.
the optimal pose acquisition unit is used for optimizing the following formula according to the Gauss-Newton iteration method:

loss = Σ_{i=1}^{N} w_i · ‖R·p_i + T − q_i‖²

obtaining the optimal pose (R, T), wherein R is the pose angle and T is the displacement. The above formula is a typical least-squares optimization problem: the quantities to be solved are (R, T), the iteration initial value of (R, T) is taken from the coarse pose as (R_P, T_P), an iteration end condition is set (no more than 30 iterations, or the change in loss between iterations, |loss_{i+1} − loss_i|, smaller than 0.00001), and the optimization is performed using the Gauss-Newton iteration method.
in the embodiment, a highlight area is selected in a point cloud map frame in advance, wherein the highlight area needs to meet the 'invariable' principle, and is a static object; and secondly, performing weight distribution in the process of registering the current laser point cloud frame and the point cloud map by the robot, wherein if the registered point cloud is in a highlight area, the registration basis weight is high, and the weights of other parts are reduced, so that the influence of some scanned dynamic interference point clouds on registration of points can be eliminated.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of the positioning apparatus based on highlight areas according to this embodiment. The highlight-area-based positioning apparatus 20 of this embodiment comprises a processor 21, a memory 22 and a computer program stored in the memory 22 and executable on the processor 21. The processor 21 realizes the steps in the above method embodiments when executing the computer program; alternatively, the processor 21 implements the functions of the modules/units in the above device embodiments when executing the computer program.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the highlight-based area positioning device 20. For example, the computer program may be divided into the modules in the second embodiment, and for the specific functions of the modules, reference is made to the working process of the apparatus in the foregoing embodiment, which is not described herein again.
The highlight based location apparatus 20 may include, but is not limited to, a processor 21, a memory 22. Those skilled in the art will appreciate that the schematic diagram is merely an example of the highlight based pointing device 20 and does not constitute a limitation of the highlight based pointing device 20 and may include more or less components than shown, or combine certain components, or different components, for example, the highlight based pointing device 20 may also include input output devices, network access devices, buses, etc.
The processor 21 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 21 is the control center of the highlight-area-based positioning apparatus 20 and connects the various parts of the entire apparatus using various interfaces and lines.
The memory 22 can be used to store the computer programs and/or modules, and the processor 21 implements the various functions of the highlight-area-based positioning apparatus 20 by running or executing the computer programs and/or modules stored in the memory 22 and calling the data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory 22 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the modules/units integrated in the highlight-area-based positioning apparatus 20 are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by the processor 21, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A positioning method based on highlight areas comprises the following steps:
s1, loading a point cloud map and a highlight area information table;
s2, acquiring a coarse pose provided by a robot wheel speed meter;
s3, acquiring laser radar point cloud data scanned by the robot at the current position; converting the laser radar point cloud data into a point cloud map coordinate system according to the coarse pose to obtain converted point cloud data;
s4, traversing all point clouds in the converted point cloud data, and obtaining each point in the converted point cloud dataCorrespondingly searching a closest pointFor each pointJudging whether the image falls into a highlight area, if so, weighting value w = a, otherwise, weighting value w = b; wherein a is>b; obtaining N point cloud pairs which are found to all meet the conditions
S5, optimizing the following formula by using a Gaussian-Newton iteration method:
and obtaining and finding the optimal pose, wherein R is a pose angle, and T is displacement.
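Steps S4 and S5 can be sketched as a minimal 2D weighted point-cloud alignment (an illustrative reconstruction, not code from the patent: the axis-aligned boxes standing in for highlight areas, the function names, and the weight values a and b are all assumptions):

```python
import numpy as np

def assign_weights(points, highlight_boxes, a=1.0, b=0.2):
    """Step S4: weight w_i = a for points inside a highlight (static-object)
    area, w_i = b otherwise, with a > b. Areas are modelled here as
    axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    w = np.full(len(points), b)
    for xmin, ymin, xmax, ymax in highlight_boxes:
        inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
                  (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
        w[inside] = a
    return w

def gauss_newton_pose(p, q, w, theta=0.0, t=(0.0, 0.0), iters=20):
    """Step S5: minimise sum_i w_i * ||R(theta) p_i + t - q_i||^2 over the
    pose angle theta and displacement t by Gauss-Newton iteration."""
    t = np.asarray(t, dtype=float).copy()
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])      # dR/dtheta
        r = p @ R.T + t - q                     # residuals, shape (N, 2)
        H = np.zeros((3, 3))
        g = np.zeros(3)
        for i in range(len(p)):
            # 2x3 Jacobian of r_i with respect to (theta, tx, ty)
            J = np.hstack([(dR @ p[i])[:, None], np.eye(2)])
            H += w[i] * J.T @ J
            g += w[i] * J.T @ r[i]
        delta = np.linalg.solve(H, -g)          # normal equations
        theta += delta[0]
        t += delta[1:]
    return theta, t
```

With exact correspondences this recovers the true pose; in the claimed method the pairs come from closest-point search against the point cloud map, and points on highlight areas (stable static objects) dominate the fit through the larger weight a.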
2. The method of claim 1, wherein the highlight area information table comprises static objects and the areas where the static objects are located.
3. The method of claim 1, wherein the coarse pose pos satisfies:
pos = pos_0 + delt_pos, where delt_pos is the displacement output by the wheel speed meter and pos_0 is the initial pose.
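The coarse-pose update of claim 3 can be sketched as follows (a hedged illustration; the tuple layout (x, y, theta) and the heading wrap-around are assumptions beyond the claim's plain sum):

```python
import math

def coarse_pose(pos_0, delt_pos):
    """pos = pos_0 + delt_pos: add the wheel-odometer displacement
    (dx, dy, dtheta) to the initial pose pos_0 = (x, y, theta)."""
    x = pos_0[0] + delt_pos[0]
    y = pos_0[1] + delt_pos[1]
    # Wrap the heading into [-pi, pi) so repeated updates stay bounded.
    theta = (pos_0[2] + delt_pos[2] + math.pi) % (2 * math.pi) - math.pi
    return (x, y, theta)
```

This coarse pose only seeds the point-cloud registration of steps S3-S5, which then refines it against the map.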
5. A positioning device based on highlight areas, comprising the following units:
the map and highlight area loading unit is used for loading the point cloud map and the highlight area information table;
a coarse pose acquisition unit for acquiring a coarse pose provided by a robot wheel speed meter;
the system comprises a current position laser point cloud obtaining unit, a laser radar point cloud obtaining unit and a control unit, wherein the current position laser point cloud obtaining unit is used for obtaining laser radar point cloud data scanned by a robot at a current position; converting the laser radar point cloud data into a point cloud map coordinate system according to the rough pose to obtain converted point cloud data;
a weight distribution unit for traversing all point clouds in the converted point cloud data and providing each point of the converted point cloud dataCorrespondingly searching a closest pointFor each pointJudging whether the image falls into a highlight area, if the image is a highlight area point, weighing value w = a, and otherwise, weighing value w = b; wherein a is>b; obtaining N point cloud pairs which are found to all meet the conditions
The optimal pose obtaining unit is used for optimizing the following formula according to a Gauss-Newton iteration method:
and obtaining and finding the optimal pose, wherein R is a pose angle, and T is displacement.
6. The apparatus of claim 5, wherein the highlight area information table comprises static objects and the areas where the static objects are located.
7. The apparatus of claim 5, wherein the coarse pose pos satisfies:
pos = pos_0 + delt_pos, where delt_pos is the displacement output by the wheel speed meter and pos_0 is the initial pose.
9. A non-volatile memory having instructions stored thereon, wherein a processor, when executing the instructions, implements the method of any of claims 1-4.
10. A robot, comprising a central processor, a memory and a lidar, the memory having instructions stored thereon, wherein the central processor, when executing the instructions, implements the method of any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210901692.1A CN115267812A (en) | 2022-07-28 | 2022-07-28 | Positioning method, device, medium and robot based on highlight area |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115267812A true CN115267812A (en) | 2022-11-01 |
Family
ID=83771330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210901692.1A Pending CN115267812A (en) | 2022-07-28 | 2022-07-28 | Positioning method, device, medium and robot based on highlight area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115267812A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107991683A (en) * | 2017-11-08 | 2018-05-04 | 华中科技大学 | A kind of robot autonomous localization method based on laser radar |
CN110927740A (en) * | 2019-12-06 | 2020-03-27 | 合肥科大智能机器人技术有限公司 | Mobile robot positioning method |
CN111429574A (en) * | 2020-03-06 | 2020-07-17 | 上海交通大学 | Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion |
CN112612862A (en) * | 2020-12-24 | 2021-04-06 | 哈尔滨工业大学芜湖机器人产业技术研究院 | Grid map positioning method based on point cloud registration |
CN112882056A (en) * | 2021-01-15 | 2021-06-01 | 西安理工大学 | Mobile robot synchronous positioning and map construction method based on laser radar |
CN113050116A (en) * | 2021-03-05 | 2021-06-29 | 深圳市优必选科技股份有限公司 | Robot positioning method and device, robot and readable storage medium |
WO2021219023A1 (en) * | 2020-04-30 | 2021-11-04 | 北京猎户星空科技有限公司 | Positioning method and apparatus, electronic device, and storage medium |
CN113721248A (en) * | 2021-08-30 | 2021-11-30 | 浙江吉利控股集团有限公司 | Fusion positioning method and system based on multi-source heterogeneous sensor |
Non-Patent Citations (1)
Title |
---|
ZHANG WEIWEI; CHEN CHAO; XU JUN: "Localization and Mapping Method Fusing Laser and Visual Point Cloud Information", Computer Applications and Software (计算机应用与软件), no. 07, 12 July 2020 (2020-07-12), pages 114 - 119 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116972831A (en) * | 2023-09-25 | 2023-10-31 | 山东亚历山大智能科技有限公司 | Dynamic scene mobile robot positioning method and system based on salient features |
CN116972831B (en) * | 2023-09-25 | 2024-02-02 | 山东亚历山大智能科技有限公司 | Dynamic scene mobile robot positioning method and system based on salient features |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7358636B2 (en) | Method and system for real-time localization of autonomous vehicles | |
CN110758246B (en) | Automatic parking method and device | |
US9298992B2 (en) | Geographic feature-based localization with feature weighting | |
CN110119698B (en) | Method, apparatus, device and storage medium for determining object state | |
CN105973145B (en) | Mobile three-dimensional laser scanning system and mobile three-dimensional laser scanning method | |
CN110889808B (en) | Positioning method, device, equipment and storage medium | |
CN110221616A (en) | A kind of method, apparatus, equipment and medium that map generates | |
JP2017194527A (en) | Data structure of circumstance map, creation system of circumstance map and creation method, update system and update method of circumstance map | |
CN108931253A (en) | Air navigation aid, device, intelligently guiding vehicle and the medium of intelligently guiding vehicle | |
CN112652062B (en) | Point cloud map construction method, device, equipment and storage medium | |
CN112505671B (en) | Millimeter wave radar target positioning method and device under GNSS signal missing environment | |
JP2022020797A (en) | Self-location estimation device, control method, program, and storage medium | |
CN111339876A (en) | Method and device for identifying types of regions in scene | |
JP2020030200A (en) | System and method for locating vehicle using accuracy specification | |
CN115267812A (en) | Positioning method, device, medium and robot based on highlight area | |
JP2018116004A (en) | Data compression apparatus, control method, program and storage medium | |
CN113076824A (en) | Parking space acquisition method and device, vehicle-mounted terminal and storage medium | |
CN114115263B (en) | Autonomous mapping method and device for AGV, mobile robot and medium | |
CN113433566B (en) | Map construction system and map construction method | |
CN115248443A (en) | Map construction method, system, equipment and storage medium based on laser radar | |
CN113421306A (en) | Positioning initialization method, system and mobile tool | |
CN112115741B (en) | Parking garage position detection method and device | |
CN114694106A (en) | Extraction method and device of road detection area, computer equipment and storage medium | |
TWI843116B (en) | Moving object detection method, device, electronic device and storage medium | |
CN115269763B (en) | Local point cloud map updating and maintaining method and device, mobile tool and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||