CN115773759A - Indoor positioning method, device and equipment of autonomous mobile robot and storage medium - Google Patents


Info

Publication number
CN115773759A
Authority
CN
China
Prior art keywords
mobile robot
autonomous mobile
ceiling
standard
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211677413.4A
Other languages
Chinese (zh)
Inventor
汪顺利
闵伟
陈智超
袁士琳
丁浩
陈羽雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shangfei Intelligent Technology Co ltd
Original Assignee
Shanghai Aircraft Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aircraft Manufacturing Co Ltd filed Critical Shanghai Aircraft Manufacturing Co Ltd
Priority to CN202211677413.4A
Publication of CN115773759A
Legal status: Pending

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an indoor positioning method for an autonomous mobile robot, comprising the following steps: when it is determined that a vision-assisted positioning condition is met, controlling the autonomous mobile robot to acquire a captured ceiling image, and generating a position descriptor from the visual features detected in that image; searching a pre-established visual feature map for a target visual feature landmark matching the position descriptor; and positioning the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured. With this scheme, global positioning of the autonomous mobile robot can be achieved from a pre-established indoor ceiling map, the success rate and accuracy of indoor positioning are improved, and, because no positioning tags need to be deployed on the ceiling in advance, positioning cost is reduced.

Description

Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
Technical Field
The invention relates to the field of aircraft assembly and manufacturing, in particular to an indoor positioning method, device, equipment and storage medium for an autonomous mobile robot.
Background
An autonomous mobile robot is a comprehensive system integrating environment perception, dynamic decision and planning, behavior control, and execution. Unlike earlier generations of mobile robots, which required magnetic stripes or two-dimensional codes for positioning and navigation, the latest generation of autonomous mobile robots perceives its environment and makes decisions autonomously: it can dynamically plan paths according to site conditions and avoid obstacles on its own, placing it at the forefront of current mobile-robot technology.
In the prior art, autonomous mobile robots used indoors generally adopt one of two methods. 1) Single-line lidar mapping and navigation: navigation and positioning are based on a two-dimensional map generated by a single-line lidar SLAM algorithm, where the map describes obstacle information in the plane swept by the lidar. During positioning, the obstacle contours scanned by the lidar are compared with the map, and the robot's position is obtained through algorithms such as adaptive Monte Carlo localization. 2) Ceiling preset-tag method: positioning tags are installed on the ceiling in advance, and positioning is achieved by capturing images of the tags with the robot's camera.
In implementing the invention, the inventors found the following defects in the prior art. For method 1), because of the demands of indoor production and daily activity, the positions of the obstacles that serve as positioning landmarks near the robot's movement path often change, so the landmarks no longer match the map generated at scanning time; positioning based on the obstacle map then fails, and indoor positioning of the autonomous mobile robot cannot be achieved. For method 2), positioning tags with special patterns must be arranged on the ceiling in advance, incurring high installation and maintenance costs.
Disclosure of Invention
The invention provides an indoor positioning method, device, equipment and storage medium for an autonomous mobile robot, aiming to solve two problems with existing indoor positioning of autonomous mobile robots: positioning failure caused by changes in positioning landmarks, and the high installation and maintenance costs of the positioning system.
According to one aspect of the present invention, there is provided an indoor positioning method for an autonomous mobile robot, the method including:
when it is determined that a vision-assisted positioning condition is met, controlling the autonomous mobile robot to acquire a captured ceiling image, and generating a position descriptor from the visual features detected in the captured ceiling image;
searching a pre-established visual feature map for a target visual feature landmark matching the position descriptor;
wherein a visual feature landmark comprises an image feature descriptor and an image feature pose, the image feature descriptor being used for comparison with the position descriptor, and the image feature pose describing the standard position information and standard shooting attitude information of the autonomous mobile robot at the time the image feature descriptor was generated;
and positioning the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the autonomous mobile robot captured the ceiling image.
According to another aspect of the present invention, there is provided an indoor positioning device for an autonomous mobile robot, the device including:
a descriptor generation module, configured to, when it is determined that the vision-assisted positioning condition is met, control the autonomous mobile robot to acquire a captured ceiling image and generate a position descriptor from the visual features detected in the image;
a landmark searching module, configured to search a pre-established visual feature map for a target visual feature landmark matching the position descriptor;
wherein the visual feature landmark comprises an image feature descriptor and an image feature pose, the image feature descriptor being used for comparison with the position descriptor, and the image feature pose describing the standard position information and standard shooting attitude information of the autonomous mobile robot at the time the image feature descriptor was generated;
and a positioning module, configured to position the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for indoor positioning of an autonomous mobile robot according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement an indoor positioning method of an autonomous mobile robot according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiments of the invention, a captured ceiling image is obtained when the vision-assisted positioning condition is met, and the visual features in the image are detected to obtain a position descriptor; the visual feature map is then searched for the target visual feature landmark matching the position descriptor; finally, the autonomous mobile robot is positioned from the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured. This solves the problems of indoor positioning failure caused by changes in positioning landmarks and of the high installation and maintenance costs of the positioning system: global positioning of the autonomous mobile robot is achieved against the pre-established indoor ceiling map, the success rate and accuracy of indoor positioning are improved, and positioning cost is reduced.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an indoor positioning method of an autonomous mobile robot according to an embodiment of the present invention;
fig. 2 is a flowchart of an indoor positioning method for an autonomous mobile robot according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an indoor positioning device of an autonomous mobile robot according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device implementing an indoor positioning method of an autonomous mobile robot according to an embodiment of the present invention.
Detailed Description
To make the technical solutions of the present invention better understood, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of an indoor positioning method for an autonomous mobile robot according to embodiment one of the present invention. The embodiment is applicable when an autonomous mobile robot working in an indoor environment needs to be positioned. The method may be performed by an indoor positioning device of the autonomous mobile robot, which may be implemented in hardware and/or software and configured in an autonomous mobile robot with a positioning function. As shown in fig. 1, the method includes:
and S110, when the condition that the vision auxiliary positioning is met is determined, controlling the autonomous mobile robot to acquire the ceiling collected image, and generating a position descriptor according to the visual features detected in the ceiling collected image.
The autonomous mobile robot may be fitted with a binocular camera on top, retain the single-line lidar and Inertial Measurement Unit (IMU) required by the single-line lidar SLAM algorithm, and be capable of autonomous movement.
In this embodiment, the visual features may be architectural features of the indoor ceiling, such as pipelines or steel structures with image corners and angular vertices. Such features remain unchanged over long periods, so a visual map built on them is stable in the long term; this resolves the prior-art problem of indoor positioning failure caused by changes in positioning landmarks during positioning.
Optionally, the autonomous mobile robot is controlled to capture the ceiling image through a camera arranged on its top.
The camera may be a binocular camera; a binocular camera can measure the absolute distance to a target object and, as that distance changes, trigger any necessary early warning or braking for obstacles of any type.
Specifically, determining that the vision-assisted positioning condition is met includes:
determining that the condition is met when the lidar device of the autonomous mobile robot is in a positioning-failure state, or when the current indoor scene is determined to be one requiring vision-assisted positioning; and/or
detecting visual features in the captured ceiling image through at least one of a feature point detection algorithm, an accelerated robust feature algorithm, and a feature descriptor extraction algorithm.
The feature point detection algorithm may proceed as follows: generate an image scale space, detect local extreme points in the scale space, and then remove low-contrast points and edge-response points for accurate localization. The accelerated robust feature algorithm may be: process the image with successive Gaussian filters at different scales, detect scale-invariant feature points via differences of Gaussians, and localize them accurately. The feature descriptor extraction algorithm may be: localize feature points accurately using the Hamming distance computed by bitwise XOR between descriptor bits.
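For illustration only (the patent prescribes no library), the approaches described above correspond closely to SIFT and to binary-descriptor detectors such as ORB as exposed by OpenCV; a minimal detection sketch, with parameter values that are assumptions rather than prescriptions:

```python
# Illustrative detection sketch (OpenCV names; not part of the patent text).
# cv2.SIFT_create matches the scale-space / difference-of-Gaussians procedure
# with low-contrast and edge-response rejection described above; cv2.ORB_create
# yields binary descriptors compared by Hamming distance (bitwise XOR).
import cv2

def detect_ceiling_features(gray_image, method="SIFT"):
    """Detect candidate visual features in a grayscale ceiling image."""
    if method == "SIFT":
        detector = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10)
    else:
        detector = cv2.ORB_create(nfeatures=500)
    return detector.detect(gray_image, None)
```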
Optionally, according to the detected position of a visual feature in the captured ceiling image, a plurality of pixel points around the visual feature are extracted from the image;
and the position descriptor is generated from the visual feature and the plurality of pixel points.
In this embodiment, after a visual feature is detected in the captured ceiling image, the feature and the pixels around it are extracted and converted by the corresponding algorithm into a quantitative measure, which serves as the position descriptor of that visual feature. "Corresponding algorithm" means the algorithm used in the detection step: if the feature point detection algorithm was used to detect the visual features, the same feature point detection algorithm is used to compute the position descriptor.
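A minimal sketch of this step, again assuming OpenCV as the implementation vehicle: reusing one extractor object for detection and description honors the requirement above that the position descriptor be computed with the same algorithm that detected the feature.

```python
# Illustrative sketch, assumed API: compute() samples the pixels around each
# detected keypoint and encodes them as a fixed-length quantitative measure,
# one descriptor row per visual feature.
import cv2

def generate_position_descriptors(gray_image, method="SIFT"):
    extractor = cv2.SIFT_create() if method == "SIFT" else cv2.ORB_create()
    keypoints = extractor.detect(gray_image, None)
    keypoints, descriptors = extractor.compute(gray_image, keypoints)
    return keypoints, descriptors
```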
S120: search a pre-established visual feature map for a target visual feature landmark matching the position descriptor.
The visual feature map includes the captured ceiling images and the visual feature landmarks.
Each visual feature landmark comprises an image feature descriptor and an image feature pose: the image feature descriptor is compared against position descriptors, and the image feature pose describes the standard position information and standard shooting attitude information of the autonomous mobile robot at the time the image feature descriptor was generated.
In this embodiment, the autonomous mobile robot captures a ceiling image with its camera, detects visual features in it, extracts each feature together with its surrounding pixels, and computes a position descriptor. The position descriptor is matched against the image feature descriptors in the visual feature map; if the match succeeds, the visual feature landmark of the matched image feature descriptor is taken as the landmark of the ceiling image, and the image feature pose in that landmark is extracted as the current image feature pose of the autonomous mobile robot. An illustrative matching sketch is given below.
The image feature pose includes the attitude information of the autonomous mobile robot, the shooting angle information, and the camera's pixel information at the time the image feature descriptor was generated.
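The following matching sketch is illustrative only. `visual_feature_map` is an assumed structure (a list of descriptor-matrix/pose pairs), and the 0.75 Lowe ratio is a conventional choice, not a value from the patent:

```python
# Hypothetical matching sketch; not the patent's mandated implementation.
import cv2

def find_target_landmark(position_descriptors, visual_feature_map, ratio=0.75):
    # NORM_HAMMING suits binary (ORB/BRIEF-style) descriptors, matching the
    # Hamming-distance comparison described in the text.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    best_count, best_pose = 0, None
    for landmark_descriptors, image_feature_pose in visual_feature_map:
        pairs = matcher.knnMatch(position_descriptors, landmark_descriptors, k=2)
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_count:
            best_count, best_pose = len(good), image_feature_pose
    return best_pose   # None when no landmark in the map matches
```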
S130: position the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured.
Specifically, the target standard position information and target standard shooting attitude information included in the target image feature pose are obtained;
and the current position information of the autonomous mobile robot is calculated from the target standard position information, the target standard shooting attitude information, the current shooting attitude information, and the shooting parameters of the robot's camera.
In this embodiment, when it is determined that the vision-assisted positioning condition is met, the autonomous mobile robot captures a ceiling image, detects visual features in it, extracts each feature together with its surrounding pixels, and computes a descriptor. The descriptor is then searched and matched against the image feature descriptors of the visual feature landmarks in the visual feature map to obtain the matching landmark. Because each visual feature landmark is unique, a feature detected in the picture can be regarded as the projection, into the image, of that landmark in the map. Besides its visual feature descriptor, each landmark in the map also stores the three-dimensional spatial coordinates of the feature. Next, from the two-dimensional pixel coordinates of a visual feature in the left and right pictures of the binocular camera, the three-dimensional coordinates of the corresponding object feature in the camera coordinate system are calculated with a binocular disparity ranging algorithm. (For a monocular camera, two successive frames can be treated like the two camera coordinate systems of a binocular camera, and the pose transformation is computed by an analogous method.) Finally, from the three-dimensional coordinates of multiple feature landmarks in the world coordinate system and their coordinates in the camera coordinate system, the pose of the camera coordinate system in the world coordinate system is solved by coordinate transformation; applying the known transformation between the camera coordinate system and the robot coordinate system then yields the pose of the robot in the world coordinate system, completing the positioning.
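As a purely illustrative aid (not part of the patent text), the two geometric steps above can be sketched in NumPy. The function names, the pinhole intrinsics (fx, fy, cx, cy), and the stereo baseline are assumptions for the sketch; the 3D-3D pose solve uses the standard Kabsch algorithm, one common choice the patent leaves open:

```python
# Illustrative geometry sketch: binocular disparity ranging followed by a
# 3D-3D pose solve via the Kabsch algorithm (SVD of the point covariance).
import numpy as np

def disparity_to_camera_point(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Depth from stereo disparity, Z = fx * B / (uL - uR), then back-project."""
    z = fx * baseline / (u_left - u_right)
    return np.array([(u_left - cx) * z / fx, (v - cy) * z / fy, z])

def solve_camera_pose(points_world, points_camera):
    """Find R, t with points_world ~ R @ points_camera + t.
    Both inputs are N x 3 arrays of corresponding landmark coordinates."""
    cw, cc = points_world.mean(axis=0), points_camera.mean(axis=0)
    H = (points_camera - cc).T @ (points_world - cw)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cw - R @ cc
    return R, t                     # camera pose in the world frame
```

A 2D-3D variant could equally use a PnP solver; the patent text does not fix the exact solver.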
In summary, this embodiment obtains a captured ceiling image when the vision-assisted positioning condition is met, detects its visual features to obtain a position descriptor, searches the visual feature map for the matching target visual feature landmark, and positions the autonomous mobile robot from that landmark and the current shooting attitude information. It thereby avoids positioning failure caused by changed landmarks, dispenses with the costly installation and maintenance of ceiling tags, performs global positioning against the pre-established indoor ceiling map, and improves the success rate and accuracy of indoor positioning while reducing its cost.
Example two
Fig. 2 is a flowchart of an indoor positioning method for an autonomous mobile robot according to embodiment two of the present invention. This embodiment supplements embodiment one: before searching the pre-established visual feature map for a target visual feature landmark matching the position descriptor, the method further includes: controlling the autonomous mobile robot to move indoors while synchronously acquiring lidar data, standard ceiling images, and the standard shooting attitude information corresponding to each standard ceiling image; determining, from the lidar data, the standard position information corresponding to each of a plurality of laser key frames; screening out, by the timestamp of each laser key frame and the shooting time of each standard ceiling image, the standard ceiling image corresponding to each piece of standard position information; constructing a plurality of visual feature landmarks from the standard ceiling images corresponding to the standard position information and the standard shooting attitude information corresponding to each image; and adding the visual feature landmarks to the visual feature map.
Accordingly, as shown in fig. 2, the method comprises:
s210, controlling the autonomous mobile robot to move indoors, and synchronously acquiring laser radar data, standard ceiling images and standard shooting attitude information corresponding to each standard ceiling image in the moving process.
The lidar data may be laser key frame data, laser key frame acquisition time stamp, and related data required by the algorithm for generating the location descriptor in S260.
The size and pixels of the standard ceiling image can be determined by shooting parameters of a camera and are manually set.
S220: determine, from the lidar data, the standard position information corresponding to each of the plurality of laser key frames.
In this embodiment, on the basis of S210, the lidar SLAM algorithm processes the laser key frames to obtain the robot's pose at the moment of each key frame timestamp; the timestamp of the key frame is recorded and, together with that pose, forms the standard position information corresponding to the key frame.
S230: screen out the standard ceiling image corresponding to each piece of standard position information, according to the timestamps of the laser key frames and the shooting times of the standard ceiling images.
In this embodiment, on the basis of S220, each laser key frame timestamp is matched to the group of ceiling images whose shooting times fall within a preset interval of it; the interval can be set and adjusted manually, for example 0.1 second. The matched key frames then supply the standard position information, and matching the standard position information with the ceiling images via the key frame timestamps yields the standard ceiling image corresponding to each piece of standard position information. A sketch of this pairing step follows.
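A small sketch under stated assumptions (records sorted by time; the 0.1 s gap is the example interval above; data shapes are illustrative, not from the patent):

```python
# Illustrative pairing sketch: each laser key frame is paired with the
# standard ceiling image captured closest in time, and the pair is kept
# only if the time gap is within the preset interval.
def pair_keyframes_with_images(keyframes, images, max_gap=0.1):
    """keyframes: list of (timestamp, standard_position_info);
    images: non-empty list of (timestamp, ceiling_image); gap in seconds."""
    pairs = []
    for kf_time, position_info in keyframes:
        img_time, image = min(images, key=lambda rec: abs(rec[0] - kf_time))
        if abs(img_time - kf_time) <= max_gap:
            pairs.append((position_info, image))
    return pairs
```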
S240: construct a plurality of visual feature landmarks from the standard ceiling images corresponding to the standard position information and the standard shooting attitude information corresponding to each standard ceiling image.
Optionally, mappings between the pieces of standard position information and the standard shooting attitude information are constructed, based on the standard position information and the standard shooting attitude information corresponding to each standard ceiling image, and these mappings serve as the visual feature landmarks.
In this embodiment, specifically, on the basis of S230, visual feature detection is performed on the standard ceiling image matched with each piece of standard position information; the detection may use at least one of a feature point detection algorithm, an accelerated robust feature algorithm, and a feature descriptor extraction algorithm. After a visual feature is detected, the feature and its surrounding pixels are extracted together and an image feature descriptor is computed; the descriptor is then associated with the robot's standard position information, and on success the mapping of the corresponding standard position information and standard shooting attitude information is taken as the visual feature landmark of that ceiling image.
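As an illustrative data-structure sketch (field and function names are assumptions, not from the patent), each visual feature landmark can couple an image feature descriptor with its standard position and standard shooting attitude information:

```python
# Hypothetical sketch of the landmark record and map construction.
from dataclasses import dataclass
import numpy as np

@dataclass
class VisualFeatureLandmark:
    descriptor: np.ndarray          # image feature descriptor (compared with position descriptors)
    standard_position: np.ndarray   # standard position information from the laser key frame
    shooting_attitude: np.ndarray   # standard shooting attitude information

def build_visual_feature_map(records, extract_descriptors):
    """records: (standard_position, shooting_attitude, standard_image) triples,
    e.g. derived from the pairing step of S230; extract_descriptors returns
    the image feature descriptors of one standard ceiling image."""
    visual_feature_map = []
    for position, attitude, image in records:
        for descriptor in extract_descriptors(image):
            visual_feature_map.append(
                VisualFeatureLandmark(descriptor, position, attitude))
    return visual_feature_map
```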
S250: add the visual feature landmarks to the visual feature map.
The visual feature map comprises the groups of visual feature landmarks and the map of the current indoor ceiling.
S260: when it is determined that the vision-assisted positioning condition is met, control the autonomous mobile robot to acquire a captured ceiling image, and generate a position descriptor from the visual features detected in the image.
S270: search the pre-established visual feature map for a target visual feature landmark matching the position descriptor.
The visual feature landmark comprises an image feature descriptor and an image feature pose; the image feature descriptor is compared with the position descriptor, and the image feature pose describes the standard position information and standard shooting attitude information of the autonomous mobile robot at the time the image feature descriptor was generated.
S280: position the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured.
According to the technical scheme of this embodiment, a captured ceiling image is obtained when the vision-assisted positioning condition is met, and its visual features are detected to obtain a position descriptor; by moving the robot indoors and collecting data, a plurality of visual feature landmarks are constructed and added to the visual feature map, completing its construction; the map is then searched for the target visual feature landmark matching the position descriptor; and finally, from the target landmark and the current shooting attitude information recorded at capture time, global positioning of the autonomous mobile robot is achieved against the pre-established indoor ceiling map, improving the success rate and accuracy of indoor positioning and reducing positioning cost.
Example three
Fig. 3 is a schematic structural diagram of an indoor positioning device of an autonomous mobile robot according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes:
and a descriptor generation module 310, configured to control the autonomous mobile robot to acquire the ceiling captured image and generate a location descriptor according to the visual feature detected in the ceiling captured image when it is determined that the visual auxiliary positioning condition is satisfied.
And a landmark searching module 320, configured to search for a target visual feature landmark matching the location descriptor in a pre-established visual feature map.
The visual feature road sign comprises an image feature descriptor and an image feature pose, the image feature descriptor is used for being compared with the position descriptor, and the image feature pose is used for describing standard position information and standard shooting posture information of the autonomous mobile robot when the image feature descriptor is generated;
and the positioning module 330 is configured to position the autonomous mobile robot according to the feature pose of the target image in the target visual feature landmark and the current shooting pose information when the autonomous mobile robot acquires an image from a ceiling.
According to this technical scheme, a captured ceiling image is obtained when the vision-assisted positioning condition is met, its visual features are detected to obtain a position descriptor, the visual feature map is searched for the target visual feature landmark matching the descriptor, and the autonomous mobile robot is finally positioned from the target landmark and the current shooting attitude information recorded at capture time. Global positioning is thus performed against the pre-established indoor ceiling map, improving the success rate and accuracy of indoor positioning while reducing positioning cost.
On the basis of the foregoing embodiments, the descriptor generation module 310 may include:
an image acquisition unit, configured to control the autonomous mobile robot to capture ceiling images through a camera arranged on its top.
On the basis of the foregoing embodiments, the descriptor generation module 310 may further include:
a feature extraction unit, configured to extract a plurality of pixel points around a visual feature from the captured ceiling image according to the detected position of the visual feature in the image;
and a descriptor acquisition unit, configured to generate the position descriptor from the visual feature and the plurality of pixel points.
On the basis of the above embodiments, the landmark searching module 320 may include:
an indoor information acquisition unit, configured to control the autonomous mobile robot to move indoors and synchronously acquire lidar data, standard ceiling images, and the standard shooting attitude information corresponding to each standard ceiling image during the movement;
a standard position information determining unit, configured to determine, from the lidar data, the standard position information corresponding to each of the plurality of laser key frames;
an image screening unit, configured to screen out the standard ceiling image corresponding to each piece of standard position information according to the timestamps of the laser key frames and the shooting times of the standard ceiling images;
a visual feature landmark construction unit, configured to construct a plurality of visual feature landmarks from the standard ceiling images corresponding to the standard position information and the standard shooting attitude information corresponding to each standard ceiling image;
and a visual feature map establishing unit, configured to add the visual feature landmarks to the visual feature map.
On the basis of the above embodiments, the image screening unit may include:
a mapping construction unit, configured to construct, based on the standard position information and the standard shooting attitude information corresponding to each standard ceiling image, mappings between the pieces of standard position information and the standard shooting attitude information as the plurality of visual feature landmarks.
On the basis of the above embodiments, the positioning module 330 may include:
an attitude information acquisition unit, configured to acquire the target standard position information and target standard shooting attitude information included in the target image feature pose;
and a position information calculating unit, configured to calculate the current position information of the autonomous mobile robot from the target standard position information, the target standard shooting attitude information, the current shooting attitude information, and the shooting parameters of the robot's camera.
On the basis of the foregoing embodiments, the descriptor generation module 310 may further include:
a visual feature detection unit, configured to determine that the vision-assisted positioning condition is met when the lidar device of the autonomous mobile robot is in a positioning-failure state, or when the current indoor scene is determined to be one requiring vision-assisted positioning; and/or
to detect visual features in the captured ceiling image through at least one of a feature point detection algorithm, an accelerated robust feature algorithm, and a feature descriptor extraction algorithm.
The indoor positioning device of the autonomous mobile robot provided by this embodiment of the invention can execute the indoor positioning method provided by any embodiment of the invention, and possesses the functional modules and beneficial effects corresponding to that method.
Example four
FIG. 4 shows a schematic block diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11 and memory communicatively connected to it, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the processor; the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store the programs and data needed for operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14; an input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as the indoor positioning method of the autonomous mobile robot.
Specifically, the method comprises the following steps:
when it is determined that the vision-assisted positioning condition is met, controlling the autonomous mobile robot to acquire a captured ceiling image, and generating a position descriptor from the visual features detected in the image;
searching a pre-established visual feature map for a target visual feature landmark matching the position descriptor;
wherein the visual feature landmark comprises an image feature descriptor and an image feature pose, the image feature descriptor being used for comparison with the position descriptor, and the image feature pose describing the standard position information and standard shooting attitude information of the autonomous mobile robot at the time the image feature descriptor was generated;
and positioning the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the ceiling image was captured.
In some embodiments, the indoor positioning method of the autonomous mobile robot may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM12 and/or the communication unit 19. When the computer program is loaded into the RAM13 and executed by the processor 11, one or more steps of the above described indoor positioning method of the autonomous mobile robot may be performed. Alternatively, in other embodiments, the processor 11 may be configured by any other suitable means (e.g. by means of firmware) to perform the indoor positioning method of the autonomous mobile robot.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Computer programs for implementing the methods of the present invention can be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network; the client-server relationship arises from computer programs running on the respective computers. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that remedies the defects of difficult management and weak service expansibility in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An indoor positioning method of an autonomous mobile robot, comprising:
when it is determined that a vision-assisted positioning condition is met, controlling the autonomous mobile robot to acquire a captured ceiling image, and generating a position descriptor from visual features detected in the captured ceiling image;
searching a pre-established visual feature map for a target visual feature landmark matching the position descriptor;
wherein the visual feature landmark comprises an image feature descriptor and an image feature pose, the image feature descriptor being used for comparison with the position descriptor, and the image feature pose describing standard position information and standard shooting attitude information of the autonomous mobile robot when the image feature descriptor was generated;
and positioning the autonomous mobile robot according to a target image feature pose in the target visual feature landmark and current shooting attitude information recorded when the autonomous mobile robot acquired the captured ceiling image.
2. The method of claim 1, wherein controlling the autonomous mobile robot to acquire the captured ceiling image comprises:
controlling the autonomous mobile robot to capture the ceiling image through a camera arranged on its top.
3. The method of claim 1, wherein generating the position descriptor from the visual features detected in the captured ceiling image comprises:
extracting, from the captured ceiling image, a plurality of pixel points around a visual feature according to the detected position of the visual feature in the image;
and generating the position descriptor from the visual feature and the plurality of pixel points.
4. The method of any of claims 1-3, further comprising, before searching the pre-established visual feature map for the target visual feature landmark matching the position descriptor:
controlling the autonomous mobile robot to move indoors, and synchronously acquiring lidar data, standard ceiling images, and standard shooting attitude information corresponding to each standard ceiling image during the movement;
determining, from the lidar data, standard position information corresponding to each of a plurality of laser key frames;
screening out the standard ceiling image corresponding to each piece of standard position information according to the timestamp of each laser key frame and the shooting time of each standard ceiling image;
constructing a plurality of visual feature landmarks from the standard ceiling images corresponding to the standard position information and the standard shooting attitude information corresponding to each standard ceiling image;
and adding the plurality of visual feature landmarks to the visual feature map.
5. The method of claim 4, wherein constructing the plurality of visual feature landmarks from the standard ceiling images corresponding to the standard position information and the standard shooting attitude information corresponding to each standard ceiling image comprises:
constructing, based on the standard position information and the standard shooting attitude information corresponding to each standard ceiling image, mappings between the pieces of standard position information and the standard shooting attitude information, as the plurality of visual feature landmarks.
6. The method of claim 1, wherein positioning the autonomous mobile robot according to the target image feature pose in the target visual feature landmark and the current shooting attitude information recorded when the captured ceiling image was acquired comprises:
acquiring target standard position information and target standard shooting attitude information included in the target image feature pose;
and calculating current position information of the autonomous mobile robot from the target standard position information, the target standard shooting attitude information, the current shooting attitude information, and shooting parameters of a camera of the autonomous mobile robot.
7. The method of claim 1, wherein determining that the vision-assisted positioning condition is met comprises:
determining that the vision-assisted positioning condition is met when a lidar device of the autonomous mobile robot is in a positioning-failure state, or when the current indoor scene is determined to be an indoor scene requiring vision-assisted positioning; and/or
detecting visual features in the captured ceiling image through at least one of a feature point detection algorithm, an accelerated robust feature algorithm, and a feature descriptor extraction algorithm.
8. An indoor positioning device of an autonomous mobile robot, comprising:
a descriptor generation module, configured to, when it is determined that a vision-assisted positioning condition is met, control the autonomous mobile robot to acquire a captured ceiling image and generate a position descriptor from visual features detected in the image;
a landmark searching module, configured to search a pre-established visual feature map for a target visual feature landmark matching the position descriptor;
wherein the visual feature landmark comprises an image feature descriptor and an image feature pose, the image feature descriptor being used for comparison with the position descriptor, and the image feature pose describing standard position information and standard shooting attitude information of the autonomous mobile robot when the image feature descriptor was generated;
and a positioning module, configured to position the autonomous mobile robot according to a target image feature pose in the target visual feature landmark and current shooting attitude information recorded when the autonomous mobile robot acquired the captured ceiling image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the indoor positioning method of an autonomous mobile robot of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform the method for indoor positioning of an autonomous mobile robot of any of claims 1-7 when executed.
CN202211677413.4A 2022-12-26 2022-12-26 Indoor positioning method, device and equipment of autonomous mobile robot and storage medium Pending CN115773759A (en)

Priority Applications (1)

Application CN202211677413.4A — priority date: 2022-12-26; filing date: 2022-12-26 — Indoor positioning method, device and equipment of autonomous mobile robot and storage medium


Publications (1)

CN115773759A — published 2023-03-10

Family

ID=85392900

Family Applications (1)

CN202211677413.4A (CN115773759A, pending) — Indoor positioning method, device and equipment of autonomous mobile robot and storage medium

Country Status (1)

CN: CN115773759A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 2023-12-04
Address after: Room 712, South, No. 69 Zhangjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 2012
Applicant after: Shangfei Intelligent Technology Co.,Ltd.
Address before: 919 Shangfei Road, Pudong New Area, Shanghai, 201324
Applicant before: SHANGHAI AIRCRAFT MANUFACTURING Co.,Ltd.