CN110561416B - Laser radar repositioning method and robot


Info

Publication number: CN110561416B (granted); application number CN201910707269.6A; also published as CN110561416A
Authority: CN (China)
Legal status: Active (granted)
Inventors: 叶力荣, 任娟娟, 张国栋, 闫瑞君
Original assignee: Shenzhen Silver Star Intelligent Technology Co Ltd
Current assignee: Shenzhen Silver Star Intelligent Group Co Ltd
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd
Priority: CN201910707269.6A
Other languages: Chinese (zh)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems


Abstract

The application belongs to the technical field of computer applications and provides a laser radar repositioning method and a robot. The method comprises: comparing a real-time image frame shot in real time with the key frames in a preset key frame library to determine a target key frame matched with the real-time image frame, and determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames. This avoids inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone, and improves the accuracy and reliability of robot positioning.

Description

Laser radar repositioning method and robot
Technical Field
The application belongs to the technical field of computer application, and particularly relates to a laser radar repositioning method and a robot.
Background
When a robot is positioned, there is often no clear reference object, which makes the robot difficult to localize. In the prior art, the robot is positioned by laser simultaneous localization and mapping (SLAM). SLAM requires a large number of map samples as data support to be matched against the map information acquired in real time in order to determine the robot's real-time position, and when little map information has been acquired, the robot positioning tends to be inaccurate.
Disclosure of Invention
The embodiments of the application provide a laser radar repositioning method and a robot, which can solve the problem of inaccurate robot positioning in the prior art.
In a first aspect, an embodiment of the present application provides a laser radar relocation method, including:
acquiring a real-time image frame shot by a camera; identifying, from key frames in a preset key frame library, a key frame matched with the real-time image frame as a target key frame; and determining target position information corresponding to the target key frame according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as the real-time position information of the robot.
It should be understood that, in this embodiment, the real-time image frame shot in real time is compared with the key frames in the preset key frame library to determine the target key frame matched with the real-time image frame, and the real-time position information corresponding to the target key frame is then determined according to the correspondence, prestored in the key frame library, between position information and key frames. This avoids inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone, and improves the accuracy and reliability of robot positioning.
In a second aspect, an embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer program:
acquiring a real-time image frame shot by a camera;
identifying a key frame matched with the real-time image frame as a target key frame from key frames in a preset key frame library;
and determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot.
In a third aspect, an embodiment of the present application provides a robot, including:
the acquisition unit is used for acquiring real-time image frames shot by the camera;
the identification unit is used for identifying key frames matched with the real-time image frames from key frames in a preset key frame library as target key frames;
and the positioning unit is used for determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library and identifying the target position information as the real-time position information of the robot.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the laser radar relocation method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the application have the following advantages: a real-time image frame shot by a camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a flowchart of a laser radar relocation method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a laser radar relocation method according to a second embodiment of the present application;
FIG. 3 is a schematic view of a robot provided in the third embodiment of the present application;
fig. 4 is a schematic view of a robot according to the fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Referring to fig. 1, fig. 1 is a flowchart of a laser radar relocation method according to an embodiment of the present disclosure. The execution subject of the laser radar relocation method in this embodiment is a robot. The lidar relocation method as shown in the figure may comprise the steps of:
S101: And acquiring a real-time image frame shot by a camera.
When a robot is positioned, there is often no clear reference object, which makes the robot difficult to localize. In the prior art, the robot is positioned by laser simultaneous localization and mapping (SLAM), which requires a large number of map samples as data support to be matched against the map information acquired in real time in order to determine the robot's real-time position. In SLAM, the robot is placed at an unknown position in an unknown environment and is expected, while moving, to gradually build a complete map of that environment, that is, a map covering every position or corner of the room the robot can reach unimpeded. However, when the robot's own position changes during this process, building the map from its previous position becomes problematic.
The robot in this embodiment is provided with an accessory having a camera function; when the robot moves to a certain position, it captures a corresponding image or video there, so that its current position can be determined with the image or video as a reference.
It should be noted that, in this embodiment, the robot may acquire an image or a video, where when the acquired data is an image, the image is a real-time image frame, and when the acquired data is a video, the image frame in the video is extracted as the real-time image frame. The real-time image frame in the embodiment may include an object in any area of the environment where the robot is located, such as a floor, a sofa, a wall, and the like, which is not limited herein.
For example, the robot in this embodiment may be a sweeping robot, referred to as a sweeper for short. The camera device installed on the sweeper can acquire the sweeper's current real-time image frame at any time, so that the current position of the sweeper can be determined from the real-time image frame and it can be known which areas have been swept and which have not.
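As a rough, non-authoritative illustration of step S101, the following Python sketch grabs one frame from the camera to serve as the real-time image frame. It assumes OpenCV is available; the function name grab_realtime_frame and the camera index are illustrative choices, not part of the patent.

    import cv2

    def grab_realtime_frame(camera_index=0):
        # Read a single frame from the camera; this frame is used as the
        # real-time image frame. When the source is a video, each decoded
        # frame of the video stream can be treated the same way.
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera returned no frame")
        return frame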
S102: and identifying the key frame matched with the real-time image frame as a target key frame from key frames in a preset key frame library.
In this embodiment, a key frame library is preset. The key frame library includes a plurality of key frames, each key frame has its corresponding position information, and the key frames and their corresponding position information form a one-to-one correspondence, that is, the position information recorded when a key frame was shot can be determined from that key frame, so that the currently shot real-time image frame can be used to determine the current accurate position of the robot. The key frame library stores the sweeper position together with the key frame serial number: each time a key frame is saved into the library, the sweeper position corresponding to that key frame is also saved, and this position can be given by the laser SLAM algorithm.
After the real-time image frame is acquired, a target key frame matched with the real-time image frame is determined from the key frames in the preset key frame library. The specific matching method is as follows:
further, step S102 includes:
S1021: And extracting a first characteristic point of the key frame and a second characteristic point of the real-time image frame.
In this embodiment, both the key frame and the real-time image frame are composed of pixel points. Feature points are extracted from each of them, namely the first feature points from the key frame and the second feature points from the real-time image frame, and the number of feature points to extract is determined; for convenience of matching, the same number is extracted from both. A feature point in this embodiment refers to a point where the image gray value changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges). Matching of the images can be completed through matching of the feature points.
Optionally, when feature extraction is performed, a linear projection analysis method may be used to find a linear transformation according to a certain performance target and compress the original signal data into a low-dimensional subspace. The distribution of the data in the subspace is then more compact, which provides a means for describing the data better and greatly reduces the computational complexity. Optionally, the features may be extracted with the Oriented FAST and Rotated BRIEF (ORB) algorithm for fast feature point extraction and description; the features extracted by the ORB algorithm are the feature points, which is not specifically limited herein.
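Purely as a sketch of this step, the snippet below extracts ORB feature points and descriptors with OpenCV. The patent does not prescribe a particular library, and the function and variable names here are assumptions.

    import cv2

    orb = cv2.ORB_create(nfeatures=500)  # Oriented FAST and Rotated BRIEF

    def extract_feature_points(image_bgr):
        # Convert to grayscale, then detect ORB key points and compute
        # their binary descriptors in one call.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        return keypoints, descriptors

    # First feature points come from the key frame, second feature points
    # from the real-time image frame, e.g.:
    # kp1, des1 = extract_feature_points(key_frame_image)
    # kp2, des2 = extract_feature_points(realtime_frame)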
S1022: and calculating the similarity between the pixel value of the first characteristic point and the pixel value of the second characteristic point to obtain the coincidence degree between the key frame and the real-time image frame.
After the first feature points of the key frame and the second feature points of the real-time image frame are obtained, the similarity between the pixel values of the first feature points and the pixel values of the second feature points is calculated and used as the coincidence degree between the key frame and the real-time image frame.
Specifically, the similarity may be calculated by determining the pixel value of the first feature point and the pixel value of the second feature point and judging whether they are the same. If they are the same, the two feature points are regarded as coincident; if not, the difference between the two pixel values is calculated, so that the similarity between the two feature points can be measured through this difference. This avoids the matching between the key frame and the real-time image frame being affected by changes of image color depth or contrast caused by, for example, changes in illumination intensity.
After the similarity between the feature points is calculated, the feature points with the same pixel value, that is, the coincident feature points, are determined, their number is counted, and the proportion of this number to the total number of feature points is calculated as the coincidence degree between the key frame and the real-time image frame.
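One possible way to compute such a coincidence degree is sketched below, continuing the ORB sketch above. Hamming-distance matching of the binary descriptors stands in for the raw pixel-value comparison, and max_distance is an assumed threshold rather than a value taken from the patent.

    import cv2

    def coincidence_degree(des_key, des_live, max_distance=30):
        # Brute-force Hamming matching with cross-checking; two descriptors
        # closer than max_distance are counted as the "same" feature point.
        if des_key is None or des_live is None or len(des_key) == 0 or len(des_live) == 0:
            return 0.0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        coincident = [m for m in matcher.match(des_key, des_live)
                      if m.distance <= max_distance]
        # Ratio of coincident feature points to the total number of points.
        return len(coincident) / min(len(des_key), len(des_live))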
S1023: and if the coincidence degree is greater than or equal to a preset coincidence degree threshold value, identifying the key frame corresponding to the coincidence degree as the target key frame.
In this embodiment, a coincidence degree threshold is preset and used to measure the matching degree between the key frame and the real-time image frame. After the coincidence degree is determined, if it is greater than or equal to the preset coincidence degree threshold, it is determined that the matching degree between the key frame and the real-time image frame is high, and the key frame corresponding to the coincidence degree is identified as the target key frame. Further, if the coincidence degree is smaller than the preset coincidence degree threshold, it is determined that repositioning of the robot has failed.
Further, in this embodiment, the coincidence degree may be obtained by matching the real-time image frame with two adjacent key frames in the key frame library, that is, the coincidence degree between the real-time image frame and each of two adjacent key frames is calculated, so as to ensure the accuracy of frame matching and determine the corresponding position information through the two key frames. For example, if, after the real-time image frame is matched with any two adjacent frames in the key frame library, the number of coincident feature points is greater than 50% of the total number of feature points, the matching is considered successful, that is, the key frame corresponding to that coincidence degree is the target key frame.
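The relocation decision described in this example might then look like the sketch below, reusing coincidence_degree from above. The key frame library is assumed to be a list of entries holding each key frame's descriptors, serial number and sweeper position, matching the Position bookkeeping sketched in the next step.

    def match_against_keyframe_library(des_live, keyframe_library, threshold=0.5):
        # Walk over adjacent key-frame pairs; relocation succeeds when the
        # coincidence degree between the real-time frame and both frames of
        # a pair reaches the threshold (50% of the feature points here).
        for prev_kf, next_kf in zip(keyframe_library, keyframe_library[1:]):
            c_prev = coincidence_degree(prev_kf["descriptors"], des_live)
            c_next = coincidence_degree(next_kf["descriptors"], des_live)
            if c_prev >= threshold and c_next >= threshold:
                return next_kf  # target key frame
        return None  # no adjacent pair matched: repositioning failed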
S103: and determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot.
In the key frame library of this embodiment, the position information corresponds to the key frame identifier of each key frame, that is, each key frame stored in the key frame library has preset position information corresponding to it, and the key frames and their corresponding position information form a one-to-one correspondence. In other words, the position information recorded when a key frame was shot can be determined from that key frame, so that the currently shot real-time image frame can be used to determine the current accurate position of the robot.
For example, in this embodiment the robot may confirm the current position of the sweeper through laser SLAM, and this position corresponds one-to-one with the serial number of the key frame shot by the camera, so the current position of the sweeper is recorded each time a key frame is stored. The correspondence between position information and key frames in this embodiment may be expressed as Position (sweeper position, key frame number); storing this correspondence for all key frames, namely Position1 (sweeper position 1, key frame number 1), Position2 (sweeper position 2, key frame number 2), ..., Positionn (sweeper position n, key frame number n), forms the key frame library. When the real-time image frame is successfully matched with two adjacent frames of the key frame library, the current position of the sweeper can be determined from the sweeper position stored in the key frame library.
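A minimal sketch of such a key frame library follows; the field names are assumptions, and each entry pairs the pose given by laser SLAM with the key frame serial number plus the descriptors kept for later matching.

    keyframe_library = []  # one entry per Position(sweeper position, key frame number)

    def save_keyframe(sweeper_position, keyframe_number, descriptors):
        # Record the sweeper position reported by the laser SLAM module at the
        # moment the key frame was shot, together with the key frame's serial
        # number (its identifier) and the ORB descriptors used for matching.
        keyframe_library.append({
            "position": sweeper_position,        # e.g. (x, y, theta) from laser SLAM
            "keyframe_number": keyframe_number,  # key frame identifier / serial number
            "descriptors": descriptors,
        })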
After the target key frame is determined, the position information corresponding to the target key frame is identified as the real-time position information of the robot according to the correspondence, preset in the key frame library, between key frames and position information. Specifically, when the sweeper arrives at a new position, the real-time image frame is matched against the key frame library, and if more than 50% of its feature points match those of two adjacent frames in the key frame library, the match is considered successful. If relocation succeeds, the real-time position information of the sweeper is determined from the stored Position (sweeper position, key frame serial number); otherwise relocation fails.
Further, step S103 includes:
searching a target key frame identifier corresponding to the target key frame in the key frame identifier;
determining target position information corresponding to the target key frame identification according to preset position information corresponding to each key frame identification stored in the key frame library;
identifying the target position information as real-time position information of the robot.
Specifically, each key frame in the key frame library in this embodiment has a key frame identifier corresponding to it, so that the key frame can be queried through its identifier. For example, the correspondence between all position information and key frames is stored as Position1 (sweeper position 1, key frame number 1), Position2 (sweeper position 2, key frame number 2), ..., Positionn (sweeper position n, key frame number n) to form the key frame library, where the key frame identifiers are key frame number 1, key frame number 2, ..., key frame number n. After the target key frame is determined, the target key frame identifier is determined from it, the position information corresponding to the target key frame identifier is determined according to the correspondence prestored in the key frame library, and finally this position information is identified as the real-time position information of the robot.
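Looking up the target position from the target key frame identifier can then be as simple as the following sketch, which continues the library layout assumed above.

    def lookup_target_position(keyframe_library, target_keyframe_number):
        # Find the entry whose key frame identifier equals the target key frame
        # identifier and return its pre-stored position, which is then taken as
        # the robot's real-time position.
        for entry in keyframe_library:
            if entry["keyframe_number"] == target_keyframe_number:
                return entry["position"]
        return None  # identifier not present in the key frame library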
According to the above scheme, a real-time image frame shot by the camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
Referring to fig. 2, fig. 2 is a flowchart of a laser radar relocation method according to a second embodiment of the present application. The execution subject of the laser radar relocation method in this embodiment is a robot. The lidar relocation method as shown in the figure may comprise the steps of:
S201: And acquiring a real-time image frame shot by a camera.
The robot in this embodiment is provided with an accessory having a camera function; when the robot moves to a certain position, it captures a corresponding image or video there, so that its current position can be determined with the image or video as a reference. It should be noted that the robot in this embodiment may acquire an image or a video: when the acquired data is an image, the image is the real-time image frame, and when the acquired data is a video, an image frame in the video is extracted as the real-time image frame. The real-time image frame in this embodiment may contain objects in any area of the environment where the robot is located, such as a floor, a sofa, a wall, and the like, which is not limited herein.
S202: acquiring a key frame and corresponding preset position information thereof; the preset position information is determined by the position of the robot when shooting the key frame.
In this embodiment, a key frame library is preset. The key frame library includes a plurality of key frames, each key frame has corresponding preset position information, and the key frames and their corresponding preset position information form a one-to-one correspondence, that is, the position information recorded when a key frame was shot can be determined from that key frame, so that the currently shot real-time image frame can be used to determine the current accurate position of the robot.
Illustratively, a key frame in the key frame library and its corresponding preset position information may be represented by Position (sweeper position, key frame number), where the sweeper position represents the preset position information and the key frame number represents the key frame identifier. Each time a key frame is extracted, the sweeper position is also stored; this position can be given by the laser SLAM algorithm.
Further, step S202 includes:
acquiring position information of positions where the robot can pass, and acquiring image frames corresponding to the positions;
calculating the contact ratio between corresponding image frames at adjacent positions;
and identifying the adjacent image frames with the coincidence degrees within a preset coincidence degree interval as the key frames.
Specifically, the key frames in this embodiment are extracted from the images of a video stream. Since the video stream is ordered, the extracted key frames are ordered as well. When key frames are extracted, the first image is taken as the first key frame, and a subsequent video image is taken as the next key frame when the overlap between its feature points and those of the preceding key frame is 45%-55%, and so on. Each time a key frame is saved, the sweeper position and the key frame serial number are recorded together, so that the sweeper position can be returned once a real-time image frame is determined to match a point covered by the key frames.
For example, the number of features detected by ORB from a captured image must be at least greater than a preset feature number, for example 500 (the number may be determined according to actual conditions), for the frame to be extracted as a key frame. A subsequent frame can be extracted as the next key frame only if, after matching with the adjacent key frame, the number of matched features satisfies a preset condition, for example the matched features account for 45% to 55% of the total number of features; otherwise it is not extracted as a key frame.
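Continuing the earlier sketches (coincidence_degree and save_keyframe), the key frame selection rule of this example could be written as follows. The 500-feature minimum and the 45%-55% interval are the example values above, not fixed requirements of the method.

    def maybe_add_keyframe(descriptors, sweeper_position,
                           min_features=500, low=0.45, high=0.55):
        # The first image with enough ORB features becomes the first key frame;
        # a later frame is accepted only when its coincidence degree with the
        # previous key frame falls inside the preset interval.
        if descriptors is None or len(descriptors) < min_features:
            return False
        if not keyframe_library:
            save_keyframe(sweeper_position, 1, descriptors)
            return True
        last = keyframe_library[-1]
        if low <= coincidence_degree(last["descriptors"], descriptors) <= high:
            save_keyframe(sweeper_position, last["keyframe_number"] + 1, descriptors)
            return True
        return False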
S203: and storing the key frame and the corresponding preset position information in the key frame library in a correlation manner.
The robot in this embodiment can confirm the current position of the sweeper through laser SLAM, and this position corresponds one-to-one with the serial number of the key frame shot by the camera, so the current position of the sweeper is also recorded each time a key frame is stored. The correspondence between position information and key frames in this embodiment may be expressed as Position (sweeper position, key frame number); storing this correspondence for all key frames, namely Position1 (sweeper position 1, key frame number 1), Position2 (sweeper position 2, key frame number 2), ..., Positionn (sweeper position n, key frame number n), forms the key frame library. When the real-time image frame is successfully matched with two adjacent frames of the key frame library, the current position of the sweeper can be determined from the sweeper position stored in the key frame library.
S204: and identifying the key frame matched with the real-time image frame as a target key frame from key frames in a preset key frame library.
After the real-time image frame is acquired, a target key frame matched with the real-time image frame is determined from the key frames in the preset key frame library. The specific matching is as follows: the first feature points of a key frame and the second feature points of the real-time image frame are extracted; the similarity between the pixel values of the first feature points and those of the second feature points is calculated as the coincidence degree between the key frame and the real-time image frame. The coincidence degree in this embodiment may be obtained by matching the real-time image frame with two adjacent key frames in the key frame library, that is, by calculating the coincidence degree between the real-time image frame and each of the two adjacent key frames, so as to ensure the accuracy of frame matching and determine the corresponding position information through the two key frames. If the coincidence degree is greater than or equal to the preset coincidence degree threshold, the key frame corresponding to the coincidence degree is identified as the target key frame.
S205: and determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot.
In the key frame library of this embodiment, the position information corresponds to the key frames through the correspondence between position information and key frames: each key frame has its corresponding position information, and the key frames and their corresponding position information form a one-to-one correspondence, that is, the position information recorded when a key frame was shot can be determined from that key frame, so that the currently shot real-time image frame can be used to determine the current accurate position of the robot.
According to the above scheme, a real-time image frame shot by the camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; the key frames and their corresponding preset position information, which records the position of the robot when each key frame was shot, are acquired and stored in the key frame library in association with each other; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
Referring to fig. 3, fig. 3 is a schematic view of a robot according to a third embodiment of the present application. The robot 300 may be a mobile terminal such as a smart phone or a tablet computer. The robot 300 of the present embodiment includes units for performing the steps in the embodiment corresponding to fig. 1, please refer to fig. 1 and the related description in the embodiment corresponding to fig. 1, which are not repeated herein. The robot 300 of the present embodiment includes:
an acquiring unit 301, configured to acquire a real-time image frame captured by a camera;
an identifying unit 302, configured to identify, from key frames in a preset key frame library, a key frame that matches the real-time image frame as a target key frame;
a positioning unit 303, configured to determine, according to preset position information corresponding to each key frame stored in the key frame library, target position information corresponding to the target key frame, and identify the target position information as real-time position information of the robot.
Further, the robot 300 further includes:
the first acquisition unit is used for acquiring the key frame and the corresponding preset position information thereof; the preset position information is determined by the position of the robot when shooting the key frame;
and the library establishing unit is used for storing the key frames and the corresponding preset position information in the key frame library in an associated manner.
Further, the first obtaining unit includes:
the second acquisition unit is used for acquiring position information of each position on a preset moving route of the robot and acquiring an image frame corresponding to each position;
a calculation unit for calculating a degree of coincidence between two image frames photographed at adjacent positions;
and the first identification unit is used for identifying two image frames shot at corresponding adjacent positions when the contact ratio is within a preset contact ratio interval as the key frame.
Further, the calculation unit includes:
a first extraction unit configured to extract feature points in two image frames photographed at adjacent positions; the characteristic points are used for representing pixel points with obvious gray value changes in the image frame;
a first calculation unit configured to calculate a similarity of feature points in two image frames captured at the adjacent positions, and identify the similarity as the degree of coincidence.
Further, a key frame identifier corresponding to the key frame is preset in the key frame library; the corresponding relation comprises the corresponding relation between the position information and the key frame identification;
further, the positioning unit 303 includes:
the searching unit is used for searching a target key frame identifier corresponding to the target key frame in the key frame identifier;
the identification determining unit is used for determining target position information corresponding to the target key frame identification according to preset position information corresponding to each key frame identification stored in the key frame library;
a second recognition unit for recognizing the target position information as real-time position information of the robot.
Further, the identification unit 302 includes:
the second extraction unit is used for extracting first feature points of the key frames and second feature points of the real-time image frames;
the second calculating unit is used for calculating the similarity between the pixel value of the first characteristic point and the pixel value of the second characteristic point to obtain the coincidence degree between the key frame and the real-time image frame;
and the second identification unit is used for identifying the key frame corresponding to the coincidence degree as the target key frame if the coincidence degree is greater than or equal to a preset coincidence degree threshold value.
Further, the robot 300 is further configured to determine that repositioning of the robot has failed if the coincidence degree is smaller than the preset coincidence degree threshold.
According to the above scheme, a real-time image frame shot by the camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 4, fig. 4 is a schematic view of a robot according to a fourth embodiment of the present disclosure. The robot 400 in the present embodiment as shown in fig. 4 may include: a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps in the various laser radar relocation method embodiments described above are implemented when processor 401 executes computer program 403. The memory 402 is used to store a computer program comprising program instructions. Processor 401 is operative to execute program instructions stored in memory 402. Wherein the processor 401 is configured to call the program instruction to perform the following operations:
the processor 401 is configured to:
acquiring a real-time image frame shot by a camera;
identifying a key frame matched with the real-time image frame as a target key frame from key frames in a preset key frame library;
and determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot.
Further, the processor 401 is specifically configured to:
acquiring a key frame and corresponding preset position information thereof; the preset position information is determined by the position of the robot when shooting the key frame;
and storing the key frame and the corresponding preset position information in the key frame library in a correlation manner.
Further, the processor 401 is specifically configured to:
acquiring position information of each position on a preset moving route of the robot, and acquiring an image frame corresponding to each position;
calculating the contact ratio between two image frames shot at adjacent positions;
and identifying two image frames shot at corresponding adjacent positions when the contact ratio is within a preset contact ratio interval as the key frame.
Further, the processor 401 is specifically configured to:
extracting feature points in two image frames shot at adjacent positions; the characteristic points are used for representing pixel points with obvious gray value changes in the image frame;
and calculating the similarity of the feature points in the two image frames shot at the adjacent positions, and identifying the similarity as the contact ratio.
Further, a key frame identifier corresponding to the key frame is preset in the key frame library; the corresponding relation comprises the corresponding relation between the position information and the key frame identification;
further, the processor 401 is specifically configured to:
searching a target key frame identifier corresponding to the target key frame in the key frame identifier;
determining target position information corresponding to the target key frame identification according to preset position information corresponding to each key frame identification stored in the key frame library;
identifying the target position information as real-time position information of the robot.
Further, the processor 401 is specifically configured to:
extracting a first characteristic point of the key frame and a second characteristic point of the real-time image frame;
calculating the similarity between the pixel value of the first characteristic point and the pixel value of the second characteristic point to obtain the coincidence degree between the key frame and the real-time image frame;
and if the coincidence degree is greater than or equal to a preset coincidence degree threshold value, identifying the key frame corresponding to the coincidence degree as the target key frame.
According to the above scheme, a real-time image frame shot by the camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
It should be understood that, in the embodiments of the present application, the processor 401 may be a central processing unit (CPU), and the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may include both read-only memory and random access memory, and provides instructions and data to the processor 401. A portion of the memory 402 may also include non-volatile random access memory. For example, the memory 402 may also store device type information.
In a specific implementation, the processor 401, the memory 402, and the computer program 403 described in this embodiment may execute the implementation manners described in the first embodiment and the second embodiment of the laser radar relocation method provided in this embodiment, and may also execute the implementation manners of the terminal described in this embodiment, which is not described herein again.
In another embodiment of the present application, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program comprising program instructions that when executed by a processor implement:
acquiring a real-time image frame shot by a camera;
identifying a key frame matched with the real-time image frame as a target key frame from key frames in a preset key frame library;
and determining target position information corresponding to the target key frames according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot.
Further, the computer program when executed by the processor further implements:
acquiring a key frame and corresponding preset position information thereof; the preset position information is determined by the position of the robot when shooting the key frame;
and storing the key frame and the corresponding preset position information in the key frame library in a correlation manner.
Further, the computer program when executed by the processor further implements:
acquiring position information of each position on a preset moving route of the robot, and acquiring an image frame corresponding to each position;
calculating the contact ratio between two image frames shot at adjacent positions;
and identifying two image frames shot at corresponding adjacent positions when the contact ratio is within a preset contact ratio interval as the key frame.
Further, the computer program when executed by the processor further implements:
extracting feature points in two image frames shot at adjacent positions; the characteristic points are used for representing pixel points with obvious gray value changes in the image frame;
and calculating the similarity of the feature points in the two image frames shot at the adjacent positions, and identifying the similarity as the contact ratio.
Further, a key frame identifier corresponding to the key frame is preset in the key frame library; the corresponding relation comprises the corresponding relation between the position information and the key frame identification;
further, the computer program when executed by the processor further implements:
searching a target key frame identifier corresponding to the target key frame in the key frame identifier;
determining target position information corresponding to the target key frame identification according to preset position information corresponding to each key frame identification stored in the key frame library;
identifying the target position information as real-time position information of the robot.
Further, the computer program when executed by the processor further implements:
extracting a first characteristic point of the key frame and a second characteristic point of the real-time image frame;
calculating the similarity between the pixel value of the first characteristic point and the pixel value of the second characteristic point to obtain the coincidence degree between the key frame and the real-time image frame;
and if the coincidence degree is greater than or equal to a preset coincidence degree threshold value, identifying the key frame corresponding to the coincidence degree as the target key frame.
According to the above scheme, a real-time image frame shot by the camera is acquired; a key frame matched with the real-time image frame is identified from the key frames in a preset key frame library as a target key frame; and target position information corresponding to the target key frame is determined according to the preset position information corresponding to each key frame stored in the key frame library and identified as the real-time position information of the robot. By comparing the real-time image frame shot in real time with the key frames in the preset key frame library to determine the matching target key frame, and then determining the real-time position information corresponding to the target key frame according to the correspondence, prestored in the key frame library, between position information and key frames, inaccurate positioning caused by the lack of a reference object when the position is determined through simultaneous localization and mapping alone is avoided, and the accuracy and reliability of robot positioning are improved.
The computer readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially or partially contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing is only a description of specific embodiments of the present application, and the protection scope of the present application is not limited thereto. Any equivalent modification or substitution that can be readily conceived by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A laser radar repositioning method applied to a robot, characterized by comprising the following steps:
acquiring a real-time image frame shot by a camera;
identifying, from key frames in a preset key frame library, a key frame matching the real-time image frame as a target key frame;
determining target position information corresponding to the target key frame according to preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot;
wherein the determining target position information corresponding to the target key frame according to the preset position information corresponding to each key frame stored in the key frame library, and identifying the target position information as real-time position information of the robot, comprises:
searching, among key frame identifiers, for a target key frame identifier corresponding to the target key frame, wherein each key frame identifier is a key frame serial number;
determining target position information corresponding to the target key frame identifier according to the preset position information corresponding to each key frame stored in the key frame library;
identifying the target position information as real-time position information of the robot;
wherein the key frames are shot by the camera and correspond to the key frame serial numbers one to one; the key frame library includes Position entries, each Position entry comprising preset position information and a key frame serial number; and the preset position information of the robot is determined by laser SLAM when the camera shoots the key frames.
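Taken as an algorithm, claim 1 above amounts to a table lookup: key frames are indexed by serial number, each serial number maps to a position recorded by laser SLAM, and repositioning returns the position of whichever key frame matches the live image. The following Python sketch is illustrative only; the names (Position, KeyFrameLibrary, match_fn) are hypothetical and not taken from the patent.

    from dataclasses import dataclass
    from typing import Callable, Dict, Optional

    import numpy as np


    @dataclass
    class Position:
        """Preset position information associated with one key frame serial number."""
        x: float
        y: float
        theta: float  # heading angle, as a laser SLAM pose would typically carry


    class KeyFrameLibrary:
        """Key frames and their preset positions, keyed by key frame serial number."""

        def __init__(self) -> None:
            self.frames: Dict[int, np.ndarray] = {}    # serial number -> key frame image
            self.positions: Dict[int, Position] = {}   # serial number -> preset position

        def relocalize(self, real_time_frame: np.ndarray,
                       match_fn: Callable[[np.ndarray, Dict[int, np.ndarray]], Optional[int]]
                       ) -> Optional[Position]:
            """Find the target key frame for the real-time frame and return the
            position stored for its serial number (None if no key frame matches)."""
            target_id = match_fn(real_time_frame, self.frames)
            if target_id is None:
                return None
            return self.positions[target_id]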
2. The laser radar repositioning method of claim 1, wherein before the identifying, from key frames in the preset key frame library, a key frame matching the real-time image frame as a target key frame, the method further comprises:
acquiring a key frame and preset position information corresponding to the key frame, wherein the preset position information is determined by the position of the robot when the key frame is shot; and
storing the key frame and the corresponding preset position information in the key frame library in association.
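Claim 2 describes the step that populates the library: each key frame is saved together with the position reported by laser SLAM at the moment the frame is shot. A minimal sketch, reusing the hypothetical KeyFrameLibrary and Position names from the block after claim 1:

    import numpy as np


    def store_keyframe(library: "KeyFrameLibrary", keyframe_id: int,
                       image: np.ndarray, slam_position: "Position") -> None:
        """Store the key frame and its preset position information in association,
        keyed by the key frame serial number (hypothetical structure)."""
        library.frames[keyframe_id] = image
        library.positions[keyframe_id] = slam_position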
3. The laser radar repositioning method of claim 2, wherein the acquiring a key frame and preset position information corresponding to the key frame comprises:
acquiring position information of the robot moving along a preset moving route, and acquiring an image frame corresponding to each piece of position information;
calculating a degree of coincidence between two image frames shot at adjacent positions; and
identifying the two image frames shot at the corresponding adjacent positions as key frames when the degree of coincidence falls within a preset coincidence interval.
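Claim 3 turns key frame acquisition into a filtering rule: as the robot drives its preset route, an adjacent pair of frames is kept as key frames only when their degree of coincidence lies inside a preset interval. The sketch below assumes the interval bounds and the coincidence function are supplied from outside; the numbers shown are placeholders, not values from the patent.

    from typing import Callable, List, Sequence, Tuple

    import numpy as np

    COINCIDENCE_INTERVAL = (0.3, 0.8)  # placeholder bounds for the preset interval


    def select_keyframes(route_frames: Sequence[Tuple[np.ndarray, "Position"]],
                         coincidence: Callable[[np.ndarray, np.ndarray], float]
                         ) -> List[Tuple[np.ndarray, "Position"]]:
        """Keep the adjacent frame pairs whose degree of coincidence falls inside
        the preset interval; both frames of a qualifying pair become key frames,
        each still paired with the position at which it was captured."""
        lo, hi = COINCIDENCE_INTERVAL
        selected_indices: List[int] = []
        for i in range(1, len(route_frames)):
            if lo <= coincidence(route_frames[i - 1][0], route_frames[i][0]) <= hi:
                for j in (i - 1, i):
                    if j not in selected_indices:
                        selected_indices.append(j)
        return [route_frames[j] for j in selected_indices]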
4. The laser radar repositioning method of claim 3, wherein the calculating a degree of coincidence between two image frames shot at adjacent positions comprises:
extracting feature points from the two image frames shot at adjacent positions, wherein the feature points represent pixel points with obvious gray value changes in the image frames; and
calculating a similarity between the feature points of the two image frames shot at the adjacent positions, and identifying the similarity as the degree of coincidence.
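Claim 4 computes the degree of coincidence from feature points, i.e. pixels with strong gray-value changes. One common way to realize this, shown below as an assumption rather than as the patented implementation, is to detect ORB corners with OpenCV and take the fraction of descriptors that find a close match in the other frame as the similarity.

    import cv2
    import numpy as np


    def coincidence(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
        """Degree of coincidence between two frames, approximated as the share of
        ORB feature points that find a close match in the other frame."""
        def gray(img: np.ndarray) -> np.ndarray:
            return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

        orb = cv2.ORB_create(nfeatures=500)
        kp_a, des_a = orb.detectAndCompute(gray(frame_a), None)
        kp_b, des_b = orb.detectAndCompute(gray(frame_b), None)
        if des_a is None or des_b is None:
            return 0.0

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        good = [m for m in matches if m.distance < 40]  # empirical distance cutoff
        return len(good) / min(len(kp_a), len(kp_b))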
5. The laser radar repositioning method of any one of claims 1 to 4, wherein the identifying, from key frames in the preset key frame library, a key frame matching the real-time image frame as a target key frame comprises:
extracting first feature points of the key frame and second feature points of the real-time image frame;
calculating a similarity between pixel values of the first feature points and pixel values of the second feature points to obtain a degree of coincidence between the key frame and the real-time image frame; and
if the degree of coincidence is greater than or equal to a preset coincidence threshold, identifying the key frame corresponding to the degree of coincidence as the target key frame.
6. The laser radar repositioning method of claim 5, wherein after the calculating a similarity between pixel values of the first feature points and pixel values of the second feature points to obtain the degree of coincidence between the key frame and the real-time image frame, the method further comprises:
if the degree of coincidence is smaller than the preset coincidence threshold, determining that repositioning of the robot has failed.
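Claims 5 and 6 together describe the matching rule and its failure case: only key frames whose degree of coincidence with the real-time frame reaches a preset threshold can become the target key frame, and if none reaches it the repositioning attempt is declared failed. The sketch below picks the highest-scoring key frame at or above the threshold; the threshold value is an assumption, and returning None stands in for the failure decision.

    from typing import Callable, Dict, Optional

    import numpy as np

    COINCIDENCE_THRESHOLD = 0.6  # placeholder for the preset coincidence threshold


    def match_target_keyframe(real_time_frame: np.ndarray,
                              keyframes: Dict[int, np.ndarray],
                              coincidence: Callable[[np.ndarray, np.ndarray], float]
                              ) -> Optional[int]:
        """Return the serial number of the best-matching key frame whose degree of
        coincidence reaches the threshold, or None when repositioning fails."""
        best_id: Optional[int] = None
        best_score = 0.0
        for keyframe_id, keyframe in keyframes.items():
            score = coincidence(keyframe, real_time_frame)
            if score >= COINCIDENCE_THRESHOLD and score > best_score:
                best_id, best_score = keyframe_id, score
        return best_id

This function has the signature assumed for match_fn in the KeyFrameLibrary sketch after claim 1, so the hypothetical pieces compose into a complete, if simplistic, repositioning loop.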
7. A robot, comprising:
an acquisition unit, configured to acquire a real-time image frame shot by a camera;
an identification unit, configured to identify, from key frames in a preset key frame library, a key frame matching the real-time image frame as a target key frame; and
a positioning unit, configured to determine target position information corresponding to the target key frame according to preset position information corresponding to each key frame stored in the key frame library, and to identify the target position information as real-time position information of the robot;
wherein the positioning unit comprises:
a searching unit, configured to search, among the key frame identifiers, for a target key frame identifier corresponding to the target key frame, wherein each key frame identifier is a key frame serial number;
an identifier determining unit, configured to determine target position information corresponding to the target key frame identifier according to the preset position information corresponding to each key frame stored in the key frame library; and
a second recognition unit, configured to identify the target position information as real-time position information of the robot;
wherein the key frames are shot by the camera and correspond to the key frame serial numbers one to one; the key frame library includes Position entries, each Position entry comprising preset position information and a key frame serial number; and the preset position information of the robot is determined by laser SLAM when the camera shoots the key frames.
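Claim 7 restates the method of claim 1 as a device built from functional units. A hypothetical object-oriented decomposition along the same lines is sketched below; the unit names mirror the claim, while camera, library, and match_fn are assumed to come from the earlier sketches.

    class AcquisitionUnit:
        """Acquires the real-time image frame shot by the camera."""
        def __init__(self, camera):
            self.camera = camera

        def acquire(self):
            return self.camera.read()


    class IdentificationUnit:
        """Identifies the target key frame that matches the real-time frame."""
        def __init__(self, library, match_fn):
            self.library = library
            self.match_fn = match_fn

        def identify(self, real_time_frame):
            return self.match_fn(real_time_frame, self.library.frames)


    class PositioningUnit:
        """Looks up the preset position stored for the target key frame and
        reports it as the robot's real-time position."""
        def __init__(self, library):
            self.library = library

        def locate(self, target_keyframe_id):
            if target_keyframe_id is None:
                return None  # repositioning failed
            return self.library.positions[target_keyframe_id]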
8. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201910707269.6A 2019-08-01 2019-08-01 Laser radar repositioning method and robot Active CN110561416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910707269.6A CN110561416B (en) 2019-08-01 2019-08-01 Laser radar repositioning method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910707269.6A CN110561416B (en) 2019-08-01 2019-08-01 Laser radar repositioning method and robot

Publications (2)

Publication Number Publication Date
CN110561416A CN110561416A (en) 2019-12-13
CN110561416B true CN110561416B (en) 2021-03-02

Family

ID=68774277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910707269.6A Active CN110561416B (en) 2019-08-01 2019-08-01 Laser radar repositioning method and robot

Country Status (1)

Country Link
CN (1) CN110561416B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392163B (en) * 2020-03-12 2024-02-06 广东博智林机器人有限公司 Data processing method, action simulation method, device, equipment and medium
CN111383424A (en) * 2020-04-15 2020-07-07 广东小天才科技有限公司 Emergency help-seeking method, electronic equipment and computer-readable storage medium
CN113662476B (en) * 2020-05-14 2023-04-04 杭州萤石软件有限公司 Method and system for improving cleaning coverage rate of movable cleaning robot
CN112198878B (en) * 2020-09-30 2021-09-28 深圳市银星智能科技股份有限公司 Instant map construction method and device, robot and storage medium
CN112595323A (en) * 2020-12-08 2021-04-02 深圳市优必选科技股份有限公司 Robot and drawing establishing method and device thereof
US20240029300A1 (en) * 2020-12-25 2024-01-25 Intel Corporation Re-localization of robot
JP7484758B2 (en) * 2021-02-09 2024-05-16 トヨタ自動車株式会社 Robot Control System
CN113103232B (en) * 2021-04-12 2022-05-20 电子科技大学 Intelligent equipment self-adaptive motion control method based on feature distribution matching
CN113733166B (en) * 2021-11-08 2022-04-15 深圳市普渡科技有限公司 Robot positioning method, device, robot and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747065A (en) * 2013-12-27 2014-04-23 河海大学 Client HTTP retrieval full-index container format media resource time slice method
CN108924646A (en) * 2018-07-18 2018-11-30 北京奇艺世纪科技有限公司 A kind of audio-visual synchronization detection method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108072370A (en) * 2016-11-18 2018-05-25 中国科学院电子学研究所 Robot navigation method based on global map and the robot with this method navigation
CN108038139B (en) * 2017-11-10 2021-08-13 未来机器人(深圳)有限公司 Map construction method and device, robot positioning method and device, computer equipment and storage medium
CN108717710B (en) * 2018-05-18 2022-04-22 京东方科技集团股份有限公司 Positioning method, device and system in indoor environment
CN109947886B (en) * 2019-03-19 2023-01-10 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110561416A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110561416B (en) Laser radar repositioning method and robot
CN109325964B (en) Face tracking method and device and terminal
CN111145214A (en) Target tracking method, device, terminal equipment and medium
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
CN109035304B (en) Target tracking method, medium, computing device and apparatus
CN110705405B (en) Target labeling method and device
CN108038176B (en) Method and device for establishing passerby library, electronic equipment and medium
CN109141393B (en) Relocation method, relocation apparatus and storage medium
CN110442120B (en) Method for controlling robot to move in different scenes, robot and terminal equipment
CN109426785B (en) Human body target identity recognition method and device
CN111512317A (en) Multi-target real-time tracking method and device and electronic equipment
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN109325406B (en) Method and device for evaluating detection performance of detection algorithm to be evaluated and computer equipment
CN108229232B (en) Method and device for scanning two-dimensional codes in batch
CN112116657B (en) Simultaneous positioning and mapping method and device based on table retrieval
US20110216939A1 (en) Apparatus and method for tracking target
CN111291749B (en) Gesture recognition method and device and robot
CN112906483A (en) Target re-identification method and device and computer readable storage medium
US11094049B2 (en) Computing device and non-transitory storage medium implementing target object identification method
US20200013170A1 (en) Information processing apparatus, rebar counting apparatus, and method
US20220327861A1 (en) Method for recognizing masked faces, device for recognizing masked faces, and computer storage medium
CN110728249B (en) Cross-camera recognition method, device and system for target pedestrian
CN112286780B (en) Method, device, equipment and storage medium for testing recognition algorithm
CN110084157B (en) Data processing method and device for image re-recognition
CN109375187B (en) Method and device for determining radar target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: 518000 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province
Patentee after: Shenzhen Yinxing Intelligent Group Co.,Ltd.
Address before: 518000 building A1, Yinxing hi tech Industrial Park, Guanlan street, Longhua District, Shenzhen City, Guangdong Province
Patentee before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.