CN113592946A - Pose positioning method and device, intelligent robot and storage medium - Google Patents

Pose positioning method and device, intelligent robot and storage medium

Info

Publication number: CN113592946A
Application number: CN202110852886.2A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, pose, intelligent robot, feature points, environment
Inventors: 郑权, 钟智渊, 洪泽
Current assignee: Shenzhen Zbeetle Intelligent Co Ltd
Original assignee: Shenzhen Zbeetle Intelligent Co Ltd
Application filed by Shenzhen Zbeetle Intelligent Co Ltd
Priority to CN202110852886.2A
Publication of CN113592946A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30164: Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a pose positioning method and device, an intelligent robot and a storage medium, and relates to the technical field of intelligent robots. The pose positioning method is applied to the intelligent robot and comprises the following steps: acquiring an environment image of the environment in which the intelligent robot is currently located; extracting image feature points in the environment image as first feature points; when the first feature points meet a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition represents that the accuracy of the pose located according to the first feature points is lower than a preset accuracy, and the preprocessing is used to improve image quality; extracting image feature points in the target image as second feature points; and determining the current pose of the intelligent robot according to the second feature points. In this way, when the image quality of the environment image is poor, the environment image can be preprocessed and its image quality greatly improved, so that the intelligent robot can be positioned accurately.

Description

Pose positioning method and device, intelligent robot and storage medium
Technical Field
The application relates to the technical field of intelligent robots, in particular to a pose positioning method and device, an intelligent robot and a storage medium.
Background
With the continuous development of intelligent technology, intelligent robots can be applied in home scenarios to perform functions such as sweeping and mopping. To achieve these functions, an intelligent robot needs to locate its own pose while it operates.
In the related art, an intelligent robot acquires images of its surrounding environment through a depth camera, then, based on a Visual Simultaneous Localization and Mapping (VSLAM) algorithm, calculates and optimizes the geometric information between images through a feature-point method or an optical-flow method, and thus calculates the operating pose of the intelligent robot. However, the accuracy of the pose obtained by the above method still needs to be improved.
Disclosure of Invention
In view of this, the present application provides a pose positioning method, a pose positioning apparatus, an intelligent robot, and a storage medium.
In a first aspect, an embodiment of the present application provides a pose positioning method, which is applied to an intelligent robot, and the method includes: acquiring an environment image of the environment in which the intelligent robot is currently located; extracting image feature points in the environment image as first feature points; when the first feature points meet a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition represents that the accuracy of the pose located according to the first feature points is lower than a preset accuracy, and the preprocessing is used to improve image quality; extracting image feature points in the target image as second feature points; and determining the current pose of the intelligent robot according to the second feature points.
In a second aspect, an embodiment of the present application provides a pose positioning apparatus, which is applied to an intelligent robot, and the apparatus includes: an image acquisition module, a first feature extraction module, an image processing module, a second feature extraction module and a pose determination module. The image acquisition module is used for acquiring an environment image of the environment in which the intelligent robot is currently located; the first feature extraction module is used for extracting image feature points in the environment image as first feature points; the image processing module is used for preprocessing the environment image to obtain a target image when the first feature points meet a preset processing condition, wherein the preset processing condition is that the number of the first feature points is smaller than a preset number threshold, and the preprocessing is used to improve image quality; the second feature extraction module is used for extracting image feature points in the target image as second feature points; and the pose determination module is used for determining the current pose of the intelligent robot according to the second feature points.
In a third aspect, an embodiment of the present application provides an intelligent robot, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the pose positioning method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be invoked by a processor to execute the pose positioning method provided in the first aspect.
According to the solution provided by the application, an environment image of the environment in which the intelligent robot is currently located is first acquired, and image feature points in the environment image are extracted as first feature points. When the first feature points meet a preset processing condition, the environment image is preprocessed to obtain a target image, wherein the preset processing condition represents that the accuracy of the pose located according to the first feature points is lower than a preset accuracy, and the preprocessing is used to improve image quality. Image feature points in the target image are then extracted as second feature points, and finally the current pose of the intelligent robot is determined according to the second feature points. This solves the problem that the calculated current pose of the intelligent robot is inaccurate because the image quality of the acquired environment image is poor. In other words, when the image quality of the environment image is poor, the environment image can be preprocessed and its image quality greatly improved, so that more accurate feature points can be obtained and the current pose of the intelligent robot can be located more accurately; that is, the intelligent robot is positioned accurately.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a schematic flow chart of a pose positioning method provided in an embodiment of the present application.
Fig. 2 shows a grayscale image of an environment image under a dark-light condition according to an embodiment of the present application.
Fig. 3 shows a grayscale image of an environment image after image enhancement according to an embodiment of the present application.
Fig. 4 shows a schematic flowchart of a pose positioning method according to another embodiment of the present application.
Fig. 5 shows a schematic flowchart of a pose positioning method according to yet another embodiment of the present application.
Fig. 6 shows a schematic flowchart of a pose positioning method according to still another embodiment of the present application.
Fig. 7 shows a schematic flowchart of a pose positioning method according to yet another embodiment of the present application.
Fig. 8 shows a schematic flowchart of a pose positioning method according to yet another embodiment of the present application.
Fig. 9 is a schematic flow chart illustrating a pose positioning method according to still another embodiment of the present application.
Fig. 10 is a schematic flow chart illustrating a pose positioning method according to still another embodiment of the present application.
Fig. 11 is a block diagram of a pose positioning apparatus according to an embodiment of the present application.
Fig. 12 is a block diagram of an intelligent robot for executing a pose positioning method according to an embodiment of the present application.
Fig. 13 is a storage unit for storing or carrying program code that implements a pose positioning method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the related art, visual positioning of an intelligent robot can obtain an accurate pose only under good illumination; the positioning accuracy drops rapidly under conditions such as dim light, strong light, rapid rotation, or a white wall. In particular, under dim light the number and quality of feature points in the image often cannot satisfy the conditions for accurate positioning, which greatly affects the accuracy of the robot's visual positioning.
To solve the above problems, the inventors propose a pose positioning method, a pose positioning apparatus, an intelligent robot, and a storage medium, which improve the image quality of an environment image by preprocessing the environment image whose first feature points meet a preset processing condition, and determine the current pose of the intelligent robot according to the second feature points of the preprocessed environment image. This is described in detail below.
Referring to fig. 1, fig. 1 is a schematic flow chart of a pose positioning method according to an embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 1. The pose positioning method can comprise the following steps:
step S110: and acquiring an environment image of the current environment of the intelligent robot.
In this embodiment, when the intelligent robot moves in an unknown environment, in order to ensure that it can travel unobstructed to every accessible position in that environment, an environment image of the current environment can be acquired by an image acquisition device carried by the intelligent robot. Based on the environment image and a VSLAM algorithm, the intelligent robot localizes itself with respect to the map built during movement, thereby achieving autonomous positioning and navigation. For example, if the environment image acquired by a depth camera is a depth image, the intelligent robot can determine the motion of the camera and the surrounding environment from the continuously captured depth images; that is, it can determine its own motion (i.e., its real-time position and pose) and the state of the surrounding environment.
Step S120: and extracting image feature points in the environment image as first feature points.
In this embodiment, an image feature point is a point where the gray value of the image changes dramatically, or a point of large curvature on an image edge (i.e., the intersection of two edges). Image feature points reflect the essential characteristics of the image and can be used to identify the different physical objects in it. Therefore, by extracting image feature points from the environment image, the different physical objects present in the current environment and their positions can be identified.
Image feature points can be extracted in various ways. Feature points can be detected through a parameter model or a template. They can also be detected by an edge-based method, taking the vertices of a polygon or points of large curvature change on the edges of a physical object as feature points, so that the acquired feature points form a set along the object edges. Feature points can further be obtained by a gray-level method, in which differential operations yield the derivatives of the gray values around each pixel and thus the positions of the feature points; or by a spatial-transformation method, in which a transform with easily identifiable characteristics is applied, extreme points are detected in the transform space, and the points with the smallest or largest absolute values are taken as feature points. Optionally, the image feature points in the environment image may be extracted by the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the FAST algorithm, and the like, which is not limited in this embodiment.
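As a minimal illustrative sketch (not part of the patent), the feature-point extraction described above could be implemented with OpenCV; the ORB detector used here is only a stand-in for the SIFT/SURF/FAST algorithms mentioned, and the parameter values are assumptions.

```python
import cv2

def extract_feature_points(image_bgr, max_points=1000):
    """Detect corner-like image feature points in an environment image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_points)    # cap on the number of keypoints (assumed value)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```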
Step S130: and when the first characteristic point meets a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the precision of the pose positioned according to the first characteristic point is smaller than a preset precision, and the preprocessing is used for improving the image quality.
The preset processing condition serves as the criterion for judging the first feature points, so as to determine whether the environment image needs to be preprocessed. Based on this, after the first feature points of the environment image are obtained, it can be detected whether they meet the preset processing condition, and thus whether the current environment image satisfies the conditions for accurately positioning the intelligent robot. It can be understood that detecting whether the first feature points meet the preset processing condition amounts to detecting whether the quality of the environment image meets the requirement, that is, whether the environment image is clear. If the first feature points meet the preset processing condition, it is determined that the quality of the environment image of the current environment is poor and the image is unclear, so a sufficient number of well-distributed image feature points cannot be obtained from it, which would reduce the accuracy of the pose of the intelligent robot subsequently calculated from the image feature points. Therefore, the environment image can be preprocessed to improve its image quality and clarity, so that a sufficient number of feature points can be extracted from it and the accuracy of the current pose positioning of the intelligent robot is guaranteed.
In practical applications, there may be various reasons why the captured environment image is unclear or of poor quality, for example, the shooting environment is under dark light or strong light, a white wall is being imaged, or the intelligent robot is rotating rapidly; this embodiment does not limit these. The preprocessing may differ depending on the cause of the poor image quality. For example, if the shooting environment is dark, as shown in fig. 2, the brightness of the environment image is low; the preprocessing may then perform low-illumination enhancement on the environment image with algorithms such as LIME and MBLLEN, and apply image processing operations such as denoising and contrast improvement to obtain the target image. As shown in fig. 3, the brightness of the target image is greater than that of the original environment image. If the poor quality is caused by strong light, the preprocessing may first reduce the brightness of the environment image and then apply denoising, contrast improvement and similar image processing operations; this embodiment does not limit this.
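For illustration only, a simple low-light preprocessing step could look as follows; CLAHE contrast enhancement plus non-local-means denoising is used here as an assumed stand-in for the LIME/MBLLEN enhancement mentioned above, and all parameter values are assumptions.

```python
import cv2

def enhance_low_light(image_bgr):
    """Brighten a dark environment image and suppress the amplified noise."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)   # work on the luminance channel only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                                  # raise local contrast and brightness
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Brightening also amplifies sensor noise, so apply light denoising afterwards.
    return cv2.fastNlMeansDenoisingColored(enhanced, None, 5, 5, 7, 21)
```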
Optionally, whether the first feature points meet the preset processing condition may be judged by obtaining the accuracy with which the pose of the intelligent robot can be located according to the first feature points and judging whether this accuracy is lower than the preset accuracy. If it is lower than the preset accuracy, the first feature points meet the preset processing condition, and the environment image is preprocessed to improve its image quality, so that the accuracy of locating the pose of the intelligent robot according to the image feature points in the environment image reaches the preset accuracy. Whether the first feature points meet the preset processing condition may also be judged by checking whether their number reaches a preset number threshold: if the number of first feature points is smaller than the preset number threshold, the accuracy of the pose located according to the first feature points is judged to be lower than the preset accuracy, so the first feature points meet the preset processing condition. Alternatively, it may be judged whether the first feature points are uniformly distributed over the sub-regions of the environment image, that is, whether the number of first feature points in each sub-region lies within a specified number interval; if the number of first feature points in a sub-region is not within the specified number interval and is smaller than its minimum threshold, the accuracy of the pose located according to the first feature points is judged to be lower than the preset accuracy, and the first feature points are judged to meet the preset processing condition.
Step S140: and extracting image feature points in the target image as second feature points.
In the embodiment of the present application, the principle of extracting the image feature points in the target image is similar to that of extracting the image feature points in the environment image, and the content of step S120 in the foregoing embodiment may be referred to, and details are not described herein again.
Step S150: and determining the current pose of the intelligent robot according to the second characteristic point.
Based on the above, the second feature points in the target image can be input into the VSLAM workflow; the input to the VSLAM workflow may further include data about the intelligent robot acquired by other sensors (such as a wheel odometer, an IMU, and a galvanometer). The current pose of the intelligent robot is determined, and it is further optimized by modules in the VSLAM workflow such as back-end optimization, loop-closure detection, and mapping, so as to obtain a globally consistent map, a motion trajectory, and a more accurate current pose of the intelligent robot.
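As a hedged sketch of the geometric step only (the patent's full VSLAM workflow with back-end optimization and loop closure is not reproduced here), the relative pose between two frames could be recovered from matched feature points as follows; the brute-force matcher and RANSAC settings are assumptions.

```python
import cv2
import numpy as np

def estimate_relative_pose(kp_prev, desc_prev, kp_curr, desc_curr, K):
    """Recover rotation R and translation direction t between two frames
    from matched feature points (standard two-view geometry)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_prev, desc_curr)
    pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in matches])
    pts_curr = np.float32([kp_curr[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
    return R, t   # for a monocular camera, t is known only up to scale
```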
In some embodiments, when the first feature points do not meet the preset processing condition, the current pose of the intelligent robot may be determined directly according to the first feature points in the environment image, meaning that no image processing steps are performed on the environment image in this case.
In this embodiment, when the image quality of the environment image is relatively poor, the environment image can be preprocessed, and the image quality of the environment image is greatly improved, so that relatively accurate image feature points can be obtained, and then the current pose of the intelligent robot can be more accurately positioned, that is, the intelligent robot can be accurately positioned.
Referring to fig. 4, fig. 4 is a schematic flowchart of a pose positioning method according to another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 4. The pose positioning method can comprise the following steps:
step S210: and acquiring an environment image of the current environment of the intelligent robot.
Step S220: and extracting image feature points in the environment image as first feature points.
In the embodiment of the present application, steps S210 to S220 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S230: and judging whether the number of the first feature points is smaller than a preset number threshold.
Step S240: and if the number is smaller than the preset number threshold, judging that the first feature points meet the preset processing condition.
In this embodiment, if the number of first feature points in the environment image is too small, the current pose of the intelligent robot subsequently calculated on the basis of these first feature points may be inaccurate. Therefore, the number of first feature points can be obtained; this number is positively correlated with the accuracy of the pose located according to the first feature points, and the preset number threshold is set according to the preset accuracy. On this basis, if the number is smaller than the preset number threshold, it can be determined that the accuracy of the pose located according to the first feature points is lower than the preset accuracy, and the first feature points are judged to meet the preset processing condition; that is, the quality of the environment image is poor and a sufficient number of image feature points cannot be extracted from it to calculate the current pose of the intelligent robot. The preset number threshold may be set in advance, or adjusted according to the pose-positioning accuracy required by different application scenarios. For example, when the required positioning accuracy is low, the preset number threshold may be set to a small value (e.g., 600); when the required positioning accuracy is high, it may be set to a large value (e.g., 1000). This embodiment does not limit this.
In some embodiments, the positioning accuracy and the preset number threshold may be related by a preset ratio, and when the positioning accuracy is adjusted to a target positioning accuracy, the preset number threshold is adjusted accordingly to a target preset number threshold. Illustratively, if the positioning accuracy ranges from 0.2 to 0.8 and the preset ratio is 1:3000, then when the positioning accuracy is 0.2 the corresponding preset number threshold is 600; if the positioning accuracy 0.2 is adjusted to a target positioning accuracy of 0.4, the corresponding preset number threshold is adjusted to a target preset number threshold of 1200 according to the preset ratio 1:3000.
Step S250: and preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the precision of the pose positioned according to the first characteristic point is smaller than the preset precision, and the preprocessing is used for improving the image quality.
Step S260: and extracting image feature points in the target image as second feature points.
Step S270: and determining the current pose of the intelligent robot according to the second characteristic point.
In the embodiment of the present application, steps S250 to S270 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S280: and if the number is greater than or equal to the preset number threshold, judging that the first feature point does not accord with the preset processing condition.
Optionally, when the number of the first feature points is greater than or equal to the preset number threshold, it may be determined that the first feature points do not meet the preset processing condition, that is, at this time, a sufficient number of image feature points may be extracted from the environment image for subsequently calculating the current pose of the intelligent robot, and further image processing operation is not required on the environment image. Further, after the first feature point is judged not to accord with the preset processing condition, the current pose of the intelligent robot can be determined directly according to the first feature point.
In this embodiment, it may be determined whether the first feature point meets the preset processing condition, that is, whether further image processing on the environment image is required, by determining whether the number of the first feature points in the environment image reaches a preset number threshold. Therefore, when the number of the image feature points in the environment image is not enough, the environment image can be processed, the image quality of the environment image is improved, so that a sufficient number of image feature points can be obtained, and the current pose of the intelligent robot can be more accurately positioned; in addition, the preset quantity threshold value can be adjusted according to practical application scenes, so that the current pose of the intelligent robot can be accurately positioned under different positioning accuracy requirements.
Referring to fig. 5, fig. 5 is a schematic flowchart of a pose positioning method according to another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 5. The pose positioning method can comprise the following steps:
step S301: and acquiring an environment image of the current environment of the intelligent robot.
Step S302: and extracting image feature points in the environment image as first feature points.
In the embodiment of the present application, steps S301 to S302 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S303: the ambient image is divided into a plurality of sub-regions.
In this embodiment, besides the number of first feature points in the environment image, the density (i.e., distribution) of the first feature points also affects the accuracy of the subsequent pose positioning of the intelligent robot; that is, even if the number of first feature points reaches the number threshold, an uneven distribution of the first feature points will still affect the positioning accuracy. For example, suppose 1000 first feature points are extracted from the environment image, but 900 of them are concentrated in a small region while the remaining regions contain only 100 feature points; the first feature points are then unevenly distributed, and if the current pose of the intelligent robot were calculated directly from such unevenly distributed first feature points, the positioning would also be inaccurate. Therefore, to prevent inaccurate positioning caused by an uneven distribution of first feature points, the environment image may be divided into a plurality of sub-regions, and the distribution of first feature points in each sub-region checked to ensure that the extracted first feature points are evenly distributed over the environment image.
The environment image may be divided into sub-regions in various ways. The division can be equal, so that all sub-regions have the same area; or the sub-regions can be divided according to the types of physical objects in the environment image, with each physical object assigned to one sub-region. For example, if the environment image contains a table, a person and the ground, it can be divided into three sub-regions, with the table, the person and the ground each corresponding to one sub-region. This embodiment does not limit the way the sub-regions are divided.
Step S304: acquiring the number of the first feature points contained in each sub-region.
Step S305: and judging whether the quantity is in a specified quantity interval.
Step S306: and if the number of the first feature points contained in any sub-region is not within a specified number interval, judging that the first feature points meet the preset processing condition.
Based on this, after the environment image is divided into sub-regions, the number of first feature points contained in each sub-region can be acquired, and whether this number lies within the specified number interval is judged in order to decide whether the first feature points meet the preset processing condition. The specified number interval may be set in advance, or adjusted according to the pose-positioning accuracy required by different application scenarios. For example, when the required positioning accuracy is low, the specified number interval may be set to smaller values with a wide range, such as 100-200; when the required positioning accuracy is high, it may be set to larger values with a narrow range, such as 300-350. This embodiment does not limit this.
In some embodiments, the minimum threshold of the specified number interval may be related to the positioning accuracy by a first preset ratio and the maximum threshold by a second preset ratio; when the positioning accuracy is adjusted to a target positioning accuracy, the minimum and maximum thresholds are adjusted to the target minimum and maximum thresholds according to these ratios. Illustratively, if the positioning accuracy ranges from 0.2 to 0.8, the first preset ratio is 1:500 and the second preset ratio is 1:1000, then when the positioning accuracy is 0.2 the corresponding specified number interval is 100-200; if the positioning accuracy 0.2 is adjusted to a target positioning accuracy of 0.4, the specified number interval is adjusted accordingly to 200-400 according to the two preset ratios.
In other embodiments, because different sub-regions have different texture complexity and therefore contain different numbers of image feature points, the specified number intervals corresponding to different sub-regions may also be set differently; that is, each sub-region may have its own specified number interval. The greater the texture complexity of an image region, the more image feature points it contains. Therefore, for a sub-region with high texture complexity, the interval thresholds may be set to larger values with a relatively wide range, such as 500-600; for a sub-region with low texture complexity, they may be set to smaller values with a relatively narrow range. This embodiment does not limit this.
Optionally, when the number of first feature points contained in any sub-region is not within the specified number interval, the number of first feature points in that sub-region does not satisfy the condition for accurately positioning the intelligent robot; that is, the first feature points are unevenly distributed over the environment image, and it can therefore be judged that they meet the preset processing condition. In this case, the environment image needs to be preprocessed so that the first feature points become evenly distributed, which improves the accuracy of subsequently calculating the current pose of the intelligent robot from the image feature points of the environment image.
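A minimal sketch of the per-sub-region check described in steps S303 to S306 might look as follows; the 4x4 grid and the interval (100, 200) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def feature_distribution_ok(keypoints, image_shape, grid=(4, 4), interval=(100, 200)):
    """Return True if every sub-region holds a feature-point count inside the
    specified number interval; False means the preset processing condition is met."""
    h, w = image_shape[:2]
    rows, cols = grid
    counts = np.zeros((rows, cols), dtype=int)
    for kp in keypoints:
        x, y = kp.pt
        r = min(int(y / h * rows), rows - 1)
        c = min(int(x / w * cols), cols - 1)
        counts[r, c] += 1
    low, high = interval
    return bool(np.all((counts >= low) & (counts <= high)))
```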
Step S307: and preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the precision of the pose positioned according to the first characteristic point is smaller than the preset precision, and the preprocessing is used for improving the image quality.
Step S308: and extracting image feature points in the target image as second feature points.
Step S309: and determining the current pose of the intelligent robot according to the second characteristic point.
In the embodiment of the present application, steps S307 to S309 may refer to the contents in the foregoing embodiment, and are not described herein again.
Step S310: and if the number of the first feature points contained in each sub-area is within the specified number interval, judging that the first feature points do not meet the preset processing condition.
Optionally, when the number of first feature points contained in each sub-region is within the specified number interval, it can be determined that the first feature points do not meet the preset processing condition; that is, the first feature points in the environment image are already evenly distributed, and no further image processing of the environment image is required. Further, after it is judged that the first feature points do not meet the preset processing condition, the current pose of the intelligent robot can be determined directly according to the first feature points.
In this embodiment, whether the first feature points meet the preset processing condition, i.e., whether further image processing of the environment image is required, can be determined by judging whether the number of first feature points in each sub-region of the environment image is within the specified number interval. Thus, when the first feature points in the environment image are unevenly distributed, the environment image can be processed to improve its quality, evenly distributed image feature points can be obtained, and the current pose of the intelligent robot can be located more accurately. In addition, the specified number interval can be adjusted according to the actual application scenario, so that the current pose of the intelligent robot can be located accurately under different positioning accuracy requirements.
Referring to fig. 6, fig. 6 is a schematic flowchart of a pose positioning method according to still another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 6. The pose positioning method can comprise the following steps:
step S410: and acquiring an environment image of the current environment of the intelligent robot.
Step S420: and extracting image feature points in the environment image as first feature points.
In the embodiment of the present application, steps S410 to S420 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S430: and when the first characteristic point meets a preset processing condition, acquiring the image brightness of the environment image.
In this embodiment, when it is determined that the number of first feature points does not reach the preset number threshold or that the first feature points are unevenly distributed, the quality of the first feature points in the environment image is poor, and it can be determined that the first feature points meet the preset processing condition. The cause of the poor quality can then be examined further; it may be that a dark shooting environment has resulted in a low image brightness of the environment image, so that a sufficient number of evenly distributed first feature points cannot be acquired.
Based on this, when the first feature points meet the preset processing condition, the image brightness of the environment image can be acquired. To obtain the image brightness, the environment image may be divided into a plurality of sub-image regions, the brightness value of each sub-image region obtained, and the average of these brightness values taken as the image brightness of the environment image. Image brightness can be understood as how bright the image is: the brightness value lies in the interval 0-255, where a value closer to 0 means a darker image and a value closer to 255 a brighter one; that is, image brightness and brightness value are positively correlated, and a larger brightness value corresponds to a brighter image. The brightness value and the image brightness may be related by a preset proportion, with the maximum brightness value corresponding to a maximum brightness percentage. To obtain the brightness percentage of the environment image, the ratio of its brightness value to the length of the brightness-value interval is computed and multiplied by the maximum brightness percentage. For example, if the maximum value 255 of the brightness interval corresponds to a maximum brightness percentage of 100% and the brightness value of the environment image is 5, the image brightness of the environment image is determined to be about 1.96%.
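The brightness measure described above could be computed as in the following sketch; the 4x4 sub-region grid is an assumption, and the percentage mapping reproduces the 5 to 1.96% example.

```python
import cv2
import numpy as np

def image_brightness_percentage(image_bgr, grid=(4, 4)):
    """Average the brightness value of each sub-image region and map the
    0-255 mean to a percentage of the maximum brightness."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    rows, cols = grid
    means = []
    for r in range(rows):
        for c in range(cols):
            block = gray[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            means.append(block.mean())
    avg = float(np.mean(means))        # average brightness value in [0, 255]
    return avg / 255.0 * 100.0         # e.g. a brightness value of 5 maps to about 1.96 %
```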
Step S440: and judging whether the image brightness is smaller than a first brightness threshold value.
Step S450: and if the image brightness is smaller than the first brightness threshold value, preprocessing the environment image to obtain a target image.
Optionally, after the image brightness of the environment image is obtained, it can be judged whether it is smaller than a first brightness threshold, i.e., whether the environment image is a low-illumination image; in other words, whether the poor quality of the first feature points is caused by light that is too dark and hence by an environment image whose brightness is too low. The first brightness threshold may be preset or adjusted according to the actual application, which is not limited in this embodiment. If the image brightness is smaller than the first brightness threshold, the environment image is preprocessed to obtain the target image.
For example, if the acquired image brightness is 150 and the first brightness threshold is 180, it may be determined that the image brightness is smaller than the first brightness threshold at this time, which represents that the environment image at this time is a low-illumination image, and preprocessing needs to be performed on the environment image, such as image processing operations of image enhancement, filtering, and contrast improvement, which is not limited in this embodiment.
Step S460: and extracting image feature points in the target image as second feature points.
Step S470: and determining the current pose of the intelligent robot according to the second characteristic point.
In the embodiment of the present application, steps S450 to S470 may refer to the contents in the foregoing embodiments, and are not described herein again.
In this embodiment, when the first feature points meet the preset processing condition, the image brightness of the environment image is acquired to further analyse whether the poor quality of the first feature points in the environment image is caused by the image brightness. When the image brightness of the environment image is smaller than the first brightness threshold, the environment image is judged to be a low-illumination image and is preprocessed to enhance its brightness and improve its quality, so that accurate image feature points can be obtained and the current pose of the intelligent robot can be located accurately; that is, the intelligent robot is positioned accurately.
Referring to fig. 7, fig. 7 is a schematic flowchart of a pose positioning method according to yet another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 7. The pose positioning method can comprise the following steps:
step S510: and acquiring an environment image of the current environment of the intelligent robot.
Step S520: and extracting image feature points in the environment image as first feature points.
Step S530: and when the first characteristic point meets a preset processing condition, acquiring the image brightness of the environment image.
In the embodiment of the present application, steps S510 to S530 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S540: and when the image brightness is smaller than a first brightness threshold value, judging whether the image brightness is smaller than a second brightness threshold value, wherein the second brightness threshold value is smaller than the first brightness threshold value.
In this embodiment, the second brightness threshold may be preset; an image whose brightness is below the second brightness threshold can be regarded as extremely dark, and even after image processing operations such as image enhancement, filtering and contrast improvement its quality remains poor. Therefore, after the image brightness has been judged to be smaller than the first brightness threshold, so that the environment image is a low-illumination image, it can be further judged whether the image brightness is below the second brightness threshold. If it is not smaller than the second brightness threshold, the environment image can still be preprocessed to improve its image quality: the environment image is preprocessed to obtain the target image, the image feature points in the target image are extracted as second feature points, and the current pose of the intelligent robot is then determined according to the second feature points.
Step S550: and if the image brightness is smaller than the second brightness threshold value, starting the light supplementing device.
Step S560: and re-shooting an environment image of the current environment of the intelligent robot, wherein the re-shot environment image is used for determining the current pose of the intelligent robot.
Alternatively, when the image brightness is smaller than the second brightness threshold, the environment image is extremely dark; even if its brightness were increased by image processing operations, the resulting image would be severely distorted, its quality would remain poor, and a sufficient number of evenly distributed first feature points still could not be extracted. Therefore, by turning on a light supplementing device (such as a flash), re-capturing the environment image of the environment in which the intelligent robot is currently located, and determining the current pose of the intelligent robot from the re-captured environment image, the problem of poor-quality first feature points caused by the low brightness of the original environment image is effectively solved.
In this embodiment, whether the light supplementing device needs to be turned on to re-capture the environment image is decided by judging whether the image brightness of the environment image is below the second brightness threshold. This avoids the image distortion that would result from trying to raise the brightness through image processing when the image is extremely dark, and effectively solves the problem of poor-quality first feature points caused by the low brightness of the original environment image: with the light supplementing device turned on, an environment image whose brightness meets the requirement is re-captured, and the current pose of the intelligent robot can be located accurately; that is, the intelligent robot is positioned accurately.
Referring to fig. 8, fig. 8 is a schematic flowchart of a pose positioning method according to still another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 8. The pose positioning method can comprise the following steps:
step S610: and acquiring an environment image of the current environment of the intelligent robot.
Step S620: and extracting image feature points in the environment image as first feature points.
Step S630: and when the first characteristic point meets a preset processing condition, acquiring the image brightness of the environment image.
In the embodiment of the present application, steps S610 to S630 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S640: and when the image brightness is greater than or equal to a first brightness threshold value, judging whether the image brightness is greater than a third brightness threshold value, wherein the third brightness threshold value is greater than the first brightness threshold value.
In this embodiment, the third brightness threshold may be preset; an image whose brightness is greater than the third brightness threshold can be regarded as extremely bright, i.e., over-exposed. If the brightness or contrast of such an environment image were lowered by image processing techniques, the environment image would be distorted, and the image feature points extracted from it would again be insufficient in number or unevenly distributed (i.e., of poor quality), affecting the accuracy of the subsequent positioning of the intelligent robot.
Therefore, when the image brightness of the environment image is greater than or equal to the first brightness threshold, it may be further determined whether the image brightness is greater than the third brightness threshold, that is, whether the currently acquired environment image is suitable for reducing the brightness or contrast thereof by the image processing technology.
Step S650: and if the image brightness is greater than a third brightness threshold, performing parameter adjustment on imaging parameters of the image acquisition device, wherein the parameter adjustment is used for reducing the brightness of the image shot by the image acquisition device.
Therefore, the brightness of the image captured by the image acquisition device can be reduced by adjusting its imaging parameters. The imaging parameters may be, for example, the sensitivity, the aperture or the shutter time, and the parameter adjustment may be to lower the sensitivity, use a smaller aperture or shorten the shutter time, any of which reduces the brightness of the newly captured environment image; this embodiment does not limit this.
Optionally, if the image brightness is greater than or equal to the first brightness threshold but not greater than the third brightness threshold, the brightness of the environment image may be adjusted directly by image processing techniques to obtain the target image; the image feature points in the target image are then extracted as second feature points, and the current pose of the intelligent robot is determined according to the second feature points.
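The brightness-based branching across the embodiments of figs. 6 to 8 can be summarised by the sketch below; the first threshold of 180 follows the example given earlier, while the second and third thresholds (30 and 230) and the action names are purely illustrative assumptions.

```python
def choose_brightness_action(brightness, first=180, second=30, third=230):
    """Decide how to handle an environment image based on its brightness value (0-255).
    The second and third thresholds here are assumed values for illustration."""
    if brightness < first:
        if brightness < second:
            return "turn_on_fill_light_and_recapture"    # too dark to fix by enhancement
        return "low_light_enhancement"                   # preprocess to obtain the target image
    if brightness > third:
        return "reduce_exposure_and_recapture"           # lower sensitivity/aperture/shutter time
    return "adjust_brightness_by_image_processing"       # mildly over-exposed image
```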
Step S660: and re-shooting an environment image of the current environment of the intelligent robot, wherein the re-shot environment image is used for determining the current pose of the intelligent robot.
In the embodiment of the present application, step S660 may refer to the contents in the foregoing embodiments, and is not described herein again.
In this embodiment, whether the imaging parameters of the image acquisition device need to be adjusted and the environment image re-captured is decided by judging whether the image brightness of the environment image exceeds the third brightness threshold. This avoids the image distortion that would result from lowering the brightness through image processing when the image is extremely bright, and effectively solves the problem of poor-quality first feature points caused by the high brightness of the original environment image: by adjusting the imaging parameters of the image acquisition device, an environment image whose brightness meets the requirement is re-captured, and the current pose of the intelligent robot can be located more accurately; that is, the intelligent robot is positioned accurately.
Referring to fig. 9, fig. 9 is a schematic flowchart of a pose positioning method according to still another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 9. The pose positioning method can comprise the following steps:
step S710: and acquiring an environment image of the current environment of the intelligent robot.
Step S720: and extracting image feature points in the environment image as first feature points.
In the embodiment of the present application, steps S710 to S720 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S730: and when the first feature point does not accord with the preset processing condition, acquiring the texture complexity of the environment image.
In this embodiment, the factors affecting the number and distribution of the first feature points of the environment image also include its texture complexity: the more complex the texture in the environment image, the more image feature points it contains and the denser their distribution. Therefore, when the first feature points in the environment image do not meet the preset processing condition, the texture complexity of the environment image can additionally be obtained and analysed to judge whether the currently extracted first feature points allow the intelligent robot to be positioned accurately. Texture feature extraction algorithms include the Local Binary Pattern (LBP) algorithm and the Gaussian Markov Random Field (GMRF) algorithm, among others, which this embodiment does not limit.
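A simple stand-in measure of texture complexity is sketched below (the variance of the gradient magnitude); it is not the LBP or GMRF algorithm named above, only an assumed placeholder showing where such a measure would plug in.

```python
import cv2

def texture_complexity(image_bgr):
    """Return a scalar that grows with the richness of the image texture."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return float(magnitude.var())      # higher variance means more complex texture
```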
Step S740: and when the texture complexity is larger than a preset texture threshold, dividing the environment image into a plurality of sub-images.
In this embodiment, the preset texture threshold may be preset or adjusted according to the actual application scenario, which is not limited in this embodiment. When the texture complexity is greater than the preset texture threshold, the texture in the environment image is relatively complex, so the extracted first feature points may be excessive in number or unevenly distributed, which would affect the accuracy of subsequently calculating the current pose of the intelligent robot from them and reduce the positioning accuracy. Therefore, when the texture complexity is greater than the preset texture threshold, the environment image can be divided into a plurality of sub-images. The division can be equal, so that all sub-images have the same area, or the sub-images can be divided according to the types of physical objects in the environment image, with each physical object assigned to one sub-image; there are various ways of dividing the sub-images, which this embodiment does not limit.
Step S750: extracting a partial number of image feature points from each of the plurality of sub-images as the second feature points.
Based on this, after the environment image has been divided into sub-images, each sub-image still contains many image feature points because the texture of the environment image is complex. In order to extract an appropriate number of image feature points for calculating the current pose of the intelligent robot, only a partial number of image feature points is extracted from each sub-image as the second feature points.
In some embodiments, when the environment image is divided equally into a plurality of sub-images of equal area, the partial number may be a specified value set according to the number of image feature points contained in each sub-image, where the specified value is smaller than the total number of image feature points in each sub-image; image feature points up to this specified value are then extracted from each sub-image as the second feature points.
For example, the environment image is divided into three sub-images A, B and C, where sub-image A contains 200 image feature points, sub-image B contains 80 and sub-image C contains 70. The specified value may then be set to 60, that is, 60 image feature points are extracted from each sub-image as the second feature points. Extracting the specified number of image feature points from each sub-image ensures both a sufficient number of image feature points and their even distribution.
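A minimal sketch of this equal-area case is given below, assuming OpenCV's ORB detector; the grid size, the per-cell quota of 60 points and the function name are illustrative assumptions rather than values fixed by this embodiment.

```python
import cv2

def extract_per_subimage(gray, rows=2, cols=2, per_cell=60):
    """Split the image into an equal-area grid and keep at most `per_cell`
    of the strongest ORB keypoints in each cell, so the second feature
    points are limited in number and evenly distributed."""
    orb = cv2.ORB_create(nfeatures=500)
    h, w = gray.shape[:2]
    keypoints = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            kps = orb.detect(gray[y0:y1, x0:x1], None)
            kps = sorted(kps, key=lambda k: k.response, reverse=True)[:per_cell]
            # shift the per-cell coordinates back into the full-image frame
            keypoints += [cv2.KeyPoint(k.pt[0] + x0, k.pt[1] + y0, k.size,
                                       k.angle, k.response, k.octave,
                                       k.class_id) for k in kps]
    return keypoints
```

Keeping only the strongest responses per cell is one simple way to cap the feature count while preserving spatial coverage.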
In other embodiments, if the environment image is divided into sub-images according to the types of physical objects, the ratio of the numbers of image feature points extracted from the sub-images may be set equal to the ratio of their areas, and the number of image feature points to extract from each sub-image is then determined from this ratio.
Illustratively, the environment image is divided into three sub-images A, B and C: sub-image A contains 200 image feature points with an area of 15 square centimeters, sub-image B contains 150 image feature points with an area of 10 square centimeters, and sub-image C contains 70 image feature points with an area of 5 square centimeters, so the area ratio of sub-image A, sub-image B and sub-image C is 3:2:1. Correspondingly, the numbers of image feature points extracted from sub-image A, sub-image B and sub-image C must also be in the ratio 3:2:1; if 50 image feature points are extracted from sub-image C, then 100 can be extracted from sub-image B and 150 from sub-image A.
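A small sketch of this proportional allocation follows; the overall budget of 300 points and the function name are assumptions chosen so the split reproduces the figures in the example above.

```python
def allocate_counts(areas, total):
    """Distribute `total` feature points across sub-images in proportion
    to their areas, so the number ratio equals the area ratio."""
    area_sum = float(sum(areas))
    return [int(round(total * a / area_sum)) for a in areas]

print(allocate_counts([15, 10, 5], 300))  # [150, 100, 50]
```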
Step S760: and determining the current pose of the intelligent robot according to the second characteristic point.
In the embodiment of the present application, step S760 may refer to the contents in the foregoing embodiments, and is not described herein again.
In this embodiment, whether the feature points of the environment image need to be re-extracted is decided from the texture complexity of the environment image. This prevents a complex texture from producing too many or unevenly distributed first feature points, which would reduce the accuracy of the subsequent positioning of the intelligent robot, or from lengthening the pose calculation because of the large number of image feature points. When the texture of the environment image is judged to be complex, dividing the image into sub-images and re-extracting the second feature points yields image feature points of suitable number and quality, so the current pose of the intelligent robot can be calculated more accurately, that is, the intelligent robot is accurately positioned.
Referring to fig. 10, fig. 10 is a schematic flowchart of a pose positioning method according to still another embodiment of the present application, and the pose positioning method is applied to an intelligent robot. The pose positioning method provided by the embodiment of the present application will be described in detail below with reference to fig. 10. The pose positioning method can comprise the following steps:
In this embodiment, the intelligent robot includes a plurality of sensors for determining pose information of the intelligent robot and an image acquisition device.
Step S810: and acquiring an environment image of the current environment of the intelligent robot.
Step S820: and extracting image feature points in the environment image as first feature points.
Step S830: and when the first characteristic point meets a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the precision of the pose positioned according to the first characteristic point is smaller than a preset precision, and the preprocessing is used for improving the image quality.
Step S840: and extracting image feature points in the target image as second feature points.
In the embodiment of the present application, steps S810 to S840 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S850: and acquiring pose data acquired by each sensor in the multiple sensors.
In this embodiment, the intelligent robot includes multiple sensors for determining its pose information, each of which can acquire pose data of the intelligent robot. The multiple sensors may include a wheel odometer, an inertial measurement unit (IMU), an optical flow meter, and the like, which is not limited in this embodiment. The wheel odometer acquires the mileage traveled by the intelligent robot, that is, how far and in which direction the robot has traveled; the optical flow meter captures changes in optical flow, that is, the light reflected from physical objects into its camera and the direction in which those objects move relative to the robot; and the inertial measurement unit measures the three-axis attitude angles (or angular velocities) and the acceleration of the intelligent robot.
Step S860: and determining a plurality of pose information of the intelligent robot according to the pose data acquired by each sensor.
Based on this, a plurality of pose information of the intelligent robot can be determined from the pose data acquired by each sensor. Specifically, the pose information determined from the mileage information collected by the wheel odometer is taken as first pose information, where the pose information may include position coordinates, heading, forward speed and turning speed; the pose information determined from the optical flow changes collected by the optical flow meter is taken as second pose information; and the pose information determined from the angular velocity and acceleration collected by the inertial measurement unit is taken as third pose information.
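This embodiment does not prescribe the odometry model, but as one hedged illustration, the first pose information could come from standard differential-drive dead reckoning over the wheel odometer readings; the wheel-base parameter and the midpoint integration scheme below are assumptions for the sketch, not part of the disclosure.

```python
import math

def integrate_wheel_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Advance a planar pose (x, y, theta) by one odometry step given the
    distances travelled by the left and right wheels (differential drive)."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```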
Step S870: and determining target pose information of the intelligent robot according to the second characteristic point.
In the embodiment of the present application, step S870 may refer to the contents in the foregoing embodiments, and is not described herein again.
Step S880: acquiring a weight corresponding to the image acquisition device as a first weight, and acquiring a weight corresponding to each pose sensor as a second weight, wherein the first weight is used for representing the importance degree of the environment image acquired by the image acquisition device to the current pose of the intelligent robot, and the second weight is used for representing the importance degree of the pose data acquired by the pose sensors to the current pose of the intelligent robot.
Step S890: and performing information fusion on the plurality of pose information and the target pose information based on the first weight and a second weight corresponding to each pose sensor to obtain the current pose of the intelligent robot.
In this embodiment, after the weight corresponding to the image acquisition device is obtained as the first weight and the weight corresponding to each pose sensor is obtained as the second weight, the target pose information, the first pose information, the second pose information and the third pose information can be fused according to these weights. The information fusion may be performed by computing a weighted sum of the four pieces of pose information based on the weight corresponding to the image acquisition device and the weight corresponding to each sensor, thereby obtaining the current pose of the intelligent robot.
For example, the coordinates of the target pose information are (x1, y1), the coordinates of the first pose information are (x2, y2), the coordinates of the second pose information are (x3, y3), and the coordinates of the third pose information are (x4, y4). The weight corresponding to the image acquisition device is 0.4, the weight corresponding to the wheel odometer is 0.3, the weight corresponding to the optical flow meter is 0.3, and the weight corresponding to the inertial measurement unit is 0.2. The coordinates of the current pose of the intelligent robot can then be calculated as (0.4x1 + 0.3x2 + 0.3x3 + 0.2x4, 0.4y1 + 0.3y2 + 0.3y3 + 0.2y4).
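A minimal sketch of this weighted fusion is given below, mirroring the worked example; the function name is an illustrative assumption, and in practice the weights would usually be chosen (or normalised) to sum to 1.

```python
def fuse_poses(poses, weights):
    """Weighted sum of 2-D pose estimates (visual, wheel odometer,
    optical flow, IMU), as in the worked example above."""
    x = sum(w * px for w, (px, _) in zip(weights, poses))
    y = sum(w * py for w, (_, py) in zip(weights, poses))
    return (x, y)

# e.g. fuse_poses([(x1, y1), (x2, y2), (x3, y3), (x4, y4)],
#                 [0.4, 0.3, 0.3, 0.2])
```

If the image quality remains poor after preprocessing, the first weight can simply be lowered to the designated weight before calling such a function, as described next.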
In some embodiments, if the image quality of the target image obtained after preprocessing the environment image is still poor, the weight corresponding to the image acquisition device may be reduced to a designated weight before the pose information is fused, so that the influence of the environment image on the subsequent calculation of the current pose of the intelligent robot is reduced; the plurality of pose information and the target pose information are then fused based on the designated weight and the weight corresponding to each sensor to obtain the current pose of the intelligent robot.
In this embodiment, when the image quality of the environment image is poor, the weight corresponding to the image acquisition device can be adjusted to reduce the influence of the environment image on the calculation of the current pose of the intelligent robot and to improve the positioning accuracy, that is, to better achieve accurate positioning of the intelligent robot.
Referring to fig. 11, a block diagram of a pose positioning apparatus 900 according to an embodiment of the present application is shown, and is applied to an intelligent robot. The apparatus 900 may include: an image acquisition module 910, a first feature extraction module 920, an image processing module 930, a second feature extraction module 940, and a pose determination module 950.
The image acquisition module 910 is configured to acquire an environment image of an environment where the intelligent robot is currently located;
the first feature extraction module 920 is configured to extract image feature points in the environment image as first feature points;
the image processing module 930 is configured to preprocess the environment image to obtain a target image when the first feature points meet a preset processing condition, where the preset processing condition is that the number of the first feature points is within a preset number threshold, and the preprocessing is used to improve the image quality;
the second feature extraction module 940 is configured to extract image feature points in the target image as second feature points;
the pose determination module 950 is configured to determine the current pose of the intelligent robot according to the second feature point.
In some embodiments, the pose positioning apparatus 900 may further include: a number judgment module. The number judgment module is configured to, before the environment image is preprocessed to obtain the target image when the first feature points meet the preset processing condition, judge whether the number of the first feature points is smaller than a preset number threshold, where the number is positively correlated with the accuracy of the pose located from the first feature points, and the preset number threshold is set according to the preset accuracy; if the number is smaller than the preset number threshold, determine that the first feature points meet the preset processing condition; and if the number is greater than or equal to the preset number threshold, determine that the first feature points do not meet the preset processing condition.
In other embodiments, the pose positioning apparatus 900 may further include: the device comprises an image dividing module, a characteristic point quantity obtaining module and a characteristic point quantity judging module. The image dividing module may be configured to divide the environment image into a plurality of sub-regions before the environment image is preprocessed to obtain the target image when the first feature point meets a preset processing condition. The number of feature points acquisition module may be configured to acquire the number of the first feature points included in each sub-region. The characteristic point quantity judging module can be used for judging whether the quantity is in a specified quantity interval or not; if the number of the first feature points contained in each sub-area is within the specified number interval, judging that the first feature points do not meet the preset processing condition; and if the number of the first characteristic points contained in any sub-area is not within a specified number interval, judging that the first characteristic points conform to the preset processing condition.
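A brief sketch of this uniformity check follows; the interval bounds are application-specific placeholders, not values from this embodiment.

```python
def distribution_ok(counts_per_region, lo=20, hi=150):
    """Return True when every sub-region contains a number of first feature
    points inside the specified interval [lo, hi], i.e. the distribution is
    judged uniform enough and no preprocessing is triggered."""
    return all(lo <= n <= hi for n in counts_per_region)
```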
In some embodiments, the pose positioning apparatus 900 may further include: the device comprises a brightness acquisition module and a first judgment module. The brightness obtaining module may be configured to obtain image brightness of the environment image before the environment image is preprocessed to obtain the target image. The first determining module may be configured to determine whether the image brightness is smaller than a first brightness threshold, and if the image brightness is smaller than the first brightness threshold, execute the preprocessing on the environment image to obtain a target image.
In this manner, the intelligent robot includes a light supplement device, and the pose positioning device 900 may include: the second judging module and the light supplementing opening module. The second determining module may be configured to determine whether the image brightness is smaller than a second brightness threshold value when the image brightness is smaller than the first brightness threshold value after the determining whether the image brightness is smaller than the first brightness threshold value, where the second brightness threshold value is smaller than the first brightness threshold value. The light supplement starting module may be configured to start the light supplement device if the image brightness is less than the second brightness threshold. The image capturing module 910 may be specifically configured to capture an environment image of an environment where the intelligent robot is currently located again, where the captured environment image is used to determine the current pose of the intelligent robot.
In this mode, the intelligent robot includes an image capturing device, and the pose positioning device 900 may include: a third judging module and a parameter adjusting module. The third determining module may be configured to determine whether the image brightness is greater than a third brightness threshold value when the image brightness is greater than or equal to the first brightness threshold value after the determining whether the image brightness is less than the first brightness threshold value, where the third brightness threshold value is greater than the first brightness threshold value. The parameter adjusting module may be configured to perform parameter adjustment on an imaging parameter of the image capturing device if the image brightness is greater than a third brightness threshold, where the parameter adjustment is used to reduce the brightness of an image captured by the image capturing device. The image capturing module 910 may be specifically configured to capture an environment image of an environment where the intelligent robot is currently located again, where the captured environment image is used to determine a current pose of the intelligent robot.
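Taken together, the brightness-related modules implement a simple decision rule, sketched below; the three threshold values and the action labels are placeholders, since no concrete numbers are given here.

```python
import cv2
import numpy as np

def brightness_action(bgr, t1=80, t2=40, t3=200):
    """Decide how to handle the environment image from its mean brightness;
    t2 < t1 < t3, matching the second, first and third brightness thresholds."""
    mean = float(np.mean(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)))
    if mean < t1:
        if mean < t2:
            return "turn_on_fill_light_and_recapture"
        return "preprocess_image"            # e.g. brightness enhancement
    if mean > t3:
        return "reduce_exposure_and_recapture"
    return "use_image_as_is"
```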
In some embodiments, the pose positioning apparatus 900 may include: the device comprises a texture acquisition module, a subimage division module and a third feature extraction module. The texture obtaining module may be configured to obtain the texture complexity of the environment image when the first feature point does not meet the preset processing condition after determining that the first feature point does not meet the preset processing condition if the number is greater than or equal to the preset number threshold. The sub-image dividing module may be configured to divide the environment image into a plurality of sub-images when the texture complexity is greater than a preset texture threshold. The third feature extraction module may be configured to extract a partial number of image feature points from each of the plurality of sub-images as the second feature points. The pose determination module 950 may be specifically configured to determine the current pose of the intelligent robot according to the second feature point.
In other embodiments, the intelligent robot includes an image capturing device for capturing the environment image and a plurality of pose sensors for capturing pose data, and the pose positioning device 900 may include: the device comprises a data acquisition module and a first information determination module. The data acquisition module can be used for acquiring the pose data acquired by each sensor in the plurality of sensors before the current pose of the intelligent robot is determined according to the second feature point. The first information determination module may be configured to determine a plurality of pose information of the intelligent robot from the pose data collected by each sensor. Pose determination module 950 may be specifically configured to: determining target pose information of the intelligent robot according to the second feature points; acquiring a weight corresponding to the image acquisition device as a first weight, and acquiring a weight corresponding to each pose sensor as a second weight, wherein the first weight is used for representing the importance degree of the environment image acquired by the image acquisition device to the current pose of the intelligent robot, and the second weight is used for representing the importance degree of the pose data acquired by the pose sensors to the current pose of the intelligent robot; and performing information fusion on the plurality of pose information and the target pose information based on the first weight and a second weight corresponding to each pose sensor to obtain the current pose of the intelligent robot.
In this manner, the pose positioning apparatus 900 may further include: a weight adjusting module. The weight adjusting module may be configured to reduce the weight corresponding to the image acquisition device to a designated weight before the information fusion is performed on the pose information and the target pose information based on the weight corresponding to the image acquisition device and the weight corresponding to each sensor to obtain the current pose of the intelligent robot. The pose determination module 950 may be specifically configured to perform information fusion on the plurality of pose information and the target pose information based on the designated weight and the weight corresponding to each sensor, so as to obtain the current pose of the intelligent robot.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
In summary, in the scheme provided by the application, firstly, an environment image of the current environment where the intelligent robot is located is obtained, and image feature points in the environment image are extracted as first feature points; when the first characteristic point meets a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the accuracy of the pose positioned according to the first characteristic point is smaller than the preset accuracy, and the preprocessing is used for improving the image quality; then extracting image characteristic points in the target image as second characteristic points; and finally, determining the current pose of the intelligent robot according to the second characteristic point. Therefore, the problem that the calculated current pose of the intelligent robot is inaccurate due to poor image quality of the acquired environment image can be solved; that is to say, when the image quality of the environment image is poor, the environment image can be preprocessed, the image quality of the environment image is greatly improved, so that more accurate feature points can be obtained, the current pose of the intelligent robot can be more accurately positioned, and namely, the intelligent robot can be accurately positioned.
An intelligent robot provided by the present application will be described with reference to the drawings.
Referring to fig. 12, fig. 12 is a block diagram illustrating a structure of an intelligent robot 1000 according to an embodiment of the present application, where the pose positioning method according to the embodiment of the present application can be executed by the intelligent robot 1000.
The intelligent robot 1000 in the embodiment of the present application may include one or more of the following components: a processor 1001, a memory 1002, and one or more applications, where the one or more applications may be stored in the memory 1002 and configured to be executed by the one or more processors 1001, and the one or more applications are configured to perform the method described in the foregoing method embodiments.
The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the intelligent robot 1000 using various interfaces and lines, and performs the various functions of the intelligent robot 1000 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1002 and by invoking the data stored in the memory 1002. Optionally, the processor 1001 may be implemented in hardware in at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA) form. The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 1001 and may instead be implemented by a separate communication chip.
The memory 1002 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1002 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1002 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the intelligent robot 1000 in use (such as the various correspondences described above), and the like.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 1100 has stored therein program code that can be called by a processor to perform the method described in the above-described method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps of the method described above. The program code may be read from or written into one or more computer program products. The program code 1110 may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A pose positioning method is applied to an intelligent robot, and comprises the following steps:
acquiring an environment image of the current environment of the intelligent robot;
extracting image feature points in the environment image as first feature points;
when the first feature point meets a preset processing condition, preprocessing the environment image to obtain a target image, wherein the preset processing condition is used for representing that the accuracy of the pose positioned according to the first feature point is smaller than a preset accuracy, and the preprocessing is used for improving the image quality;
extracting image feature points in the target image as second feature points;
and determining the current pose of the intelligent robot according to the second characteristic point.
2. The method according to claim 1, wherein before the preprocessing the environment image to obtain the target image when the first feature point meets a preset processing condition, the method further comprises:
judging whether the number of the first feature points is smaller than a preset number threshold, wherein the number is positively correlated with the accuracy of the pose positioned according to the first feature points, and the preset number threshold is set according to the preset accuracy;
if the number is smaller than the preset number threshold, judging that the first feature point meets the preset processing condition;
and if the number is greater than or equal to the preset number threshold, judging that the first feature point does not accord with the preset processing condition.
3. The method according to claim 2, wherein after determining that the first feature point does not meet the predetermined processing condition if the number is greater than or equal to the predetermined number threshold, the method further comprises:
acquiring the texture complexity of the environment image;
when the texture complexity is larger than a preset texture threshold, dividing the environment image into a plurality of sub-images;
extracting a partial number of image feature points from each of the plurality of sub-images as the second feature points;
and performing the step of determining the current pose of the intelligent robot according to the second feature points.
4. The method according to claim 1, wherein before the preprocessing the environment image to obtain the target image when the first feature point meets a preset processing condition, the method further comprises:
dividing the environment image into a plurality of sub-regions;
acquiring the number of the first feature points contained in each sub-region;
judging whether the quantity is in a specified quantity interval or not;
if the number of the first feature points contained in each sub-area is within the specified number interval, judging that the first feature points do not meet the preset processing condition;
and if the number of the first characteristic points contained in any sub-area is not within a specified number interval, judging that the first characteristic points conform to the preset processing condition.
5. The method of claim 1, wherein prior to said pre-processing the environmental image to obtain the target image, the method further comprises:
acquiring the image brightness of the environment image;
judging whether the image brightness is smaller than a first brightness threshold value;
and if the image brightness is smaller than the first brightness threshold value, executing the preprocessing of the environment image to obtain a target image.
6. The method of claim 5, wherein the intelligent robot comprises a fill-in light device, and wherein after the determining whether the image brightness is less than a first brightness threshold, the method further comprises:
when the image brightness is smaller than the first brightness threshold, judging whether the image brightness is smaller than a second brightness threshold, wherein the second brightness threshold is smaller than the first brightness threshold;
if the image brightness is smaller than the second brightness threshold, the light supplementing device is started;
and re-shooting an environment image of the current environment of the intelligent robot, wherein the re-shot environment image is used for determining the current pose of the intelligent robot.
7. The method of claim 5, wherein the intelligent robot comprises an image capture device, and wherein after the determining whether the image brightness is less than a first brightness threshold, the method further comprises:
when the image brightness is greater than or equal to the first brightness threshold, judging whether the image brightness is greater than a third brightness threshold, wherein the third brightness threshold is greater than the first brightness threshold;
if the image brightness is larger than a third brightness threshold, performing parameter adjustment on imaging parameters of the image acquisition device, wherein the parameter adjustment is used for reducing the brightness of an image shot by the image acquisition device;
and re-shooting an environment image of the current environment of the intelligent robot, wherein the re-shot environment image is used for determining the current pose of the intelligent robot.
8. The method according to any one of claims 1-7, wherein the intelligent robot comprises an image acquisition device for acquiring the environment image and a plurality of pose sensors for acquiring pose data, and before the determining the current pose of the intelligent robot according to the second feature point, the method comprises:
acquiring pose data acquired by each sensor in the multiple sensors;
determining a plurality of pose information of the intelligent robot according to the pose data acquired by each sensor;
determining the current pose of the intelligent robot according to the second feature point, wherein the determining the current pose of the intelligent robot comprises the following steps:
determining target pose information of the intelligent robot according to the second feature points;
acquiring a weight corresponding to the image acquisition device as a first weight, and acquiring a weight corresponding to each pose sensor as a second weight, wherein the first weight is used for representing the importance degree of the environment image acquired by the image acquisition device to the current pose of the intelligent robot, and the second weight is used for representing the importance degree of the pose data acquired by the pose sensors to the current pose of the intelligent robot;
and performing information fusion on the plurality of pose information and the target pose information based on the first weight and a second weight corresponding to each pose sensor to obtain the current pose of the intelligent robot.
9. The method of claim 8, wherein prior to the information fusing the plurality of pose information and the target pose information based on the weights corresponding to the image capture devices and the weights corresponding to each sensor to obtain the current pose of the intelligent robot, the method further comprises:
reducing the weight corresponding to the image acquisition device to a designated weight;
the information fusion of the pose information and the target pose information is performed based on the weight corresponding to the image acquisition device and the weight corresponding to each sensor, so as to obtain the current pose of the intelligent robot, and the method comprises the following steps:
and performing information fusion on the plurality of pose information and the target pose information based on the designated weight and the weight corresponding to each sensor to obtain the current pose of the intelligent robot.
10. A pose positioning apparatus, applied to an intelligent robot, wherein the apparatus comprises:
the image acquisition module is used for acquiring an environment image of the current environment where the intelligent robot is located;
the first feature extraction module is used for extracting image feature points in the environment image as first feature points;
the image processing module is used for preprocessing the environment image to obtain a target image when the first characteristic points meet preset processing conditions, wherein the preprocessing conditions are that the number of the first characteristic points is within a preset number threshold value, and the preprocessing is used for improving the image quality;
the second feature extraction module is used for extracting image feature points in the target image as second feature points;
and the pose determining module is used for determining the current pose of the intelligent robot according to the second characteristic point.
CN202110852886.2A 2021-07-27 2021-07-27 Pose positioning method and device, intelligent robot and storage medium Pending CN113592946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852886.2A CN113592946A (en) 2021-07-27 2021-07-27 Pose positioning method and device, intelligent robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110852886.2A CN113592946A (en) 2021-07-27 2021-07-27 Pose positioning method and device, intelligent robot and storage medium

Publications (1)

Publication Number Publication Date
CN113592946A true CN113592946A (en) 2021-11-02

Family

ID=78250752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852886.2A Pending CN113592946A (en) 2021-07-27 2021-07-27 Pose positioning method and device, intelligent robot and storage medium

Country Status (1)

Country Link
CN (1) CN113592946A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608524A (en) * 2021-06-16 2021-11-05 深圳甲壳虫智能有限公司 Automatic walking device, control method and device thereof, and storage medium
CN113608524B (en) * 2021-06-16 2024-04-16 深圳甲壳虫智能有限公司 Automatic walking device, control method thereof, control device and storage medium
CN113804222A (en) * 2021-11-16 2021-12-17 浙江欣奕华智能科技有限公司 Positioning accuracy testing method, device, equipment and storage medium
CN113804222B (en) * 2021-11-16 2022-03-04 浙江欣奕华智能科技有限公司 Positioning accuracy testing method, device, equipment and storage medium
CN114750147A (en) * 2022-03-10 2022-07-15 深圳甲壳虫智能有限公司 Robot space pose determining method and device and robot
CN114750147B (en) * 2022-03-10 2023-11-24 深圳甲壳虫智能有限公司 Space pose determining method and device of robot and robot

Similar Documents

Publication Publication Date Title
CN113592946A (en) Pose positioning method and device, intelligent robot and storage medium
EP3330925B1 (en) Method for 3d reconstruction of an environment of a mobile device, corresponding computer program product and device
US10373380B2 (en) 3-dimensional scene analysis for augmented reality operations
US9679384B2 (en) Method of detecting and describing features from an intensity image
CN112287860B (en) Training method and device of object recognition model, and object recognition method and system
CN111862201B (en) Deep learning-based spatial non-cooperative target relative pose estimation method
KR20180044279A (en) System and method for depth map sampling
CN108171715B (en) Image segmentation method and device
US20170278258A1 (en) Method Of Detecting And Describing Features From An Intensity Image
US20220301277A1 (en) Target detection method, terminal device, and medium
JPWO2019021569A1 (en) Information processing apparatus, information processing method, and program
JP6817742B2 (en) Information processing device and its control method
CN112361990A (en) Laser pattern extraction method and device, laser measurement equipment and system
JP6758263B2 (en) Object detection device, object detection method and object detection program
CN113689365B (en) Target tracking and positioning method based on Azure Kinect
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN107767366B (en) A kind of transmission line of electricity approximating method and device
US20230290061A1 (en) Efficient texture mapping of a 3-d mesh
CN111915632B (en) Machine learning-based method for constructing truth database of lean texture target object
WO2019165626A1 (en) Methods and apparatus to match images using semantic features
JP5051671B2 (en) Information processing apparatus, information processing method, and program
CN114359915A (en) Image processing method, device and readable storage medium
TWI749365B (en) Motion image integration method and motion image integration system
CN117523428B (en) Ground target detection method and device based on aircraft platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination