CN112614181A - Robot positioning method and device based on highlight target - Google Patents

Robot positioning method and device based on highlight target

Info

Publication number
CN112614181A
Authority
CN
China
Prior art keywords
positioning, highlight, target, image, information
Prior art date
2020-12-01
Legal status
Granted
Application number
CN202011383266.0A
Other languages
Chinese (zh)
Other versions
CN112614181B (en)
Inventor
李昂
郭盖华
Current Assignee
Shenzhen LD Robot Co Ltd
Original Assignee
Shenzhen LD Robot Co Ltd
Priority date
2020-12-01
Filing date
2020-12-01
Publication date
2021-04-06
Application filed by Shenzhen LD Robot Co Ltd
Priority to CN202011383266.0A (2020-12-01)
Publication of CN112614181A (2021-04-06)
Application granted
Publication of CN112614181B (2024-03-22)
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The application is suitable for the field of intelligent robots and provides a robot positioning method and device based on a highlight target. The positioning method comprises: acquiring a plurality of positioning images containing highlight targets; identifying the coordinate information corresponding to the highlight target in each positioning image; and determining positioning information of the robot based on the plurality of pieces of coordinate information. The positioning method provided by the application positions the robot based on indoor highlight targets, which are generally fixed relative to the indoor scene, and thereby solves the prior-art problem that the position of an indoor robot cannot be accurately determined when an obstacle moves.

Description

Robot positioning method and device based on highlight target
Technical Field
The application belongs to the field of intelligent robots, and particularly relates to a robot positioning method and device based on a highlight target.
Background
With the continuous development of the robot field, robots such as sweeping robots are used more and more in daily life. In these applications, whether the position of the robot can be accurately determined is very important for path planning. Because most robots used in daily life work in an indoor environment, indoor positioning is the key to autonomous navigation and is of great significance for improving the automation level of the robot.
In existing indoor positioning technology, information about indoor obstacles is generally detected by optical sensors and vision sensors, and an indoor map of the robot's surroundings is built inside the robot. Once an obstacle moves, the positioning result deviates greatly and the indoor robot can no longer be accurately positioned.
Disclosure of Invention
The embodiments of the application provide a robot positioning method and device based on a highlight target, which position the robot based on indoor highlight targets. Since indoor highlight targets are generally fixed relative to the indoor scene, this solves the prior-art problem that the position of an indoor robot cannot be accurately determined when an obstacle moves.
In a first aspect, an embodiment of the present application provides a robot positioning method based on a highlight target, including: acquiring a plurality of positioning images containing highlight targets; identifying corresponding coordinate information of the highlight target in each positioning image respectively; determining positioning information of the robot based on a plurality of the coordinate information.
In a second aspect, an embodiment of the present application provides a robot positioning device based on a highlight object, including: the positioning image acquisition module is used for acquiring a plurality of positioning images containing highlight targets; the highlight target identification module is used for identifying the corresponding coordinate information of the highlight target in each positioning image; and the positioning information determining module is used for determining the positioning information of the robot based on the coordinate information.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including: the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the first aspects described above.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the application have the following advantage: the robot is positioned based on indoor highlight targets, which are generally fixed relative to the indoor scene, so the prior-art problem that the position of an indoor robot cannot be accurately determined when an obstacle moves is solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a flowchart of an implementation of a positioning method according to a first embodiment of the present application;
fig. 2 is a flowchart of an implementation of a positioning method according to a second embodiment of the present application;
fig. 3 is a flowchart of an implementation of a positioning method according to a third embodiment of the present application;
fig. 4 is a flowchart of an implementation of a positioning method according to a fourth embodiment of the present application;
fig. 5 is a flowchart of an implementation of a positioning method according to a fifth embodiment of the present application;
fig. 6 is a flowchart of an implementation of a positioning method according to a sixth embodiment of the present application;
FIG. 7 is a schematic structural diagram of a positioning device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, the main execution body of the flow is a terminal device. The terminal devices include but are not limited to servers, computers, smart phones, tablet computers, and other devices capable of executing the positioning method. Preferably, the terminal device is a robot device that can acquire the positioning images through a vision sensor or the like. Fig. 1 shows a flowchart of an implementation of the positioning method provided in the first embodiment of the present application, which is detailed as follows:
in S101, a plurality of positioning images including a highlight target are acquired.
In this embodiment, the plurality of positioning images containing highlight targets may specifically be acquired through a vision sensor installed on the robot device. Exemplarily, the vision sensor can acquire images of any part of the indoor environment where the robot is located; specifically, the vision sensor may be an omnidirectional camera.
In S102, the coordinate information of the highlight object in each of the positioning images is identified.
In the present embodiment, the coordinate information refers to coordinates of the highlight object in the positioning image, for example, coordinates of a lower left corner are (0,0), and coordinates of an upper right corner are (40,10), that is, the positioning image is divided into 40 units of coordinates in the horizontal direction, and 10 units of coordinates in the vertical direction.
In a possible implementation manner, identifying the coordinate information corresponding to the highlight target in each positioning image may specifically be: performing grayscale transformation on the positioning image to obtain a grayscale image, and determining the area of the highlight target according to the gray value of each pixel in the grayscale image. Specifically, pixels whose gray value is higher than a preset gray threshold are identified as highlight pixels, an area formed by contiguous adjacent highlight pixels is identified as the area of the highlight target, and the coordinates of a preset point of that area are identified as the coordinate information of the highlight target in the positioning image. The preset point can be the geometric center, the center of gravity, or the grayscale centroid of the area where the highlight target is located, and may be calculated by a clustering algorithm; reference may be made to clustering algorithms in the prior art, which are not described herein again.
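For illustration only, the following sketch shows one way the grayscale-threshold identification described above could be implemented; the use of OpenCV, the gray threshold value, the minimum-area filter, and the function names are assumptions of this sketch and are not specified by the application.

```python
import cv2
import numpy as np

def locate_highlight_targets(positioning_image, gray_threshold=220, min_area=20):
    """Return one (x, y) preset-point coordinate per candidate highlight region."""
    gray = cv2.cvtColor(positioning_image, cv2.COLOR_BGR2GRAY)
    # Pixels whose gray value exceeds the preset gray threshold are highlight pixels.
    _, mask = cv2.threshold(gray, gray_threshold, 255, cv2.THRESH_BINARY)
    # An area formed by contiguous adjacent highlight pixels is one highlight target.
    num_labels, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    coords = []
    for label in range(1, num_labels):                    # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:    # ignore isolated bright specks
            coords.append(tuple(centroids[label]))        # centroid used as the preset point
    return coords
```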
In S103, positioning information of the robot is determined based on a plurality of pieces of the coordinate information.
In the present embodiment, the positioning information includes a spatial horizontal position and an attitude rotation angle of the robot. Each positioning image is acquired by the robot at each position, and generally, the position of the highlight object in the space is not changed, so that the coordinate information in each positioning image reflects the spatial horizontal position and the attitude rotation angle (i.e., the facing direction of the vision sensor) of the robot at each position.
In a possible implementation manner, determining the positioning information of the robot based on the plurality of pieces of coordinate information may specifically be: presetting a standard positioning image, which corresponds to standard positioning information of the robot and to standard coordinate information of each highlight target; and performing a coordinate transformation between the coordinate information of the same highlight target and the standard coordinate information, so as to determine the coordinate-system transformation parameters between the standard positioning image and the positioning image. The transformation parameters represent the difference between the positioning information of the robot when it acquired the positioning image and the standard positioning information, so the positioning information can be determined from the transformation parameters and the standard positioning information.
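For illustration only, the sketch below estimates the coordinate-system transformation parameters between the standard coordinate information and the current coordinate information with a 2D rotation-plus-translation fit; the simplifying assumption that the image-plane offset maps directly onto the robot's spatial offset, as well as the function and variable names, are not from the application.

```python
import cv2
import numpy as np

def pose_from_standard(standard_coords, current_coords, standard_pose):
    """standard_coords, current_coords: (N, 2) matched coordinates of the same
    highlight targets (N >= 2); standard_pose: (x, y, theta) standard positioning information."""
    src = np.asarray(standard_coords, dtype=np.float32)
    dst = np.asarray(current_coords, dtype=np.float32)
    # Coordinate-system transformation parameters (rotation, translation, uniform scale).
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    if matrix is None:
        raise ValueError("transformation between the images could not be estimated")
    d_theta = np.arctan2(matrix[1, 0], matrix[0, 0])   # rotation between the two views
    dx, dy = matrix[0, 2], matrix[1, 2]                # translation in image coordinates
    x0, y0, theta0 = standard_pose
    # In practice the image-space offset would be converted to a spatial offset via calibration.
    return x0 + dx, y0 + dy, theta0 + d_theta
```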
In another possible implementation manner, the determining the positioning information of the robot based on the coordinate information may specifically be: presetting a positioning model, inputting the coordinate information into the positioning model, and outputting the positioning information of the robot; the positioning model is determined according to a plurality of training positioning images, and the training positioning images comprise training coordinate information and corresponding training positioning information; namely, the training coordinate information is used as input, the training positioning information is used as output, and the positioning model is obtained through training, wherein the positioning model can be a deep learning model.
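The application only states that a positioning model is trained from training coordinate information and training positioning information, possibly as a deep learning model. Purely as a stand-in, the sketch below uses a small multilayer perceptron regressor; the library, network size, and data layout are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_positioning_model(training_coords, training_poses):
    """training_coords: (num_images, num_targets * 2) flattened training coordinate information;
    training_poses: (num_images, 3) training positioning information (x, y, rotation angle)."""
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    model.fit(np.asarray(training_coords), np.asarray(training_poses))
    return model

def predict_positioning(model, coords):
    """coords: flattened coordinate information extracted from one positioning image."""
    return model.predict(np.asarray(coords, dtype=float).reshape(1, -1))[0]
```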
It should be understood that the positioning method implemented in the present embodiment is continuous, that is, the indoor environment of the robot is constantly monitored by the above-mentioned vision sensor, and the positioning images are continuously acquired at preset time intervals, for example, at 30 frames per second. The shooting angle of the vision sensor is fixed, and the vision sensor can be arranged at the top of the robot or at the side of the robot to form a fixed inclined angle with the robot.
In the embodiment, the robot can be positioned based on the indoor highlight target, the indoor highlight target is generally fixed relative to an indoor scene, and the problem that the position of the indoor robot cannot be accurately positioned due to movement of an obstacle in the prior art is solved.
Fig. 2 shows a flowchart of an implementation of the positioning method according to the second embodiment of the present application. Referring to fig. 2, in comparison with the embodiment shown in fig. 1, the positioning method S102 provided in this embodiment includes S201 to S203, which are detailed as follows:
further, the identifying the corresponding coordinate information of the highlight object in each of the positioning images includes:
in S201, a binarization process is performed on the positioning image to obtain a binarized image.
In this embodiment, in order to more accurately identify the highlight object in the positioning image, based on a digital image processing technique, the positioning image is subjected to binarization processing to obtain the binarized image.
In S202, a highlight region in the binarized image, in which the pixel value is greater than a preset pixel threshold value, is selected, and the highlight region is used as a target region of the highlight target.
In this embodiment, the pixel threshold may be preset, or may be adjusted according to the environment where the robot is located. Selecting a highlight area with a pixel value larger than a preset pixel threshold value in the binarized image, and taking the highlight area as a target area of the highlight target, specifically: and identifying pixels with pixel values higher than a preset pixel threshold value as highlight pixels, and identifying highlight areas formed by continuous adjacent highlight pixels as target areas of the highlight targets.
In S203, the coordinates of the preset point of the target region in the positioning image are identified as the coordinate information.
In this embodiment, the preset point may be a geometric center, a center of gravity, or a pixel center of mass of the region where the highlight target is located. In a possible implementation manner, the identifying, as the coordinate information, the coordinate of the central point of the target area in the positioning image may specifically be: and determining the pixel centroid of the target area through a clustering algorithm, determining the coordinate of the pixel centroid in the positioning image, and identifying the coordinate as the coordinate information. Specifically, reference may be made to a clustering algorithm in the prior art, which is not described herein again.
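As an illustration of the clustering-based pixel centroid mentioned above, the sketch below groups the highlight pixels of the binarized image into a known number of clusters and treats each cluster centre as the pixel centroid of one target area; the use of SciPy's k-means routine and the assumption that the number of highlight targets is known in advance are not from the application.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pixel_centroids(binarized_image, num_targets):
    """binarized_image: 2D array in which highlight pixels are non-zero."""
    ys, xs = np.nonzero(binarized_image)                # coordinates of all highlight pixels
    points = np.column_stack([xs, ys]).astype(float)    # one (x, y) sample per highlight pixel
    # One cluster per highlight target; each cluster centre serves as the pixel centroid.
    centres, _ = kmeans2(points, num_targets, minit='points')
    return [tuple(c) for c in centres]
```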
In this embodiment, the positioning image is processed by the binary digital image, so that the highlight target in the positioning image can be accurately identified, and the accuracy of the coordinate information is improved.
Further, the positioning method S202 provided in this embodiment further includes S2021 to S2022, which are described in detail as follows:
the selecting a highlight area with a pixel value larger than a preset pixel threshold value in the binarized image, and taking the highlight area as a target area of the highlight target comprises the following steps:
in S2021, if the highlight region has axial symmetry, identifying the highlight region as a target region of the highlight target;
in this embodiment, in order to avoid some temporarily appearing highlighted targets from interfering with the positioning method of this embodiment, a certain limitation needs to be performed on the highlighted targets. The temporary high-brightness object which may cause interference may be some electronic devices (especially mobile devices), and the high-brightness object in the positioning method of the embodiment should be some high-brightness objects which do not move in space, such as a lamp tube and a bulb. Therefore, in this embodiment, it is defined that the highlight region for the subsequent step has axial symmetry, that is, if the highlight region has axial symmetry, the highlight region is identified as the target region of the highlight target.
In S2022, if the highlight region does not have axial symmetry, the highlight region is identified as an undetermined region.
In this embodiment, similarly, if the highlight region does not have axial symmetry, it is considered that the highlight region may be a region where a highlight target that may cause interference temporarily appears, and therefore such highlight region is not used in the subsequent step, that is, if the highlight region does not have axial symmetry, the highlight region is identified as an undetermined region.
In this embodiment, the highlighted target is defined with axial symmetry, so as to avoid interference that some temporarily appearing highlighted targets may cause to subsequently determine the positioning information of the robot, and improve the accuracy of subsequently positioning the robot according to the highlighted area.
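The application does not prescribe how axial symmetry is tested. Purely for illustration, the sketch below compares a region mask with its mirror image about the vertical and horizontal axes of its bounding box; the overlap tolerance and the restriction to these two candidate axes are assumptions.

```python
import numpy as np

def is_axially_symmetric(region_mask, tolerance=0.9):
    """region_mask: boolean array cropped to the bounding box of one highlight region."""
    def overlap_ratio(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0
    mirrored_v = region_mask[:, ::-1]     # mirror about the vertical axis
    mirrored_h = region_mask[::-1, :]     # mirror about the horizontal axis
    return max(overlap_ratio(region_mask, mirrored_v),
               overlap_ratio(region_mask, mirrored_h)) >= tolerance
```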
Fig. 3 shows a flowchart of an implementation of the positioning method according to the third embodiment of the present application. Referring to fig. 3, with respect to the embodiment shown in fig. 1, before S103, the positioning method provided in this embodiment further includes S301 to S304, which are detailed as follows:
in this embodiment, the positioning image includes at least two highlighted objects.
Further, before determining the positioning information of the robot based on the plurality of pieces of coordinate information, the method further includes:
in S301, the positioning images are sorted based on the order of the acquisition time, and the first N positioning images are selected from all the positioning images as comparison images.
In this embodiment, N is a positive integer greater than 1. The positioning method is continuous, that is, positioning images are continuously acquired at preset time intervals and are therefore ordered by acquisition time as they are acquired. To enable the subsequent determination of the positioning information of the robot, a reference is required, namely reference images together with the reference positioning information of the robot at the moment each reference image was acquired. The first N positioning images are selected so that this reference is established as early as possible.
In S302, contrast positioning information corresponding to the robot at the acquisition time corresponding to each contrast image is acquired, and a positioning coordinate system is established based on first contrast positioning information corresponding to a first contrast image.
In this embodiment, the first contrast image is the contrast image with the earliest acquisition time among the contrast images.
In a possible implementation manner, the contrast positioning information corresponding to the robot at the acquisition time of each contrast image may be obtained as follows: an action path of the robot is preset, and the contrast images are acquired while the robot completes the action path, so that the contrast positioning information corresponding to each contrast image can be determined. Whether the robot moves according to the action path can be monitored through components such as an odometer, a gyroscope, a laser radar, a distance sensor, an optical sensor, or a vision sensor.
In a possible implementation manner, the establishing a positioning coordinate system based on the first contrast positioning information corresponding to the first contrast image may specifically be: taking the first contrast positioning information corresponding to the first contrast image as the center of the positioning coordinate system, where the positioning coordinate system includes the spatial horizontal coordinate and the attitude rotation angle of the robot, that is, the spatial horizontal coordinate where the robot is located at the time of acquiring the first contrast image is (0,0), and the attitude rotation angle of the robot is 0, that is, the positioning information of the robot is (0,0, 0) at this time.
It should be understood that the spatial coordinates in the subsequent steps are referenced to the positioning coordinate system.
In S303, the contrast coordinates of the highlight object in each of the contrast images are determined.
In this embodiment, the above determining the contrast coordinates of the highlight object in each contrast image may specifically refer to the above description of S102, and is not repeated herein.
In S304, based on the contrast positioning information and the contrast coordinates, spatial position information of the highlight target is calculated.
In this embodiment, the positioning image includes a first highlight object and a second highlight object. The calculating the spatial position information of the highlight target in the positioning coordinate system based on the comparison positioning information and the comparison coordinates may specifically be: taking a first contrast image and a second contrast image as an example, determining a first external reference matrix based on the first contrast positioning information corresponding to the first contrast image, and determining a second external reference matrix based on the second contrast positioning information corresponding to the second contrast image; acquiring internal parameters of the acquisition equipment of the comparison image, and generating an internal parameter matrix; and determining the coordinates of the first highlighted target and the second highlighted target in the first contrast image and the second contrast image respectively, and constructing a contrast space equation to solve the space three-dimensional coordinates of the first highlighted target and the second highlighted target. The comparison space equation is specifically as follows:
a1 = K * M1 * A;  a2 = K * M2 * A
b1 = K * M1 * B;  b2 = K * M2 * B
wherein a1 is the contrast coordinate of the first highlight target in the first contrast image; b1 is the contrast coordinate of the second highlight target in the first contrast image; a2 is the contrast coordinate of the first highlight target in the second contrast image; b2 is the contrast coordinate of the second highlight target in the second contrast image; K is the internal reference matrix; M1 is the first external parameter matrix; M2 is the second external parameter matrix; A is the spatial position information, namely the spatial three-dimensional coordinates, of the first highlight target; and B is the spatial position information of the second highlight target.
It is to be understood that the horizontal coordinate in the spatial three-dimensional coordinate system is based on the above-mentioned positioning coordinate system, and the height in the spatial three-dimensional coordinate system is also based on the unit coordinate of the positioning coordinate system.
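The contrast space equation above is a standard two-view triangulation problem. As a sketch only, it can be solved per highlight target as follows; the use of OpenCV's triangulation routine is an implementation assumption, while K, M1, M2, a1, and a2 follow the notation of the equation.

```python
import cv2
import numpy as np

def triangulate_highlight(K, M1, M2, a1, a2):
    """K: 3x3 internal reference matrix; M1, M2: 3x4 external parameter matrices of the
    first and second contrast images; a1, a2: contrast coordinates of one highlight target."""
    P1 = K @ M1                                        # projection matrix of the first view
    P2 = K @ M2                                        # projection matrix of the second view
    pts1 = np.asarray(a1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(a2, dtype=np.float64).reshape(2, 1)
    A_h = cv2.triangulatePoints(P1, P2, pts1, pts2)    # homogeneous 4x1 solution
    return (A_h[:3] / A_h[3]).ravel()                  # spatial three-dimensional coordinates
```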
In this embodiment, spatial position information of at least two highlighted targets is determined for subsequent determination of positioning information of the robot.
Fig. 4 shows a flowchart of an implementation of the positioning method according to the fourth embodiment of the present application. Referring to fig. 4, with respect to the embodiment described in fig. 3, the positioning method S103 provided in this embodiment includes S1031 to S1032, which are detailed as follows:
further, the determining the positioning information of the robot based on the plurality of coordinate information comprises:
in S1031, a target spatial equation is constructed based on the coordinate information and the spatial position information.
In this embodiment, the target space equation is as follows:
at = K * Mt * A
bt = K * Mt * B
wherein at is the coordinate information of the first highlight target in the positioning image; bt is the target coordinate of the second highlight target in the positioning image; K is the internal parameter matrix constructed based on preset internal parameters; Mt is the external parameter matrix of the robot at the acquisition time of the positioning image; A is the spatial position information, namely the spatial coordinates, of the first highlight target in the positioning coordinate system; and B is the spatial position information of the second highlight target in the positioning coordinate system.
In S1032, the objective space equation is solved to obtain the external reference matrix, and the positioning information of the robot in the positioning coordinate system is determined based on the external reference matrix.
In this embodiment, the external reference matrix represents the spatial horizontal coordinate and the attitude rotation angle (based on the positioning coordinate system).
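For illustration only, the sketch below recovers the external parameter matrix with a generic perspective-n-point solver, which requires at least four targets with known spatial position information; the two-target case of this embodiment additionally exploits the planar-motion constraint (only the horizontal position and one rotation angle are unknown), which this generic sketch does not encode. The yaw extraction assumes the camera axes are aligned with the robot's heading.

```python
import cv2
import numpy as np

def solve_external_matrix(K, object_points, image_points):
    """object_points: (N, 3) spatial positions of the highlight targets, N >= 4;
    image_points: (N, 2) coordinate information in the current positioning image."""
    obj = np.asarray(object_points, dtype=np.float64)
    img = np.asarray(image_points, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    if not ok:
        raise RuntimeError("target space equation could not be solved")
    R, _ = cv2.Rodrigues(rvec)                       # rotation part of Mt
    Mt = np.hstack([R, tvec.reshape(3, 1)])          # external parameter matrix [R | t]
    centre = -R.T @ tvec.ravel()                     # camera centre in the positioning frame
    yaw = np.arctan2(R.T[1, 0], R.T[0, 0])           # attitude rotation angle (convention-dependent)
    return Mt, (centre[0], centre[1], yaw)
```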
In this embodiment, the positioning information of the robot can be determined according to the mathematical relationship by at least two highlighted targets.
Fig. 5 shows a flowchart of an implementation of the positioning method provided in the fifth embodiment of the present application. Referring to fig. 5, with respect to the embodiment shown in fig. 1, the positioning method S103 provided in this embodiment includes S501 to S503, which are detailed as follows:
in this embodiment, the positioning image only includes one highlighted target, and the positioning information includes a spatial horizontal coordinate and an attitude rotation angle.
Further, the determining the positioning information of the robot based on the plurality of coordinate information comprises:
in S501, the attitude rotation angle is determined based on the assist sensor.
In this embodiment, the auxiliary sensor may be a gyroscope to record the rotation of the robot, that is, to determine a rotation variation value between an initial time when the robot acquires the positioning image and a time when the robot determines the positioning information, that is, the attitude rotation angle.
In S502, spatial position information of the highlight target is determined.
In this embodiment, the above-mentioned determining the spatial position information of the highlight object may refer to the above-mentioned related description of S304, and is not described herein again. It should be noted that in this embodiment, only the spatial position of one highlight object needs to be determined.
In S503, the spatial horizontal coordinates of the robot are calculated based on the coordinate information and the spatial position information of the highlight target.
In this embodiment, since the attitude rotation angle does not need to be calculated, the spatial horizontal coordinate can be calculated by constructing only the equation of the spatial horizontal coordinate. For specific calculation, reference may be made to the description of S1031, which is not repeated herein, and it should be noted that, in this embodiment, only one highlight target is provided, and the external parameters of this embodiment do not include the attitude rotation angle, so that one unknown parameter is reduced, and only one equation is needed to solve the spatial horizontal coordinate of the robot, instead of an equation set formed by two equations.
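As a purely illustrative sketch of the single-target case, the projection equation becomes linear in the unknown translation once the rotation is known from the gyroscope and the camera mounting; a third linear constraint is obtained here by assuming a fixed, known camera height above the floor. Neither that assumption nor this particular derivation is spelled out in the application.

```python
import numpy as np

def solve_horizontal_position(K, R, A, a_t, camera_height):
    """K: 3x3 internal parameter matrix; R: known 3x3 rotation (world to camera);
    A: (3,) spatial position information of the single highlight target;
    a_t: (u, v) its coordinate information in the positioning image;
    camera_height: fixed camera height in positioning-coordinate units."""
    u, v = a_t
    p = K @ R @ np.asarray(A, dtype=float)        # known part K*R*A of the projection
    K_inv = np.linalg.inv(K)
    # Unknown q = K @ t, with t the translation part of the external parameters.
    # The projection a_t = K*(R*A + t) yields two linear equations in q; the fixed
    # camera height (-R.T @ t)[2] = camera_height yields the third.
    rows = np.vstack([
        [1.0, 0.0, -u],
        [0.0, 1.0, -v],
        -(R.T @ K_inv)[2],
    ])
    rhs = np.array([u * p[2] - p[0], v * p[2] - p[1], camera_height])
    q = np.linalg.solve(rows, rhs)
    t = K_inv @ q
    centre = -R.T @ t                             # camera centre in the positioning frame
    return centre[0], centre[1]                   # spatial horizontal coordinates
```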
In the embodiment, the attitude rotation angle of the robot is determined according to the auxiliary sensor so as to reduce the calculation amount in the subsequent determination of the positioning information of the robot, and the positioning method capable of determining the positioning information of the robot by only one highlight target is provided.
Fig. 6 shows a flowchart of an implementation of a positioning method according to a sixth embodiment of the present application. Referring to fig. 6, in comparison with the embodiment shown in fig. 1, the positioning method provided in this embodiment further includes S601 to S604, which are detailed as follows:
further, the positioning method further includes:
in S601, any two of the positioning images that are continuously acquired are selected and identified as a first test image and a second test image.
In this embodiment, selecting any two continuously acquired positioning images and identifying them as the first test image and the second test image may specifically be: sorting the positioning images by acquisition time and selecting any two adjacent positioning images as the first test image and the second test image. It should be understood that the positioning method is continuous, that is, the positioning images are continuously acquired at preset time intervals and are therefore ordered by acquisition time as they are acquired.
In S602, a first test image coordinate in the first test image and a second test image coordinate in the second test image of the highlighted target are determined.
In this embodiment, the above determining the first test image coordinate of the highlight object in the first test image and the second test image coordinate in the second test image may specifically refer to the above description of S102, and details are not repeated here.
In S603, the first test image coordinates and the second test image coordinates of each highlight target are compared to obtain a test difference value.
In this embodiment, the test difference value is used to indicate the position change of the highlight object in the first test image and the second test image, so as to further determine whether the position of the highlight object in the space has changed.
In a possible implementation manner, the comparing the first test image coordinate and the second test image coordinate of each highlight target to obtain a test difference value may specifically be: and calculating the difference value of the first test image coordinate and the second test image coordinate in each dimension, and identifying the sum of all the difference values as the test difference value.
In S604, if the test difference value is greater than a preset difference threshold, an error report or a prompt message is generated, and the determination of the positioning information of the robot based on the coordinate information is stopped or the robot is switched to another positioning mode.
In this embodiment, the difference threshold may be determined according to the preset time interval. If the test difference value is greater than the difference threshold, it indicates that the highlight target has changed position in space (or has disappeared, for example because a light was turned off). In that case positioning should not continue: error reporting or reminder information is generated to inform the user, the ongoing step of determining the positioning information of the robot based on the coordinate information is stopped, and the robot restarts the positioning method provided by this embodiment. In particular, if the highlight target can no longer be identified in the acquired positioning images, the robot cannot continue to execute the positioning method provided by this embodiment and needs to be switched to another positioning mode.
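A minimal sketch of the test difference check described above is given below; the use of absolute per-dimension differences is an assumption, since the text only speaks of summing difference values, and the matching of targets between the two test images is assumed to be given.

```python
import numpy as np

def test_difference(first_coords, second_coords):
    """first_coords, second_coords: (N, 2) matched coordinates of the highlight targets
    in the first and second test images."""
    first = np.asarray(first_coords, dtype=float)
    second = np.asarray(second_coords, dtype=float)
    # Sum of the coordinate differences over every target and every dimension.
    return float(np.abs(first - second).sum())

def targets_moved(first_coords, second_coords, difference_threshold):
    """True when the highlight targets appear to have moved (or vanished) in space,
    in which case error or reminder information should be generated."""
    return test_difference(first_coords, second_coords) > difference_threshold
```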
In this embodiment, the positioning method is continuous, that is, the positioning images are acquired at preset time intervals, and in the process of continuously acquiring the positioning images, the highlight targets can be tracked in real time by performing the test on the first test image and the second test image, so that different highlight targets can be distinguished in each acquired positioning image.
Fig. 7 shows a schematic structural diagram of a positioning device provided in an embodiment of the present application, corresponding to the method described in the above embodiment, and only shows a part related to the embodiment of the present application for convenience of description.
Referring to fig. 7, the positioning apparatus includes: the positioning image acquisition module is used for acquiring a plurality of positioning images containing highlight targets; the highlight target identification module is used for identifying the corresponding coordinate information of the highlight target in each positioning image; and the positioning information determining module is used for determining the positioning information of the robot based on the coordinate information.
Optionally, the highlighted target identification module includes: the image processing module is used for carrying out binarization processing on the positioning image to obtain a binarization image; a target area determining module, configured to select a highlight area in the binarized image, where a pixel value of the highlight area is greater than a preset pixel threshold, and use the highlight area as a target area of the highlight target; and the coordinate information determining module is used for identifying the coordinates of the central point of the target area in the positioning image as the coordinate information.
Optionally, the target area determining module is further configured to, if the highlight area has axial symmetry, identify the highlight area as a target area of the highlight target; if the highlight area does not have axial symmetry, identifying the highlight area as an undetermined area.
Optionally, the positioning image includes at least two highlighted targets; the positioning device further comprises: the comparison image determining module is used for sequencing the positioning images based on the sequence of the acquisition time and selecting the first N positioning images from all the positioning images as comparison images; N is a positive integer greater than 1; the positioning coordinate system establishing module is used for acquiring contrast positioning information corresponding to the robot at the acquisition time corresponding to each contrast image and establishing a positioning coordinate system based on first contrast positioning information corresponding to a first contrast image; the first contrast image is the contrast image with the earliest acquisition time among the contrast images; the contrast coordinate determination module is used for determining contrast coordinates of the highlight target in each contrast image; and the spatial position information calculation module is used for calculating the spatial position information of the highlight target in the positioning coordinate system based on the comparison positioning information and the comparison coordinates.
Optionally, the positioning information determining module includes: a target space equation constructing module, configured to construct a target space equation based on the coordinate information and the spatial position information, where the target space equation is as follows: at = K * Mt * A; bt = K * Mt * B; wherein at is the coordinate information of the first highlight target in the positioning image; bt is the target coordinate of the second highlight target in the positioning image; K is the internal parameter matrix constructed based on preset internal parameters; Mt is the external parameter matrix of the robot at the acquisition time of the positioning image; A is the spatial position information, namely the spatial coordinates, of the first highlight target in the positioning coordinate system; and B is the spatial position information of the second highlight target in the positioning coordinate system; and an external parameter matrix solving module, configured to solve the target space equation to obtain the external parameter matrix and determine the positioning information of the robot in the positioning coordinate system based on the external parameter matrix.
Optionally, the positioning image only includes one highlighted target, and the positioning information includes a spatial horizontal coordinate and an attitude rotation angle; the positioning information determination module further comprises: an attitude rotation angle determination module for determining the attitude rotation angle based on an auxiliary sensor; the spatial position information determining module is used for determining the spatial position information of the highlight target; and the space horizontal coordinate calculation module is used for calculating the space horizontal coordinate of the robot based on the coordinate information and the space position information of the highlight target.
Optionally, the positioning device further includes: the test image selecting module is used for selecting any two continuously acquired positioning images and identifying the positioning images into a first test image and a second test image;
a test image coordinate determination module for determining first test image coordinates of the highlight target in the first test image and second test image coordinates in the second test image;
the test difference value comparison module is used for comparing the first test image coordinate and the second test image coordinate of each highlight target to obtain a test difference value; and if the test difference value is larger than a preset difference threshold value, generating error reporting information and stopping determining the positioning information of the robot based on the coordinate information.
It should be noted that, for the information interaction, the execution process, and other contents between the above-mentioned apparatuses, the specific functions and the technical effects of the embodiments of the method of the present application are based on the same concept, and specific reference may be made to the section of the embodiments of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 8 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: at least one processor 80 (only one processor is shown in fig. 8), a memory 81, and a computer program 82 stored in the memory 81 and executable on the at least one processor 80, the processor 80 implementing the steps in any of the various method embodiments described above when executing the computer program 82.
The terminal device 8 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices, and establishes a communication connection with the robot so that the robot can implement the positioning method provided in this embodiment; or may be the robotic device itself. The terminal device may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8, and does not constitute a limitation of the terminal device 8, and may include more or less components than those shown, or combine some components, or different components, such as an input-output device, a network access device, and the like.
The Processor 80 may be a Central Processing Unit (CPU), and the Processor 80 may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may in some embodiments be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. In other embodiments, the memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 81 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot positioning method based on a highlight target is applied to a robot, and is characterized by comprising the following steps:
acquiring a plurality of positioning images containing highlight targets;
identifying corresponding coordinate information of the highlight target in each positioning image respectively;
determining positioning information of the robot based on a plurality of the coordinate information.
2. The method according to claim 1, wherein the identifying the corresponding coordinate information of the highlight object in each of the positioning images comprises:
carrying out binarization processing on the positioning image to obtain a binarized image;
selecting a highlight area with a pixel value larger than a preset pixel threshold value in the binarized image, and taking the highlight area as a target area of the highlight target;
and identifying the coordinates of the preset points of the target area in the positioning image as the coordinate information.
3. The method as claimed in claim 2, wherein said selecting a highlight region in the binarized image whose pixel value is greater than a preset pixel threshold value, and using the highlight region as a target region of the highlight target comprises:
if the highlight area has axial symmetry, identifying the highlight area as a target area of the highlight target;
if the highlight area does not have axial symmetry, identifying the highlight area as an undetermined area.
4. The localization method according to claim 1, wherein at least two highlighted objects are contained in the localization image; before determining the positioning information of the robot based on the plurality of pieces of coordinate information, the method further includes:
sequencing the positioning images based on the sequence of the acquisition time, and selecting the first N positioning images from all the positioning images as comparison images; n is a positive integer greater than 1;
acquiring contrast positioning information corresponding to the robot at the acquisition time corresponding to each contrast image, and establishing a positioning coordinate system based on first contrast positioning information corresponding to a first contrast image; the first contrast image is the contrast image with the earliest acquisition time among the contrast images;
determining contrast coordinates of the highlighted target in each contrast image;
and calculating the spatial position information of the highlight target based on the contrast positioning information and the contrast coordinates.
5. The positioning method according to claim 4, wherein said determining the positioning information of the robot based on a plurality of the coordinate information comprises:
constructing a target space equation based on the coordinate information and the spatial position information, wherein the target space equation is as follows:
at = K * Mt * A;
bt = K * Mt * B;
wherein at is the coordinate information of the first highlight target in the positioning image; bt is the target coordinate of the second highlight target in the positioning image; K is the internal parameter matrix constructed based on preset internal parameters; Mt is the external parameter matrix of the robot at the acquisition time of the positioning image; A is the spatial position information, namely the spatial coordinates, of the first highlight target in the positioning coordinate system; and B is the spatial position information of the second highlight target in the positioning coordinate system;
and solving the target space equation to obtain the external parameter matrix, and determining the positioning information of the robot in the positioning coordinate system based on the external parameter matrix.
6. The positioning method according to claim 1, wherein the positioning image contains only one highlighted target, and the positioning information includes spatial horizontal coordinates and attitude rotation angles; the determining positioning information of the robot based on a plurality of the coordinate information includes:
determining the attitude rotation angle based on an auxiliary sensor;
determining spatial location information of the highlight object;
and calculating the space horizontal coordinate of the robot based on the coordinate information and the space position information of the highlight target.
7. The positioning method according to any one of claims 1 to 6, further comprising:
selecting any two continuously acquired positioning images, and identifying the positioning images into a first test image and a second test image;
determining first test image coordinates of the highlighted target in the first test image and second test image coordinates in the second test image;
comparing the first test image coordinate and the second test image coordinate of each highlight target to obtain a test difference value;
and if the test difference value is larger than a preset difference threshold value, generating error reporting or reminding information, and stopping determining the positioning information of the robot based on the coordinate information or switching to other positioning modes.
8. A robot positioning device based on a highlight target, comprising:
the positioning image acquisition module is used for acquiring a plurality of positioning images containing highlight targets;
the highlight target identification module is used for identifying the corresponding coordinate information of the highlight target in each positioning image;
and the positioning information determining module is used for determining the positioning information of the robot based on the coordinate information.
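Purely as an illustration of the module decomposition in claim 8, the device could be composed as below; the class and method names are assumptions for the sketch, not the claimed apparatus.

```python
class HighlightTargetPositioningDevice:
    """Illustrative composition of the three modules recited in claim 8."""

    def __init__(self, image_acquirer, target_identifier, pose_estimator):
        self.image_acquirer = image_acquirer        # positioning image acquisition module
        self.target_identifier = target_identifier  # highlight target identification module
        self.pose_estimator = pose_estimator        # positioning information determination module

    def locate(self):
        images = self.image_acquirer.acquire()                         # plural positioning images
        coords = [self.target_identifier.identify(img) for img in images]
        return self.pose_estimator.determine(coords)                   # robot positioning information
```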
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202011383266.0A 2020-12-01 2020-12-01 Robot positioning method and device based on highlight target Active CN112614181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011383266.0A CN112614181B (en) 2020-12-01 2020-12-01 Robot positioning method and device based on highlight target

Publications (2)

Publication Number Publication Date
CN112614181A true CN112614181A (en) 2021-04-06
CN112614181B CN112614181B (en) 2024-03-22

Family

ID=75228364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011383266.0A Active CN112614181B (en) 2020-12-01 2020-12-01 Robot positioning method and device based on highlight target

Country Status (1)

Country Link
CN (1) CN112614181B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001269881A (en) * 2000-03-24 2001-10-02 Hitachi Zosen Corp Moving path generating method and device for work robot
WO2017042971A1 (en) * 2015-09-11 2017-03-16 株式会社安川電機 Processing system and robot control method
US20170129101A1 (en) * 2015-11-06 2017-05-11 Canon Kabushiki Kaisha Robot control apparatus and robot control method
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
CN107065871A (en) * 2017-04-07 2017-08-18 东北农业大学 A self-propelled dining cart recognition and positioning system and method based on machine vision
CN107622502A (en) * 2017-07-28 2018-01-23 南京航空航天大学 Path extraction and recognition method for a robot vision guidance system under complex illumination conditions
CN109993790A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Marker, the forming method of marker, localization method and device
CN108981672A (en) * 2018-07-19 2018-12-11 华南师范大学 Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN208848089U (en) * 2018-09-28 2019-05-10 深圳乐动机器人有限公司 Sweeping robot
CN110120074A (en) * 2019-05-10 2019-08-13 清研同创机器人(天津)有限公司 Cable localization method for a hot-line working robot in a complex environment
CN112686951A (en) * 2020-12-07 2021-04-20 深圳乐动机器人有限公司 Method, device, terminal and storage medium for determining robot position

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANG LI et al.: "Review of vision-based Simultaneous Localization and Mapping", 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 6 June 2019 (2019-06-06), pages 117 - 123 *
RYOTA YOSHIMURA et al.: "Highlighted Map for Mobile Robot Localization and Its Generation Based on Reinforcement Learning", IEEE Access, vol. 8, 3 November 2020 (2020-11-03), pages 201527 - 201544, XP011820662, DOI: 10.1109/ACCESS.2020.3035725 *
徐丽娟: "Research on graph-structure-based visual scene representation and its applications", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 08, 15 August 2020 (2020-08-15), pages 138 - 20 *

Also Published As

Publication number Publication date
CN112614181B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN111079619B (en) Method and apparatus for detecting target object in image
US20200082561A1 (en) Mapping objects detected in images to geographic positions
US20180174038A1 (en) Simultaneous localization and mapping with reinforcement learning
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
CN109806585B (en) Game display control method, device, equipment and storage medium
WO2018022393A1 (en) Model-based classification of ambiguous depth image data
CN110942474B (en) Robot target tracking method, device and storage medium
Mojtahedzadeh Robot obstacle avoidance using the Kinect
US10902610B2 (en) Moving object controller, landmark, and moving object control method
CN112668428A (en) Vehicle lane change detection method, roadside device, cloud control platform and program product
CN113907663A (en) Obstacle map construction method, cleaning robot and storage medium
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN111709988A (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
JP7351892B2 (en) Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
KR20220058846A (en) Robot positioning method and apparatus, apparatus, storage medium
CN103903253A (en) Mobile terminal positioning method and system
CN111157012B (en) Robot navigation method and device, readable storage medium and robot
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
Kumar Rath et al. Real‐time moving object detection and removal from 3D pointcloud data for humanoid navigation in dense GPS‐denied environments
Kawanishi et al. Parallel line-based structure from motion by using omnidirectional camera in textureless scene
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)

Applicant after: Shenzhen Ledong robot Co.,Ltd.

Address before: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)

Applicant before: SHENZHEN LD ROBOT Co.,Ltd.

GR01 Patent grant