CN112710308A - Positioning method, device and system of robot - Google Patents

Positioning method, device and system of robot

Info

Publication number
CN112710308A
Authority
CN
China
Prior art keywords
robot
steering
parameter
positioning
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911025754.1A
Other languages
Chinese (zh)
Inventor
张明明
左星星
陈一鸣
李名杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201911025754.1A priority Critical patent/CN112710308A/en
Publication of CN112710308A publication Critical patent/CN112710308A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Abstract

The application discloses a positioning method, device and system for a robot. The method comprises: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual reprojection error, the visual reprojection error being determined according to image information acquired by the robot; obtaining steering parameters of the robot through the constraint function while the robot is steering; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used to express the relation between the positioning parameter and the steering parameter. The method and device solve the technical problem in the prior art that positioning is inaccurate when the robot turns.

Description

Positioning method, device and system of robot
Technical Field
The application relates to the field of robot control, in particular to a positioning method, device and system of a robot.
Background
A skid-steer robot is a robot that steers by changing the speed of its left and right wheels or tracks. Because it needs no dedicated steering mechanism, it has a simple structure and moves flexibly, and is therefore widely used in outdoor work and scientific exploration.
In the prior art, a skid-steer robot is generally positioned using data acquired by the Global Positioning System (GPS). However, in areas without GPS data (for example, indoors) or areas with weak GPS signals (for example, outdoor areas shielded by trees or buildings), GPS data cannot be acquired or can only be partially acquired, so the skid-steer robot cannot be accurately positioned.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the application provides a positioning method, a positioning device and a positioning system of a robot, and aims to at least solve the technical problem of inaccurate positioning when the robot turns in the prior art.
According to an aspect of an embodiment of the present application, there is provided a positioning method of a robot, including: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
According to another aspect of the embodiments of the present application, there is also provided a positioning method of a robot, including: acquiring image information in the robot steering process, and determining a visual re-projection error according to the image information; determining a minimum objective function corresponding to the robot according to the vision reprojection error; estimating the steering parameters by solving a minimum objective function to obtain the steering parameters of the robot; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and positioning the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
According to another aspect of the embodiments of the present application, there is also provided a positioning apparatus for a robot, including: the acquisition module is used for acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; the processing module is used for obtaining the steering parameters of the robot through a constraint function in the steering process of the robot; the determining module is used for determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program, wherein when the program runs, an apparatus on which the storage medium is located is controlled to execute the above-mentioned positioning method for a robot.
According to another aspect of the embodiments of the present application, there is also provided a processor for executing a program, where the program executes the positioning method of the robot described above.
According to another aspect of the embodiments of the present application, there is also provided a positioning system of a robot, including: a processor; and a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
In the embodiments of the application, a visual positioning approach is adopted: first, a visual reprojection error is determined according to image information acquired by the robot, and a constraint function corresponding to the robot is determined from the visual reprojection error; then, during the robot's steering process, steering parameters of the robot are obtained through the constraint function; finally, positioning parameters are determined based on the steering parameters and a steering model obtained in advance, and positioning information of the robot is obtained according to the positioning parameters.
According to the method, the positioning information of the robot is obtained by processing the image information acquired by the robot, and the positioning information of the robot is accurately determined by combining the robot and the visual positioning in the process. In addition, because the visual positioning is not influenced by the strength of the GPS signal, the scheme provided by the application can realize accurate positioning of the robot in a scene with a weak GPS signal or without the GPS signal, and the application range of the robot is expanded.
Therefore, the purpose of positioning the robot is achieved by the scheme provided by the application, the technical effect of improving the positioning precision of the robot is achieved, and the technical problem that the positioning of the robot in the prior art is inaccurate when the robot turns is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a block diagram of an alternative computing device hardware architecture according to embodiments of the present application;
FIG. 2 is a flow chart of a method of positioning a robot according to an embodiment of the present application;
FIG. 3 is a schematic view of an alternative robot chassis according to embodiments of the present application;
FIG. 4 is a schematic diagram illustrating an alternative determination of visual reprojection errors according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an alternative determination of visual reprojection errors according to an embodiment of the present application;
FIG. 6 is a flow chart of a method of positioning a robot according to an embodiment of the present application;
FIG. 7 is a schematic view of a positioning device of a robot according to an embodiment of the present application;
FIG. 8 is a block diagram of an alternative computing device according to embodiments of the present application;
FIG. 9 is a block flow diagram of an alternative robot-based positioning method according to an embodiment of the present application;
FIG. 10 is a block flow diagram of an alternative robot-based positioning method according to an embodiment of the present application; and
fig. 11 is a schematic view of an alternative scenario of a robot-based positioning method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
There is also provided, in accordance with an embodiment of the present application, an embodiment of a method for positioning a robot. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one given here.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computing device, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computing device (or mobile device) for implementing the positioning method of a robot. As shown in fig. 1, computing device 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the computing device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, computing device 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computing device 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the positioning method of the robot in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the positioning method of the robot described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to computing device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by a communications provider of computing device 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device 10 (or mobile device).
It should be noted here that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a particular specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the above operating environment, the present application provides a positioning method of a robot as shown in fig. 2. Fig. 2 is a flowchart of a positioning method of a robot according to the first embodiment of the present application. As shown in fig. 2, the method includes:
step S202, a constraint function corresponding to the robot is obtained, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to the image information acquired by the robot.
In step S202, the robot may be, but is not limited to, a skid steer robot, wherein the chassis of the skid steer robot may be a track chassis or a wheel chassis, and optionally, in this application, the robot is a skid steer robot with a four-wheel chassis.
The constraint function is an objective function related to a decision variable, in the above embodiment of the present application, the decision variable is a steering parameter of the robot, and the constraint function is an objective function related to the steering parameter. And solving the constraint function to obtain the steering parameters of the robot. In an alternative embodiment, the constraint function is a minimum objective function, i.e. the steering parameter is the optimal solution when the constraint function takes its minimum value.
In an alternative embodiment, the robot has an image capturing device, which may be but is not limited to a camera, and further has a processor, which may acquire image information captured by the image capturing device, process the image information to obtain a visual re-projection error, and then determine a corresponding constraint function of the robot based on the visual re-projection error.
Optionally, fig. 11 shows a scene schematic diagram of a positioning method based on a robot, in fig. 11, the robot is only used for acquiring image information, after the robot acquires the image information, the image information may also be sent to a computing device, and the computing device processes the image information, so that a visual re-projection error may be obtained. In addition, the computing equipment can also acquire a detection value detected by the inertia measuring instrument and determine inertia measurement constraint according to the detection value; the computing equipment can obtain odometer data by reading the detection value of the odometer; the computing device obtains historical image information from the database and obtains prior information through the historical image information. And finally, the computing equipment can obtain a constraint function through the vision reprojection error, the inertia measurement constraint, the odometer data and the prior information.
As can be seen from fig. 11, the constraint function includes the visual reprojection error, the inertial measurement constraint, the odometry data, and the a priori information. In the method, the four parameters are mainly optimized to generate an estimation result of the chassis model of the robot, and then the positioning information of the robot is accurately determined according to the estimation result.
It should be noted that in computer vision, for example when calculating a planar homography matrix or a projection matrix, a cost function is constructed from the visual reprojection error and then minimized in order to optimize the homography matrix or the projection matrix. When the cost function is constructed from the visual reprojection error, not only the calculation error of the homography matrix but also the measurement error of the image points is taken into account, so using the visual reprojection error in computer vision can improve measurement accuracy.
And step S204, obtaining the steering parameters of the robot through a constraint function in the steering process of the robot.
Optionally, as shown in fig. 11, in the process of performing the steering operation on the robot, the computing device may obtain the steering parameter by solving the constraint function.
In step S204, the robot includes left tires and right tires; fig. 3 shows a schematic view of a robot chassis, in which the left side of the robot has two tires and the right side also has two tires. The steering parameters of the robot include: a first coordinate parameter of the instantaneous rotation center of the left tire during steering and a second coordinate parameter of the instantaneous rotation center of the right tire during steering; and a first scale parameter of the left tire, representing the coefficient superimposed on the left tire, and a second scale parameter of the right tire, representing the coefficient superimposed on the right tire.
It should be noted that the left tire and the right tire correspond to different instantaneous centers of rotation. As shown in fig. 3, the instantaneous center of rotation of the left tire during steering is ICR_l and that of the right tire is ICR_r. In addition, ICR_v is the instantaneous center of rotation of the robot as a whole when it turns; in fig. 3, ICR_v is the instantaneous rotation center of the whole robot when the robot turns to the left at angular velocity ω.

Optionally, the first coordinate parameter is the coordinate value of the instantaneous rotation center of the left tire during turning, that is, ICR_l can be represented by (X_l, Y_l), where X_l denotes the abscissa and Y_l the ordinate. The second coordinate parameter is the coordinate value of the instantaneous rotation center of the right tire during turning, that is, ICR_r can be represented by (X_r, Y_r), where X_r denotes the abscissa and Y_r the ordinate.

In the present application, because the X-axis coordinates of the instantaneous rotation centers of the left and right tires during steering are the same, both are represented by X_v; because the Y-axis coordinates of the instantaneous rotation centers of the left and right tires during steering are different, Y_l and Y_r are used for them. Thus, the steering parameter may be represented by

ξ = [X_v, Y_l, Y_r, α_l, α_r]^T

In the above formula, ξ is the steering parameter, and α_l and α_r respectively denote the first scale parameter of the left tire and the second scale parameter of the right tire. The values of α_l and α_r are related to the material and inflation state of the left and right wheels respectively, and their specific values are obtained by solving the constraint function. In addition, the first coordinate parameter of the instantaneous rotation center of the left tire during steering and the second coordinate parameter of the instantaneous rotation center of the right tire during steering are related to the type of ground contacted by the robot's tires (both left and right) and to the size of the robot; different ground types and robot sizes correspond to different coordinate parameters of the instantaneous rotation centers.
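For illustration only, the following is a minimal sketch of how the steering parameter ξ described above could be held in code; the field names (x_v, y_l, y_r, alpha_l, alpha_r) are assumptions chosen to mirror the notation in this description and do not appear in the application.

```python
from dataclasses import dataclass

@dataclass
class SteeringParams:
    """Steering parameter xi of a skid-steer robot (notation follows the text above)."""
    x_v: float      # common abscissa of the left/right instantaneous rotation centers
    y_l: float      # ordinate of the left tire's instantaneous rotation center
    y_r: float      # ordinate of the right tire's instantaneous rotation center
    alpha_l: float  # scale coefficient superimposed on the left tire
    alpha_r: float  # scale coefficient superimposed on the right tire

# Example: a nominal guess before optimization (values are illustrative only)
xi0 = SteeringParams(x_v=0.0, y_l=0.25, y_r=-0.25, alpha_l=1.0, alpha_r=1.0)
```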
And S206, determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
Optionally, as shown in fig. 11, after the steering parameter is obtained, the computing device determines the positioning parameter through a steering model representing a relationship between the positioning parameter and the steering parameter, and then outputs the positioning information of the robot to the display device according to the positioning parameter, where the display device may be a device in a management platform used by a robot manager, and the robot manager may determine whether the robot reaches the work site through the positioning information displayed by the display device, or the robot manager may send a control instruction to the robot according to the positioning information displayed by the display device. In addition, the management platform can also send a control instruction to the robot according to the positioning information so as to achieve the purpose of remotely controlling the robot.
In step S206, the positioning parameters include the angular velocity of the robot when turning and the linear velocity of the robot when turning. In fig. 3, ω is the angular velocity of the entire robot when it turns and v_o is the linear velocity of the entire robot when it turns; for convenience of calculation, v_o can be decomposed into its components along the X-axis and Y-axis directions, i.e., v_ox and v_oy. Alternatively, the positioning parameters may be represented in matrix form as

[v_ox, v_oy, ω]^T

In addition, in fig. 3, o_l and o_r denote the linear velocities of the left tire and the right tire respectively, where the length of the line segment corresponding to each arrow indicates the magnitude of the linear velocity; in fig. 3, the linear velocity of the left tire is smaller than that of the right tire.

Alternatively, the relation between the positioning parameters and the steering parameters may be represented as

[v_ox, v_oy, ω]^T = g(ξ, o_l, o_r)

wherein the function g(·) represents the steering model.
In addition, in step S206, the positioning information of the robot includes the position and the pose of the robot. The processor of the robot determines the position and the pose of the robot at the current moment according to the angular velocity of the robot when turning and the linear velocity of the robot when turning.
Optionally, the processor may obtain the position and the pose of the robot at the current moment through the transfer function G and the positioning parameters obtained by solving the steering model. The processor of the robot can estimate the position and the pose at the current moment from the position and the pose of the robot at the previous moment; that is, the position and the pose of the robot at the current moment satisfy a set of propagation formulas (given as formula images in the original), which for discrete estimation are transformed into their discrete counterparts. In these formulas, Δt is the time difference between the previous moment and the current moment. The corresponding transfer matrix and the transfer function of the covariance matrix are likewise given as formula images, where Q_d ∈ R^(9×9) is the covariance of the corresponding noise term and G is the transfer function.
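As a concrete illustration of the discrete estimation step described above, the sketch below propagates a planar position and heading from the positioning parameters (v_ox, v_oy, ω) over one time step Δt. It assumes a simple planar pose (x, y, yaw) and Euler integration; the application's actual transfer matrix and covariance propagation are given only as formula images, so this is not a reproduction of them.

```python
import numpy as np

def propagate_pose(x, y, yaw, v_ox, v_oy, omega, dt):
    """Dead-reckon the robot pose from time k-1 to k (planar, Euler integration).

    (v_ox, v_oy) is the body-frame linear velocity, omega the yaw rate, dt the
    time difference between the previous and the current moment.
    """
    # Rotate the body-frame velocity into the world frame, then integrate.
    c, s = np.cos(yaw), np.sin(yaw)
    x_new = x + dt * (c * v_ox - s * v_oy)
    y_new = y + dt * (s * v_ox + c * v_oy)
    yaw_new = yaw + dt * omega
    return x_new, y_new, yaw_new

# Example: one 20 ms step while turning left
print(propagate_pose(0.0, 0.0, 0.0, v_ox=0.5, v_oy=0.02, omega=0.3, dt=0.02))
```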
Based on the schemes defined in steps S202 to S206, it can be known that, in the embodiment of the present application, a visual positioning manner is adopted, a visual reprojection error is determined according to image information acquired by the robot, a constraint function corresponding to the robot is determined according to the visual reprojection error, then, in a robot steering process, a steering parameter of the robot is obtained through the constraint function, and finally, a positioning parameter is determined based on the steering parameter and a steering model acquired in advance, and positioning information of the robot is acquired according to the positioning parameter.
It is easy to notice that the positioning information of the robot is obtained by processing the image information collected by the robot, and the positioning information of the robot is accurately determined by combining the robot and the visual positioning in the process. In addition, because the visual positioning is not influenced by the strength of the GPS signal, the scheme provided by the application can realize accurate positioning of the robot in a scene with a weak GPS signal or without the GPS signal, and the application range of the robot is expanded.
Therefore, the purpose of positioning the robot is achieved by the scheme provided by the application, the technical effect of improving the positioning precision of the robot is achieved, and the technical problem that the positioning of the robot in the prior art is inaccurate when the robot turns is solved.
In an optional embodiment, the visual reprojection error is expressed in terms of the steering parameters. After obtaining the constraint function corresponding to the robot, during the robot's steering process the processor of the robot treats the constraint function as a minimum objective function and solves it to obtain the steering parameters; that is, the constraint function is minimized, and the parameter value at which the constraint function takes its minimum is the steering parameter.
Wherein the constraint function can be represented by the following formula:
C = C_proj + C_IMU + C_odom + C_prior

In the above formula, C is the constraint function, C_proj is the visual reprojection error, C_IMU is the inertial measurement constraint, C_odom is the odometry data, and C_prior is the prior information. As can be seen from the above equation, the constraint function includes, in addition to the visual reprojection error, the inertial measurement constraint, the odometry data, and the prior information.
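To make the structure of C = C_proj + C_IMU + C_odom + C_prior concrete, the sketch below stacks four residual blocks and minimizes their squared sum with SciPy. The residual functions themselves are simple placeholders, since the application does not spell out their exact forms; only the additive composition and the least-squares style solution are taken from the text.

```python
import numpy as np
from scipy.optimize import least_squares

def total_residual(xi, r_proj, r_imu, r_odom, r_prior):
    """Stack the four residual blocks; their squared norm plays the role of C."""
    return np.concatenate([r_proj(xi), r_imu(xi), r_odom(xi), r_prior(xi)])

# Placeholder residual terms (illustrative only; real ones come from the sensors).
r_proj  = lambda xi: 0.5 * (xi - np.array([0.0, 0.3, -0.3, 1.0, 1.0]))
r_imu   = lambda xi: 0.1 * np.array([xi[3] - 1.0, xi[4] - 1.0])
r_odom  = lambda xi: 0.1 * np.array([xi[1] - xi[2] - 0.5])
r_prior = lambda xi: 0.01 * (xi - np.array([0.0, 0.25, -0.25, 1.0, 1.0]))

xi0 = np.array([0.0, 0.25, -0.25, 1.0, 1.0])  # initial steering-parameter guess
sol = least_squares(total_residual, xi0, args=(r_proj, r_imu, r_odom, r_prior))
print("estimated steering parameters:", sol.x)
```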
Optionally, for the visual reprojection error, the robot first acquires image information of consecutive multiple frames collected by the robot, and then determines the visual reprojection error of the robot based on the image information of the consecutive multiple frames.
Specifically, after obtaining image information of continuous multiple frames, the robot extracts feature points in the image information, tracks the feature points in the image information of the continuous multiple frames to obtain track information corresponding to the feature points, and then determines position information of the feature points in the three-dimensional space according to the track information. When the image information acquired by the robot at present comprises the feature points, the feature points are projected onto a two-dimensional plane corresponding to the image information acquired by the robot at present according to the position information of the feature points in the three-dimensional space to obtain projection points, and finally, the vision re-projection error is determined according to the positions of the feature points and the positions of the projection points in the image information acquired by the robot at present.
In the above process, the feature point in the image information may be a corner point in the image, where the corner point may be an isolated point with the greatest or smallest intensity on some property, or an end point of a line segment, and the corner point may be a connection point of an object contour line in the image (for example, a corner of a house). In addition, the feature points in the image information may also be points in which colors are prominent in the image.
Optionally, the robot may track the feature points in each frame of the image by using the KLT (Kanade-Lucas-Tomasi) tracking algorithm, so as to obtain the trajectory information of the feature points, and then calculate the position information of the feature points in three-dimensional space by using a triangulation algorithm. The triangulation algorithm is a positioning algorithm; in this application, the positions of the feature points are determined from the trajectory information of the feature points by applying the geometry of triangles.
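The tracking-and-triangulation step described above can be sketched with OpenCV as follows: a pyramidal Lucas-Kanade tracker follows the corner points between two frames, and the matched observations are triangulated with known projection matrices. The projection matrices P1 and P2 here are placeholders that a real system would obtain from its calibration and pose estimates; this is an illustrative sketch, not the application's implementation.

```python
import cv2
import numpy as np

def track_and_triangulate(img_prev, img_curr, pts_prev, P1, P2):
    """Track corner points from img_prev to img_curr (KLT) and triangulate them.

    pts_prev: Nx1x2 float32 corner locations in the previous frame.
    P1, P2:   3x4 projection matrices of the two camera poses.
    Returns the Nx3 triangulated points and the tracking status mask.
    """
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr, pts_prev, None)
    ok = status.ravel() == 1
    p1 = pts_prev[ok].reshape(-1, 2).T            # 2xN observations in frame 1
    p2 = pts_curr[ok].reshape(-1, 2).T            # 2xN observations in frame 2
    pts4d = cv2.triangulatePoints(P1, P2, p1, p2)  # 4xN homogeneous coordinates
    pts3d = (pts4d[:3] / pts4d[3]).T               # Nx3 Euclidean coordinates
    return pts3d, ok
```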
Further, when the image acquisition device of the robot acquires the image again, and the acquired image includes the feature point, the robot projects the three-dimensional position of the feature point (i.e. the position information of the feature point in the three-dimensional space) onto the two-dimensional plane to obtain a projection point, and finally calculates the visual re-projection error according to the position of the projection point and the position of the feature point. For example, in fig. 4, the observations P1 and P2 are the projections of the same spatial point P, and the projection P2' of P has a certain distance e from the observation P2, which is the visual reprojection error.
In addition, as shown in fig. 5, x and x' are the projection points corresponding to the feature point in the two images, x̂ is an estimate of x, and x̂' is an estimate of x'. The visual reprojection error for x and x' satisfies the following equation:

ε = d(x, x̂)^2 + d(x', x̂')^2

In the above equation, ε is the visual reprojection error; x and x' satisfy x' = Hx, and x̂ and x̂' satisfy x̂' = Ĥx̂, where Ĥ is an estimate of H and H is a homography matrix.

From the above formula of the visual reprojection error, there is an error between the feature points and their estimated values in the image, so the coordinates of the estimated values need to be re-estimated such that the new estimates satisfy the homography; for example, in fig. 5, the distances d and d' together constitute the visual reprojection error, where x̂ (x topped with a hat) denotes the estimate of x.
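Tying the above together, the sketch below computes a per-point visual reprojection error in the style of fig. 4: a triangulated 3-D point is projected into the current image with an assumed pinhole intrinsic matrix K and camera pose (R, t), and the error is its pixel distance to the observed feature point. The specific intrinsics and pose values are illustrative assumptions, not values from the application.

```python
import numpy as np

def reprojection_error(point_3d, observed_px, K, R, t):
    """Pixel distance between the observed feature and the reprojected 3-D point."""
    p_cam = R @ point_3d + t              # world -> camera frame
    p_img = K @ p_cam                     # pinhole projection
    projected_px = p_img[:2] / p_img[2]   # normalize by depth
    return np.linalg.norm(projected_px - observed_px)

# Illustrative values only: a generic VGA camera with identity pose.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
e = reprojection_error(np.array([0.2, -0.1, 4.0]), np.array([345.0, 227.0]), K, R, t)
print("reprojection error (pixels):", e)
```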
Optionally, the constraint function further includes an inertial measurement constraint, which is independent of the steering parameters and is obtained from the detection values of an inertial measurement instrument. Specifically, the robot first acquires the detection values of the inertial measurement instrument provided on the robot, and then determines the inertial measurement constraint based on these detection values, wherein the detection values include the acceleration and the angular velocity of the robot.
It should be noted that the inertial measurement instrument may be an IMU (Inertial Measurement Unit), which contains a gyroscope and accelerometers: the gyroscope is a three-axis gyroscope used to detect the angular velocity of the robot, and the accelerometers measure along the three spatial directions (namely the x, y and z directions) to detect the acceleration of the robot in each of these directions.
In addition, after obtaining the detection value of the inertia detector, the processor of the robot may perform an integral calculation on the detection value, thereby obtaining the inertia measurement constraint.
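As an illustration of the integral calculation mentioned above, the sketch below integrates gyroscope and accelerometer samples between two moments to obtain rotation, velocity and position increments. It is a simplified Euler-integration version of what an IMU term typically constrains (gravity compensation and frame rotation are omitted), not the application's exact formulation.

```python
import numpy as np

def integrate_imu(gyro, accel, dt):
    """Integrate angular velocity (rad/s) and acceleration (m/s^2) samples, each spanning dt.

    gyro, accel: arrays of shape (N, 3). Returns the accumulated rotation vector
    (small-angle approximation), velocity change and position change in the
    starting body frame; gravity compensation is omitted for brevity.
    """
    d_theta = np.zeros(3)   # accumulated rotation
    d_vel = np.zeros(3)     # accumulated velocity change
    d_pos = np.zeros(3)     # accumulated position change
    for w, a in zip(gyro, accel):
        d_theta += w * dt
        d_pos += d_vel * dt + 0.5 * a * dt * dt
        d_vel += a * dt
    return d_theta, d_vel, d_pos

# Example: 100 samples at 200 Hz of a gentle left turn while accelerating forward
gyro = np.tile([0.0, 0.0, 0.3], (100, 1))
accel = np.tile([0.2, 0.0, 0.0], (100, 1))
print(integrate_imu(gyro, accel, dt=0.005))
```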
In an alternative embodiment, the constraint function further comprises odometer data, wherein the odometer data is obtained by reading the detection values of the odometer. Here, odometry is a method of estimating the change in an object's position over time using data obtained from motion sensors.
As can be seen from the above, the inertial measurement constraints and odometer data allow the position and angle of the robot to be observed based on different detection devices.
In an alternative embodiment, the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information. Optionally, the processor may obtain an acquisition duration of an image acquired by the image acquisition device, sort the images acquired by the image acquisition device according to the acquisition duration, delete a history image whose acquisition duration is greater than a preset duration to obtain a target history image, and finally calculate an edge distribution of the target history image, so as to obtain the prior information.
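The frame-management step described above (sort images by acquisition time and drop those older than a preset duration before computing the prior) can be sketched as follows; the frame record fields and the threshold value are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float   # acquisition time in seconds
    image_id: int

def select_target_history(frames: List[Frame], now: float, max_age: float) -> List[Frame]:
    """Sort frames by acquisition time and keep only those not older than max_age."""
    recent = [f for f in frames if now - f.timestamp <= max_age]
    return sorted(recent, key=lambda f: f.timestamp)

frames = [Frame(10.0, 1), Frame(14.5, 2), Frame(15.0, 3)]
print(select_target_history(frames, now=15.2, max_age=2.0))   # keeps frames 2 and 3
```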
Further, after determining the visual reprojection error, the inertial measurement constraint, the odometry data, and the prior information, a constraint function may be obtained. And then the processor obtains the steering parameters of the robot through a constraint function in the steering process of the robot, further determines positioning parameters based on the steering parameters and a steering model acquired in advance, and acquires the positioning information of the robot according to the positioning parameters. Wherein the processor of the robot needs to create a steering model before determining the positioning parameters.
Specifically, the processor may first acquire an intermediate variable, wherein the intermediate variable is determined by the difference in the vertical direction between the rotation center of the left tire and the rotation center of the right tire. The processor then acquires an intermediate matrix, wherein the intermediate matrix comprises a first matrix, a second matrix and a third matrix: the first matrix is composed of the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter; the second matrix is composed of the first scale parameter and the second scale parameter; and the third matrix is composed of the linear velocity of the left tire and the linear velocity of the right tire. Finally, the correspondence between a target matrix formed by the positioning parameters, on the one hand, and the intermediate variable and the intermediate matrix, on the other, is determined as the steering model.
Alternatively, the steering model g(·) can be expressed in terms of an intermediate matrix W; the explicit expressions for g(·), W and its constituent matrices are given as formula images in the original. In that formula, W is the intermediate matrix, built from the first matrix (containing the abscissa parameter of the first coordinate parameter and the vertical coordinate parameters of the first and second coordinate parameters), the second matrix (containing the first scale parameter and the second scale parameter), and the third matrix (containing the linear velocities of the left and right tires).

In addition, since the abscissa parameter of the first coordinate parameter is the same as the abscissa parameter of the second coordinate parameter, a single symbol is used for it in the formula; Y_l is the vertical coordinate parameter of the first coordinate parameter, and Y_r is the vertical coordinate parameter of the second coordinate parameter. Further, α_l is the first scale parameter, α_r is the second scale parameter, o_l is the linear velocity of the left tire, and o_r is the linear velocity of the right tire.

In the above formula, ΔY is the intermediate variable, which satisfies ΔY = Y_l − Y_r.
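For intuition, the following is a sketch of one common ICR-based skid-steer kinematic model that maps the steering parameters and the left/right tread speeds to (v_ox, v_oy, ω). It is consistent with the quantities named above (X_v, Y_l, Y_r, α_l, α_r, o_l, o_r and ΔY = Y_l − Y_r), but it is an assumed formulation and not necessarily the exact matrix W used in the application.

```python
import numpy as np

def steering_model(x_v, y_l, y_r, alpha_l, alpha_r, o_l, o_r):
    """One common ICR-based skid-steer model: tread speeds -> body twist.

    x_v:          shared abscissa of the left/right instantaneous rotation centers
    y_l, y_r:     ordinates of the left/right instantaneous rotation centers
    alpha_l/r:    scale coefficients superimposed on the left/right tread speeds
    o_l, o_r:     measured linear velocities of the left/right tires
    Returns (v_ox, v_oy, omega).
    """
    v_l, v_r = alpha_l * o_l, alpha_r * o_r
    d_y = y_l - y_r                        # intermediate variable, ΔY = Y_l - Y_r
    v_ox = (v_r * y_l - v_l * y_r) / d_y
    v_oy = x_v * (v_l - v_r) / d_y
    omega = (v_r - v_l) / d_y
    return v_ox, v_oy, omega

# Sanity check: symmetric ICRs and equal tread speeds give straight-line motion.
print(steering_model(0.0, 0.3, -0.3, 1.0, 1.0, o_l=0.5, o_r=0.5))  # -> (0.5, 0.0, 0.0)
```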
It should be noted that after the steering model is obtained, the processor may determine the positioning parameters through the steering model, and then obtain the positioning information of the robot according to the positioning parameters.
In addition, on the basis of the positioning method for the robot provided by the application, the positioning information of the robot can be determined by combining with the GPS data of the robot acquired by the GPS, so that the positioning accuracy of the robot is improved.
In an alternative embodiment, fig. 9 shows a flow chart of a positioning method based on a robot, and as can be seen from fig. 9, after an image acquisition device in the robot acquires image information, the image information is sent to a computing device, wherein the computing device may be a processor in the robot. And processing the image information of continuous multiple frames by the computing equipment to obtain a constraint function, wherein the constraint function at least comprises a visual reprojection error, an inertial measurement constraint, mileage calculation data and prior information. In the process of steering operation of the robot, the computing equipment can obtain steering parameters by solving the constraint function. And then, the computing equipment determines the positioning parameters through a steering model representing the relation between the positioning parameters and the steering parameters, and finally, the positioning information of the robot is output according to the positioning parameters.
In another alternative embodiment, fig. 10 shows a flowchart of a positioning method based on a robot, and in fig. 10, the robot is applied in a scene of outdoor environment work, wherein a processor in the robot can process image information collected by the robot, so as to obtain positioning information. As shown in fig. 10, when the robot performs outdoor operation, the image acquisition device of the robot may acquire image information of a continuous multi-frame operation environment, and send the acquired image information to the processor of the robot, and the processor processes the image information of the continuous multi-frame operation environment to obtain a constraint function, where the constraint function at least includes a visual reprojection error, an inertial measurement constraint, mileage calculation data, and prior information. In the process of steering operation of the robot, the processor can obtain steering parameters by solving the constraint function, then determine positioning parameters by a steering model representing the relation between the positioning parameters and the steering parameters, and finally output the positioning information of the robot according to the positioning parameters. After obtaining the positioning information, the robot sends the positioning information to a computing device, where the computing device may be a robot management platform. The computing device may analyze the positioning information of the robot, determine whether the robot has arrived at the job site, whether the robot is beginning to work, and the like. For example, if the computing device determines that the robot does not reach the work site based on the positioning information transmitted by the robot, the computing device determines the moving direction of the robot based on the positioning information of the robot and the position information of the work site, generates a control command for the moving direction, and transmits the control command to the robot so that the robot moves in the moving direction. For another example, after determining that the robot has reached the work site, the computing device controls the robot to perform the work, e.g., in a scenario where a soil specimen is collected, the computing device sends a control instruction to start the work to the robot, and after receiving the control instruction, a part (e.g., a manipulator) of the robot that collects the soil specimen starts performing the work.
According to the scheme, the chassis model of the robot is fused with the visual positioning and the IMU observation, and the chassis model of the robot can be estimated on line in real time to adapt to different terrain conditions. In addition, the scheme provided by the application is integrated with vision rather than GPS, so that the problem that GPS data cannot be acquired when GPS signals are weak or no GPS signals is avoided, the application range of the robot is expanded, and the positioning precision of the robot is improved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the positioning method of the robot according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example 2
According to an embodiment of the present application, there is also provided a positioning method of a robot, as shown in fig. 6, the method including the steps of:
step S602, collecting image information in the robot steering process, and determining a visual reprojection error according to the image information.
In step S602, the robot may be, but is not limited to, a skid steer robot, wherein the chassis of the skid steer robot may be a track chassis or a wheel chassis, and optionally, in this application, the robot is a skid steer robot with a four-wheel chassis.
In an alternative embodiment, the robot has an image capturing device, which may be but is not limited to a camera, and a processor, which may acquire and process image information captured by the image capturing device, so as to obtain the visual re-projection error.
Specifically, the robot firstly acquires continuous multiframe image information acquired by the robot, then extracts feature points in the image information, tracks the feature points in the continuous multiframe image information to obtain track information corresponding to the feature points, and determines position information of the feature points in a three-dimensional space according to the track information. When the image information currently acquired by the robot comprises the feature points, the feature points are projected onto a two-dimensional plane corresponding to the image information currently acquired by the robot according to the position information of the feature points in the three-dimensional space to obtain projection points, and finally, the vision re-projection error is determined according to the positions of the feature points and the positions of the projection points in the image information currently acquired by the robot.
It should be noted that in computer vision, for example when calculating a planar homography matrix or a projection matrix, a cost function is constructed from the visual reprojection error and then minimized in order to optimize the homography matrix or the projection matrix. When the cost function is constructed from the visual reprojection error, not only the calculation error of the homography matrix but also the measurement error of the image points is taken into account, so using the visual reprojection error in computer vision can improve measurement accuracy.
And step S604, determining a minimum objective function corresponding to the robot according to the vision re-projection error.
In step S604, the minimum objective function is an objective function related to a decision variable, in the above embodiment of the present application, the decision variable is a steering parameter of the robot, and the minimum objective function is an objective function related to the steering parameter. And solving the minimum objective function to obtain the steering parameters of the robot. In an alternative embodiment, the steering parameter is the optimal solution when the minimum objective function is taken to its minimum.
In an alternative embodiment, the minimum objective function at least includes the visual reprojection error, and therefore, after the visual reprojection error is obtained, the minimum objective function corresponding to the robot can be obtained.
In another alternative embodiment, the minimum objective function includes at least: visual reprojection errors, inertial measurement constraints, odometry data, and a priori information. The robot first acquires a detection value of an inertial measurement unit provided in the robot, and then determines an inertial measurement constraint based on the detection value of the inertial measurement unit. Wherein the detection value includes: acceleration and angular velocity of the robot. The robot obtains odometer data by reading the detection value of the odometer. The processor of the robot can acquire the acquisition time of the images acquired by the image acquisition equipment, sort the images acquired by the acquisition equipment according to the acquisition time, delete the historical images with the acquisition time longer than the preset time to obtain the target historical images, and finally calculate the edge distribution of the target historical images to obtain the prior information.
After the vision reprojection error, the inertia measurement constraint, the odometer data and the prior information are obtained, the vision reprojection error, the inertia measurement constraint, the odometer data and the prior information are subjected to summation operation to obtain a minimum objective function.
And step S606, estimating the steering parameters by solving the minimum objective function to obtain the steering parameters of the robot.
In step S606, the robot includes left tires and right tires; fig. 3 shows a schematic view of a robot chassis, in which the left side of the robot has two tires and the right side also has two tires. The steering parameters of the robot include: a first coordinate parameter of the instantaneous rotation center of the left tire during steering and a second coordinate parameter of the instantaneous rotation center of the right tire during steering; and a first scale parameter of the left tire, representing the coefficient superimposed on the left tire, and a second scale parameter of the right tire, representing the coefficient superimposed on the right tire.
Optionally, the processor of the robot may determine that the preset function is a minimum objective function, and obtain the steering parameter by solving the minimum objective function.
Step S608, determining a positioning parameter based on the steering parameter and a steering model obtained in advance, and positioning the robot according to the positioning parameter, wherein the steering model is used for representing a relationship between the positioning parameter and the steering parameter.
In step S608, the positioning parameters include the angular velocity of the robot when turning and the linear velocity of the robot when turning. In fig. 3, ω is the angular velocity of the entire robot when it turns and v_o is the linear velocity of the entire robot when it turns; for convenience of calculation, v_o can be decomposed into its components along the X-axis and Y-axis directions, i.e., v_ox and v_oy. Alternatively, the positioning parameters may be represented in matrix form as

[v_ox, v_oy, ω]^T

In addition, in fig. 3, o_l and o_r denote the linear velocities of the left tire and the right tire respectively, where the length of the line segment corresponding to each arrow indicates the magnitude of the linear velocity; in fig. 3, the linear velocity of the left tire is smaller than that of the right tire.

Alternatively, the relation between the positioning parameters and the steering parameters may be represented as

[v_ox, v_oy, ω]^T = g(ξ, o_l, o_r)

wherein the function g(·) represents the steering model.
Further, after the positioning parameters are obtained, the processor of the robot determines the position and the posture of the robot at the current moment according to the angular velocity when the robot turns and the linear velocity when the robot turns, that is, the positioning information of the robot is obtained, and thus, the positioning of the robot is completed.
It is easy to notice that the positioning information of the robot is obtained by processing the image information collected by the robot, and in this process the positioning information is determined accurately by combining the robot's own motion measurements with visual positioning. In addition, because visual positioning is not affected by the strength of the GPS signal, the scheme provided by the present application can position the robot accurately in scenes where the GPS signal is weak or absent, which expands the application range of the robot.
Therefore, the scheme provided by the present application achieves the purpose of positioning the robot, attains the technical effect of improving the positioning precision of the robot, and solves the technical problem in the prior art that the positioning of the robot is inaccurate when the robot turns.
Example 3
According to an embodiment of the present application, there is also provided a positioning apparatus for a robot for implementing the positioning method for a robot, as shown in fig. 7, the apparatus 70 including: an obtaining module 701, a processing module 703 and a determining module 705.
The acquiring module 701 is configured to acquire a constraint function corresponding to the robot, where the constraint function at least includes a visual reprojection error, and the visual reprojection error is determined according to image information acquired by the robot; the processing module 703 is configured to obtain a steering parameter of the robot through a constraint function in a steering process of the robot; the determining module 705 is configured to determine a positioning parameter based on the steering parameter and a steering model obtained in advance, and obtain positioning information of the robot according to the positioning parameter, where the steering model is used to represent a relationship between the positioning parameter and the steering parameter.
It should be noted here that the obtaining module 701, the processing module 703, and the determining module 705 correspond to steps S202 to S206 in Embodiment 1; the three modules are the same as the corresponding steps in terms of implementation examples and application scenarios, but are not limited to the disclosure of Embodiment 1. It should also be noted that the above modules, as part of the apparatus, may run in the computing device 10 provided in Embodiment 1.
Optionally, the robot includes a left tire and a right tire, and the steering parameters include: a first coordinate parameter of the instantaneous rotation center of the left tire during steering and a second coordinate parameter of the instantaneous rotation center of the right tire during steering; a first scale parameter of the left tire for representing coefficients superimposed on the left tire and a second scale parameter of the right tire for representing coefficients superimposed on the right tire.
In an alternative embodiment, the positioning parameters include: the angular velocity of the robot during steering and the linear velocity of the robot during steering, and the determining module includes: a first determining module, configured to determine the position and posture of the robot at the current moment according to the angular velocity and the linear velocity of the robot during steering.
In an alternative embodiment, the visual reprojection error is expressed in terms of steering parameters, wherein the processing module comprises: the device comprises a second determining module and a first processing module. The second determining module is used for determining the constraint function as a minimum objective function; and the first processing module is used for solving the minimum objective function to obtain the steering parameter.
In an alternative embodiment, the obtaining module includes: a first obtaining module and a third determining module. The first obtaining module is configured to obtain continuous multi-frame image information collected by the robot; the third determining module is configured to determine the visual reprojection error of the robot based on the continuous multi-frame image information.
In an alternative embodiment, the third determining module includes: an extraction module, a fourth determining module, a second processing module, and a fifth determining module. The extraction module is configured to extract feature points from the image information and track the feature points across the continuous multi-frame image information to obtain track information corresponding to the feature points; the fourth determining module is configured to determine the position information of the feature points in three-dimensional space according to the track information; the second processing module is configured to, when the image information currently collected by the robot includes the feature points, project the feature points onto the two-dimensional plane corresponding to the currently collected image information according to their position information in three-dimensional space to obtain projection points; and the fifth determining module is configured to determine the visual reprojection error according to the positions of the feature points and the positions of the projection points in the image information currently collected by the robot.
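As an illustrative sketch of the residual these modules produce, the following computes one reprojection error for a tracked feature point under a standard pinhole camera model; the camera pose (R, t) and intrinsic matrix K are assumed inputs and are not part of the module definitions above.

```python
import numpy as np

def reprojection_error(P_world, R, t, K, observed_uv):
    # Transform the 3-D feature point into the current camera frame.
    P_cam = R @ P_world + t
    # Pinhole projection onto the image plane.
    uvw = K @ P_cam
    projected = uvw[:2] / uvw[2]
    # Residual between the detected pixel position and the projection point.
    return observed_uv - projected
```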
In an alternative embodiment, the constraint function further includes: an inertial measurement constraint, and the obtaining module includes: a second obtaining module and a sixth determining module. The second obtaining module is configured to obtain the detection values of the inertial measurement instrument provided in the robot, where the detection values include: the acceleration and the angular velocity of the robot; the sixth determining module is configured to determine the inertial measurement constraint according to the detection values of the inertial measurement instrument.
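A rough sketch of how such a constraint could be formed from the detection values is given below: the accelerometer and gyroscope samples between two image frames are integrated to predict the relative motion, which can then be compared against the optimized states. The planar, bias-free Euler integration is an illustrative simplification, not the formulation of this application.

```python
import numpy as np

def imu_relative_motion(accels, gyros, dt):
    # accels: sequence of 2-D body-frame accelerations; gyros: yaw rates.
    d_theta = 0.0
    vel = np.zeros(2)
    d_pos = np.zeros(2)
    for a, w in zip(accels, gyros):
        c, s = np.cos(d_theta), np.sin(d_theta)
        a_nav = np.array([c * a[0] - s * a[1],
                          s * a[0] + c * a[1]])   # rotate into the start frame
        d_pos += vel * dt + 0.5 * a_nav * dt ** 2
        vel += a_nav * dt
        d_theta += w * dt
    # Predicted change in position, velocity and heading between the frames.
    return d_pos, vel, d_theta
```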
In an alternative embodiment, the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading a detection value of the odometer.
In an alternative embodiment, the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
In an alternative embodiment, the positioning device of the robot further comprises a creation module for creating the steering model, wherein the creation module comprises: the third acquisition module is used for acquiring an intermediate variable, wherein the intermediate variable is determined by the difference value of the rotation center of the left tire and the rotation center of the right tire in the vertical direction; a fourth obtaining module, configured to obtain an intermediate matrix, where the intermediate matrix includes: a first matrix formed by the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and the seventh determining module is used for determining that the corresponding relation between the target matrix formed by the positioning parameters and the intermediate variable and the intermediate matrix is a steering model.
Example 4
According to an embodiment of the present application, there is also provided a positioning system of a robot for implementing the positioning method of the robot, the system including: a processor and a memory.
The memory is connected with the processor and used for providing instructions for the processor to process the following processing steps: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
It should be noted that the processor in this embodiment may execute the positioning method of the robot in embodiment 1, and related contents are already described in embodiment 1 and are not described herein again.
Example 5
Embodiments of the present application may provide a computing device that may be any one of a group of computer terminals. Optionally, in this embodiment, the computing device may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computing device may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the above-mentioned computing device may execute program codes of the following steps in the positioning method of the robot: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
Optionally, fig. 8 is a block diagram of a computing device according to an embodiment of the present application. As shown in fig. 8, the computing device 10 may include: one or more processors 802 (only one of which is shown), a memory 804, and a peripheral interface 806.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the positioning method and apparatus for a robot in the embodiments of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the positioning method for a robot. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memories may further include a memory located remotely from the processor, which may be connected to computing device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
Optionally, the robot includes a left tire and a right tire, and the steering parameters include: a first coordinate parameter of the instantaneous rotation center of the left tire during steering and a second coordinate parameter of the instantaneous rotation center of the right tire during steering; a first scale parameter of the left tire for representing coefficients superimposed on the left tire and a second scale parameter of the right tire for representing coefficients superimposed on the right tire.
Optionally, the processor may further execute the program code of the following steps: and determining the position and the posture of the robot at the current moment according to the angular speed of the robot during steering and the linear speed of the robot during steering. Wherein, the positioning parameters include: angular velocity when the robot is turning and linear velocity when the robot is turning.
Optionally, the processor may further execute the program code of the following steps: determining a constraint function as a minimum objective function; and solving the minimum objective function to obtain the steering parameters. Wherein the visual reprojection error is expressed in terms of steering parameters.
Optionally, the processor may further execute the program code of the following steps: acquiring continuous multiframe image information acquired by a robot; and determining the vision reprojection error of the robot based on the image information of the continuous multiple frames.
Optionally, the processor may further execute the program code of the following steps: extracting feature points from the image information, and tracking the feature points in the continuous multi-frame image information to obtain track information corresponding to the feature points; determining the position information of the feature points in three-dimensional space according to the track information; when the image information currently acquired by the robot includes the feature points, projecting the feature points onto the two-dimensional plane corresponding to the image information currently acquired by the robot according to the position information of the feature points in three-dimensional space to obtain projection points; and determining the visual reprojection error according to the positions of the feature points and the positions of the projection points in the image information currently acquired by the robot.
Optionally, the processor may further execute the program code of the following steps: acquiring a detection value of an inertial measurement instrument provided in the robot, wherein the detection value includes: acceleration and angular velocity of the robot; and determining inertial measurement constraints according to the detection value of the inertial measurement instrument. Wherein the constraint function further comprises: inertial measurement constraints.
Optionally, the constraint function further includes: odometer data, wherein the odometer data is obtained by reading a detection value of the odometer.
Optionally, the constraint function further includes: a priori information, wherein the a priori information includes edge information in the historical image information.
Optionally, the processor may further execute the program code of the following steps: creating a steering model, wherein creating the steering model comprises: acquiring an intermediate variable, wherein the intermediate variable is determined by the difference between the rotation center of the left tire and the rotation center of the right tire in the vertical direction; obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and determining a corresponding relation between the target matrix formed by the positioning parameters and the intermediate variable and the intermediate matrix as a steering model.
It can be understood by those skilled in the art that the structure shown in Fig. 8 is only an illustration, and the computing device may also be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 8 does not limit the structure of the above electronic device. For example, the computing device 10 may include more or fewer components (e.g., a network interface, a display device, etc.) than shown in Fig. 8, or have a configuration different from that shown in Fig. 8.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 6
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be configured to store a program code executed by the positioning method for a robot provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any one computing device in a computer terminal group in a computer network, or in any one mobile terminal in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the robot steering process, steering parameters of the robot are obtained through a constraint function; and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
Optionally, the robot comprises a left tire and a right tire, and the steering parameters comprise: a first coordinate parameter of the instantaneous rotation center of the left tire during steering and a second coordinate parameter of the instantaneous rotation center of the right tire during steering; a first scale parameter of the left tire for representing coefficients superimposed on the left tire and a second scale parameter of the right tire for representing coefficients superimposed on the right tire.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: and determining the position and the posture of the robot at the current moment according to the angular speed of the robot during steering and the linear speed of the robot during steering. Wherein, the positioning parameters include: angular velocity when the robot is turning and linear velocity when the robot is turning.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: determining a constraint function as a minimum objective function; and solving the minimum objective function to obtain the steering parameters. Wherein the visual reprojection error is expressed in terms of steering parameters.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring continuous multiframe image information acquired by a robot; and determining the vision reprojection error of the robot based on the image information of the continuous multiple frames.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: extracting feature points from the image information, and tracking the feature points in the continuous multi-frame image information to obtain track information corresponding to the feature points; determining the position information of the feature points in three-dimensional space according to the track information; when the image information currently acquired by the robot includes the feature points, projecting the feature points onto the two-dimensional plane corresponding to the image information currently acquired by the robot according to the position information of the feature points in three-dimensional space to obtain projection points; and determining the visual reprojection error according to the positions of the feature points and the positions of the projection points in the image information currently acquired by the robot.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring a detection value of an inertial measurement instrument provided in the robot, wherein the detection value includes: acceleration and angular velocity of the robot; and determining inertial measurement constraints according to the detection value of the inertial measurement instrument. Wherein the constraint function further comprises: inertial measurement constraints.
Optionally, the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading a detection value of the odometer.
Optionally, the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: creating a steering model, wherein creating the steering model comprises: acquiring an intermediate variable, wherein the intermediate variable is determined by the difference between the rotation center of the left tire and the rotation center of the right tire in the vertical direction; obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and determining a corresponding relation between the target matrix formed by the positioning parameters and the intermediate variable and the intermediate matrix as a steering model.
According to an embodiment of the present application, the application scenario of the robot may be determined, and whether to enable the visual reprojection error module is decided according to the application scenario; when it is determined according to the application scenario that the visual reprojection error module should be enabled, the robot executes the positioning method of the robot provided by the present application.
Optionally, the robot may determine whether precise positioning is required according to environment information of the environment where it is located. For example, the robot may collect an environment image of its surroundings and analyze the image to obtain the environment information, where the environment information includes, but is not limited to, an indoor environment, an outdoor environment, and the like. If analysis of the environment information determines that the robot is in an indoor restaurant and its task is to deliver meals to tables, the robot needs to be accurately positioned in this application scenario, and the visual reprojection error module of the robot is enabled. Conversely, if it is determined that accurate positioning is not required, for example, the robot is outdoors and its task is to collect soil samples, it is sufficient to ensure that the robot stays within the working area, and the visual reprojection error module of the robot is not enabled.
According to another embodiment of the present application, the user may also determine whether the robot enables the visual reprojection error module. For example, the user may control the robot to turn the visual reprojection error module on or off by sending a control instruction to the robot. The user may send the control instruction to the robot by voice, or through a control terminal (for example, a remote controller or an upper computer).
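A compact sketch of this decision logic is given below; the function name, scene labels, and command strings are hypothetical stand-ins used only to illustrate the two triggers described above (scene analysis and an explicit user instruction).

```python
def should_enable_reprojection_module(scene_label, user_command=None):
    # An explicit instruction from voice or a control terminal overrides
    # the scene-based decision.
    if user_command is not None:
        return user_command == "enable"
    # Scenes assumed to require precise positioning (labels are illustrative).
    precise_scenes = {"indoor_restaurant"}
    return scene_label in precise_scenes
```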
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, and a magnetic or optical disk.
The foregoing is merely an alternative embodiment of the present application. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principle of the present application, and such improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (15)

1. A method of positioning a robot, comprising:
acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
in the robot steering process, obtaining the steering parameters of the robot through the constraint function;
and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
2. The method of claim 1, wherein the robot includes a left tire and a right tire, and the steering parameters include:
a first coordinate parameter of the instantaneous center of rotation of the left tire during steering and a second coordinate parameter of the instantaneous center of rotation of the right tire during steering;
a first scale parameter for the left tire representing a coefficient superimposed on the left tire and a second scale parameter for the right tire representing a coefficient superimposed on the right tire.
3. The method of claim 1, wherein the positioning parameters comprise: an angular velocity of the robot during steering and a linear velocity of the robot during steering, and wherein obtaining the positioning information of the robot according to the positioning parameters comprises:
and determining the position and the posture of the robot at the current moment according to the angular velocity of the robot during steering and the linear velocity of the robot during steering.
4. The method of claim 1, wherein the visual reprojection error is expressed in terms of the steering parameters, and wherein obtaining the steering parameters of the robot through the constraint function during the steering of the robot comprises:
determining the constraint function as a minimum objective function;
and solving the minimum objective function to obtain the steering parameter.
5. The method of claim 1, wherein obtaining the constraint function corresponding to the robot comprises:
acquiring continuous multiframe image information acquired by the robot;
determining a visual reprojection error of the robot based on the image information of consecutive frames.
6. The method of claim 5, wherein determining a visual reprojection error of the robot based on the image information of consecutive frames comprises:
extracting feature points from the image information, and tracking the feature points in continuous multi-frame image information to obtain track information corresponding to the feature points;
determining the position information of the feature points in the three-dimensional space according to the track information;
when the image information currently acquired by the robot comprises the feature points, projecting the feature points to a two-dimensional plane corresponding to the image information currently acquired by the robot according to the position information of the feature points in the three-dimensional space to obtain projection points;
and determining the visual reprojection error according to the positions of the feature points and the positions of the projection points in the image information currently acquired by the robot.
7. The method of claim 1, wherein the constraint function further comprises: an inertial measurement constraint, wherein obtaining the inertial measurement constraint of the robot comprises:
acquiring a detection value of an inertial measurement instrument provided in the robot, wherein the detection value includes: acceleration and angular velocity of the robot;
and determining the inertial measurement constraint according to the detection value of the inertial measurement instrument.
8. The method of claim 1, wherein the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading a detection value of an odometer.
9. The method of claim 1, wherein the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
10. The method of claim 2, wherein creating the steering model comprises:
acquiring an intermediate variable, wherein the intermediate variable is determined by the difference between the rotation center of the left tire and the rotation center of the right tire in the vertical direction;
obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by an abscissa parameter of the first coordinate parameter, a vertical coordinate parameter of the first coordinate parameter, and a vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire;
and determining the corresponding relation between a target matrix formed by the positioning parameters and the intermediate variable and the intermediate matrix as the steering model.
11. A method of positioning a robot, comprising:
acquiring image information in the robot steering process, and determining a visual re-projection error according to the image information;
determining a minimum objective function corresponding to the robot according to the vision reprojection error;
estimating steering parameters by solving the minimum objective function to obtain the steering parameters of the robot;
and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and positioning the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
12. A positioning device for a robot, comprising:
the acquisition module is used for acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
the processing module is used for obtaining the steering parameters of the robot through the constraint function in the steering process of the robot;
and the determining module is used for determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for expressing the relation between the positioning parameter and the steering parameter.
13. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the positioning method of the robot according to any one of claims 1 to 10.
14. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to perform the method of positioning a robot according to any one of claims 1 to 10 when running.
15. A positioning system for a robot, comprising:
a processor; and
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps:
acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
in the robot steering process, obtaining the steering parameters of the robot through the constraint function;
and determining a positioning parameter based on the steering parameter and a steering model acquired in advance, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
CN201911025754.1A 2019-10-25 2019-10-25 Positioning method, device and system of robot Pending CN112710308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911025754.1A CN112710308A (en) 2019-10-25 2019-10-25 Positioning method, device and system of robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911025754.1A CN112710308A (en) 2019-10-25 2019-10-25 Positioning method, device and system of robot

Publications (1)

Publication Number Publication Date
CN112710308A true CN112710308A (en) 2021-04-27

Family

ID=75540955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911025754.1A Pending CN112710308A (en) 2019-10-25 2019-10-25 Positioning method, device and system of robot

Country Status (1)

Country Link
CN (1) CN112710308A (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330356B1 (en) * 1999-09-29 2001-12-11 Rockwell Science Center Llc Dynamic visual registration of a 3-D object with a graphical model
CN101077578A (en) * 2007-07-03 2007-11-28 北京控制工程研究所 Mobile Robot local paths planning method on the basis of binary environmental information
CN101301220A (en) * 2008-07-03 2008-11-12 哈尔滨工程大学 Positioning apparatus of robot puncturing hole in endoscope operation and locating method
CN101619984A (en) * 2009-07-28 2010-01-06 重庆邮电大学 Mobile robot visual navigation method based on colorful road signs
US20100220173A1 (en) * 2009-02-20 2010-09-02 Google Inc. Estimation of Panoramic Camera Orientation Relative to a Vehicle Coordinate Frame
CN102221358A (en) * 2011-03-23 2011-10-19 中国人民解放军国防科学技术大学 Monocular visual positioning method based on inverse perspective projection transformation
US20130058581A1 (en) * 2010-06-23 2013-03-07 Beihang University Microscopic Vision Measurement Method Based On Adaptive Positioning Of Camera Coordinate Frame
CN103706568A (en) * 2013-11-26 2014-04-09 中国船舶重工集团公司第七一六研究所 System and method for machine vision-based robot sorting
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
JP2015182144A (en) * 2014-03-20 2015-10-22 キヤノン株式会社 Robot system and calibration method of robot system
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108827315A (en) * 2018-08-17 2018-11-16 华南理工大学 Vision inertia odometer position and orientation estimation method and device based on manifold pre-integration
CN109676604A (en) * 2018-12-26 2019-04-26 清华大学 Robot non-plane motion localization method and its motion locating system
CN109959381A (en) * 2017-12-22 2019-07-02 深圳市优必选科技有限公司 A kind of localization method, device, robot and computer readable storage medium
US20190204084A1 (en) * 2017-09-29 2019-07-04 Goertek Inc. Binocular vision localization method, device and system
WO2019136613A1 (en) * 2018-01-09 2019-07-18 深圳市沃特沃德股份有限公司 Indoor locating method and device for robot
US20190306666A1 (en) * 2016-12-23 2019-10-03 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Positioning method, terminal and server
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination