CN109214254B - Method and device for determining displacement of robot


Info

Publication number
CN109214254B
Authority
CN
China
Prior art keywords
images, displacement, corner, determining, image
Prior art date
Legal status
Active
Application number
CN201710552202.0A
Other languages
Chinese (zh)
Other versions
CN109214254A (en)
Inventor
郑卫锋
The other inventors have requested that their names not be disclosed
Current Assignee
Zhendi Technology Co., Ltd
Original Assignee
PowerVision Robot Inc
Priority date
Filing date
Publication date
Application filed by PowerVision Robot Inc
Priority to CN201710552202.0A
Publication of CN109214254A
Application granted
Publication of CN109214254B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of electronic equipment, and in particular to a method and a device for determining robot displacement, which are used to solve the problem in the prior art that the horizontal displacement of a robot is positioned inaccurately. The method provided by the embodiment of the application comprises the following steps: acquiring two adjacent frames of images shot by a camera on the robot; extracting the corner points of each of the two adjacent frames of images, and calculating the feature vector of each corner point; determining the matching corner points in the two adjacent frames of images according to the determined feature vector of each corner point; and calculating the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images based on the matching corner points in the two adjacent frames of images. Because no resolution-reduction processing is performed on the two adjacent frames of images shot by the camera on the robot, more image features can be matched, and the horizontal displacement of the unmanned aerial vehicle is positioned more accurately.

Description

Method and device for determining displacement of robot
Technical Field
The present application relates to the field of electronic devices, and in particular, to a method and an apparatus for determining a displacement of a robot.
Background
With the development of science and technology, robots have gradually entered the public's field of vision, become part of people's lives, and are slowly changing how people live. At present, robots are applied in a very wide range of fields, for example unmanned aerial vehicles used for aerial photography, remote-sensing mapping and military reconnaissance, and robots used for geological survey, environmental monitoring and line inspection.
In the moving process of a robot, whether its displacement information can be acquired quickly and accurately is very important for its further position movement. For example, when an unmanned aerial vehicle used for aerial photography hovers, its current displacement information needs to be detected in real time; only after the current displacement information is acquired can it be determined whether the displacement needs to be further adjusted. For another example, in the process of moving toward a destination, a robot used for line inspection can accurately adjust its moving route only after acquiring its current displacement information. Nowadays, those skilled in the relevant art mostly determine the horizontal displacement of a robot based on common feature information in two adjacent frames of images taken by a camera installed on the robot.
In the prior art, before feature information in two adjacent frames of images is matched, a feature search needs to be performed on the two frames of images. Because the amount of information contained in one frame of image is large, performing the feature search on the whole frame reduces the operation speed of the processor, which is not conducive to real-time feedback of the horizontal displacement of the robot. Therefore, in order to increase the matching speed, in the prior art the acquired images are usually subjected to resolution reduction multiple times before the feature search, and feature matching is then performed layer by layer from the layer with the lowest resolution to the layers with higher resolution. Although this prior-art method improves the matching speed, feature objects may be lost during the layer-by-layer feature matching. For example, suppose the two frames of images both contain a building and a vehicle: after the resolution has been reduced multiple times, when feature matching starts from the image layer with the lowest resolution, the feature object of the vehicle may become blurred because the resolution is relatively low and the area of the vehicle is relatively small, so the vehicle is not matched and only the feature object of the building is matched. As a result, further detailed matching is subsequently performed only on the matched building; that is, if the feature object of the vehicle is not matched in the image layer with the lowest resolution, the vehicle is not subjected to detailed matching later. Consequently, the common feature objects matched between the two adjacent frames of images are reduced, and the error is larger when the robot displacement is calculated based on fewer common feature objects.
In summary, in the prior art, when the horizontal displacement of the robot is calculated from two adjacent frames of images, the image resolution is reduced to improve the matching speed, but this approach may lose feature objects, so the horizontal displacement of the robot is positioned inaccurately.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining robot displacement, which are used for solving the problem of inaccurate positioning of horizontal displacement of a robot in the prior art.
The method for determining the displacement of the robot provided by the embodiment of the application comprises the following steps:
acquiring two adjacent frames of images shot by a camera on the robot;
extracting the corner points of each of the two adjacent frames of images, and calculating the feature vector of each corner point;
determining a matching corner point in the two adjacent frames of images according to the determined feature vector of each corner point;
and calculating the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frames of images based on the matched corner points in the two adjacent frames of images.
Optionally, the extracting corner points of the image includes:
acquiring a gray matrix of the image;
calculating the convolution of the gray matrix and the horizontal direction gradient template to be used as a horizontal gradient matrix;
calculating the convolution of the gray matrix and the vertical gradient template to be used as a vertical gradient matrix;
calculating a response matrix for determining the position of the corner point according to the calculated horizontal gradient matrix, the calculated vertical gradient matrix and the calculated Gaussian function; each element in the response matrix corresponds to a pixel point in the image, and the value of each element is used for expressing the response value of the pixel point corresponding to the element;
and determining pixel points corresponding to elements of which the response values are greater than a first preset threshold value in the response matrix as corner points of the image.
Optionally, the feature vector of each corner point is determined according to the following steps:
determining a main direction angle of the angular point according to the gradient direction of each pixel point within a first preset distance range away from the angular point under a first coordinate system;
rotating the first coordinate system according to the main direction angle to obtain a second coordinate system;
and determining the feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range away from the corner point in a second coordinate system.
Optionally, determining a matching corner in the two adjacent frames of images according to the determined feature vector of each corner, including:
determining a K-dimensional tree according to the feature vector of each corner of any one of the two adjacent frames of images; the dimension of the feature vector of each corner point is the same, and the value of K is equal to the dimension of the feature vector;
and searching a corner point which is closest to the feature vector of the corner point in the K-dimensional tree aiming at each corner point of another frame of image except any one frame of image in the two adjacent frames of images, and determining the searched corner point as a matching corner point of the corner point.
Optionally, determining, based on the matched corner points in the two adjacent images, a displacement of the robot from capturing a previous image to capturing a next image in the two adjacent images, including:
selecting a sample corner point with a preset logarithm from the matching corner points of the two adjacent frames of images;
determining a rotary displacement transformation matrix for representing the position offset between the matched corner points of the two adjacent frames of images based on the sample corner points;
determining the matching corner points with the position offset smaller than a second preset threshold value in the matching corner points of the two adjacent frames of images as optimized matching corner points based on the rotation displacement transformation matrix;
if the proportion of the optimized matching corner points in the matching corner points of the two adjacent frames of images is smaller than a third preset threshold value, returning to the step of selecting a preset logarithm sample corner point from the matching corner points of the two adjacent frames of images;
and determining the displacement of the robot from shooting the previous frame image to shooting the next frame image in the two adjacent frames of images according to the optimized matching corner points.
Optionally, determining, according to the optimized matching corner, a displacement of the robot from shooting of a previous image to shooting of a next image in the two adjacent images, including:
determining the angular point displacement of each optimized matching angular point in any one of the two adjacent frames of images from the position in the previous frame of image to the position in the next frame of image;
determining the average value of the angular point displacements of all the matched angular points as the reference object displacement corresponding to the two adjacent frames of images; or determining the number of matched angular points corresponding to each divided displacement interval according to the angular point displacement of each matched angular point, and determining the middle value of the displacement interval with the largest number of matched angular points as the displacement of the reference object corresponding to the two adjacent frames of images;
and according to the determined reference object displacement, determining the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frame images.
Optionally, the displacement (X, Y) of the robot from capturing a previous image to capturing a next image of the two adjacent images is determined based on the following formula:
X=x×H/f;
Y=y×H/f;
wherein (x, y) is a reference displacement, H is a shooting distance when the camera on the robot shoots the two adjacent frames of images, and f is a focal length when the camera on the robot shoots the two adjacent frames of images.
The device for determining the displacement of the robot provided by the embodiment of the application comprises:
the acquisition module is used for acquiring two adjacent frames of images shot by a camera on the robot;
the extraction module is used for extracting the corner points of each image in the two adjacent images;
the characteristic vector calculation module is used for calculating a characteristic vector of each corner point;
a matching corner determining module, configured to determine matching corners in the two adjacent frames of images according to the determined feature vector of each corner;
and the displacement calculation module is used for calculating the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frames of images based on the matched corner points in the two adjacent frames of images.
The extraction module is specifically configured to:
acquiring a gray matrix of the image;
calculating the convolution of the gray matrix and the horizontal direction gradient template to be used as a horizontal gradient matrix;
calculating the convolution of the gray matrix and the vertical gradient template to be used as a vertical gradient matrix;
calculating a response matrix for determining the position of the corner point according to the calculated horizontal gradient matrix, the calculated vertical gradient matrix and the calculated Gaussian function; each element in the response matrix corresponds to a pixel point in the image, and the value of each element is used for expressing the response value of the pixel point corresponding to the element;
and determining pixel points corresponding to elements of which the response values are greater than a first preset threshold value in the response matrix as corner points of the image.
Optionally, the feature vector calculation module is specifically configured to:
determining a main direction angle of the angular point according to the gradient direction of each pixel point within a first preset distance range away from the angular point under a first coordinate system;
rotating the first coordinate system according to the main direction angle to obtain a second coordinate system;
and determining the feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range away from the corner point in a second coordinate system.
Optionally, the matching corner point determining module is specifically configured to:
determining a K-dimensional tree according to the feature vector of each corner of any one of the two adjacent frames of images; the dimension of the feature vector of each corner point is the same, and the value of K is equal to the dimension of the feature vector;
and searching a corner point which is closest to the feature vector of the corner point in the K-dimensional tree aiming at each corner point of another frame of image except any one frame of image in the two adjacent frames of images, and determining the searched corner point as a matching corner point of the corner point.
Optionally, the displacement calculation module is specifically configured to:
selecting a sample corner point with a preset logarithm from the matching corner points of the two adjacent frames of images;
determining a rotary displacement transformation matrix for representing the position offset between the matched corner points of the two adjacent frames of images based on the sample corner points;
determining the matching corner points with the position offset smaller than a second preset threshold value in the matching corner points of the two adjacent frames of images as optimized matching corner points based on the rotation displacement transformation matrix;
if the proportion of the optimized matching corner points in the matching corner points of the two adjacent frames of images is smaller than a third preset threshold value, returning to the step of selecting a preset logarithm sample corner point from the matching corner points of the two adjacent frames of images;
and determining the displacement of the robot from shooting the previous frame image to shooting the next frame image in the two adjacent frames of images according to the optimized matching corner points.
Optionally, the displacement calculation module is specifically configured to:
determining the angular point displacement of each optimized matching angular point in any one of the two adjacent frames of images from the position in the previous frame of image to the position in the next frame of image;
determining the average value of the angular point displacements of all the matched angular points as the reference object displacement corresponding to the two adjacent frames of images; or determining the number of matched angular points corresponding to each divided displacement interval according to the angular point displacement of each matched angular point, and determining the middle value of the displacement interval with the largest number of matched angular points as the displacement of the reference object corresponding to the two adjacent frames of images;
and according to the determined reference object displacement, determining the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frame images.
Optionally, the displacement (X, Y) of the robot from capturing a previous image to capturing a next image of the two adjacent images is determined based on the following formula:
X=x×H/f;
Y=y×H/f;
wherein (x, y) is a reference displacement, H is a shooting distance when the camera on the robot shoots the two adjacent frames of images, and f is a focal length when the camera on the robot shoots the two adjacent frames of images.
In the embodiment of the application, two adjacent frames of images shot by a camera on a robot are acquired; for each of the two adjacent frames of images, the corner points of the image are extracted and the feature vector of each corner point is calculated; the matching corner points in the two adjacent frames of images are then determined according to the determined feature vector of each corner point; and the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images is calculated based on the matching corner points in the two adjacent frames of images. In this way, corner matching is performed on the two adjacent frames of images at the original resolution at which the camera shot them, so more common feature objects can be matched, and on the basis of these more common feature objects the horizontal displacement of the unmanned aerial vehicle can be positioned more accurately.
Drawings
Fig. 1 is a flowchart of a method for determining robot displacement according to an embodiment of the present disclosure;
fig. 2 is a flowchart of optimizing a matching corner provided in the embodiment of the present application;
fig. 3 is a schematic diagram of a matching corner point in two adjacent frames of images according to an embodiment of the present application;
fig. 4 is a flowchart of determining robot displacement according to matching corner displacement according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a robot provided in an embodiment of the present application taking an image during movement;
FIG. 6 is a diagram illustrating a relationship between a displacement of a robot and a displacement of an image captured by a camera mounted on the robot according to an embodiment of the present disclosure;
fig. 7 is a flowchart of determining a displacement of a robot according to a displacement of a matching corner point according to an embodiment of the present application;
fig. 8 is a flowchart of another method for determining robot displacement according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of determining a main direction angle of a corner point according to an embodiment of the present application;
fig. 10 is a schematic diagram of determining a gradient histogram of a corner point according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a second coordinate system provided by an embodiment of the present application;
FIG. 12 is a diagram illustrating a feature vector for determining a seed point according to an embodiment of the present disclosure;
fig. 13 is a schematic diagram of feature vectors for determining a corner point according to an embodiment of the present application;
fig. 14 is a structural diagram of an apparatus for determining displacement of a robot according to an embodiment of the present application.
Detailed Description
In the embodiment of the application, two adjacent frames of images shot by a camera on a robot are acquired; for each of the two adjacent frames of images, the corner points of the image are extracted and the feature vector of each corner point is calculated; the matching corner points in the two adjacent frames of images are then determined according to the determined feature vector of each corner point; and the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images is calculated based on the matching corner points in the two adjacent frames of images. In this way, corner matching is performed on the two adjacent frames of images at the original resolution at which the camera shot them, so more common feature objects can be matched, and on the basis of these more common feature objects the horizontal displacement of the unmanned aerial vehicle can be positioned more accurately.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
Example one
As shown in fig. 1, a flowchart of a method for determining a displacement of a robot provided in an embodiment of the present application includes the following steps:
s101: two adjacent frames of images taken by a camera on the robot are acquired.
In practical applications, while an unmanned aerial vehicle is flying, the camera installed on it takes pictures of the places it flies over at a preset time interval and then transmits these images to the processor in real time; the processor determines the displacement of the unmanned aerial vehicle according to the information in these images. However, because the unmanned aerial vehicle flies fast, if images that are many frames apart are used to determine the displacement, matching feature information may not be obtained. Therefore, two adjacent frames of images shot by the camera on the unmanned aerial vehicle are acquired, so that as much matching feature information as possible can be obtained, which further guarantees the accuracy of the determined displacement of the unmanned aerial vehicle.
S102: and extracting the corner points of each image in two adjacent frames of images, and calculating the characteristic vector of each corner point.
Optionally, for each image in two adjacent frames, extracting corner points in the image according to the following steps:
firstly, acquiring a gray matrix of the image, calculating the convolution of the gray matrix and a preset horizontal direction gradient template to be used as a horizontal gradient matrix, calculating the convolution of the gray matrix and a preset vertical direction gradient template to be used as a vertical gradient matrix, calculating a response matrix for determining the position of an angular point according to the horizontal gradient matrix, the vertical gradient matrix and a Gaussian function, and determining a pixel point corresponding to an element of which the response value is greater than a first preset threshold value in the response matrix as the angular point of the image; each element in the response matrix corresponds to a pixel point in the image, the value of each element is used for representing the size of the response value of the pixel point corresponding to the element, and the probability that the pixel point is the corner point is higher when the response value of one pixel point is larger.
In a specific implementation process, for each corner extracted from two adjacent frames of images, a feature vector of the corner can be determined according to the following steps:
in a first coordinate system, determining a main direction angle range of the corner point according to the gradient direction of each pixel point within a first preset distance range from the corner point, selecting an angle from the main direction angle range as a main direction angle of the corner point, rotating the first coordinate system according to the main direction angle to obtain a second coordinate system, and then in the second coordinate system, determining a feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range from the corner point.
S103: and determining the matched corner points in the two adjacent frames of images according to the determined feature vector of each corner point.
In a specific implementation process, after feature vectors of all corner points in two adjacent frames of images are obtained, a K-dimensional tree can be determined according to the feature vector of the corner point included in any one of the two adjacent frames of images, then for each corner point included in another frame of image except for any one of the two adjacent frames of images, a corner point closest to the feature vector of the corner point is searched in the K-dimensional tree, and the searched corner point is determined as a matching corner point of the corner point; the dimension of the feature vector of each corner point is the same, and the value of K is equal to the dimension of the feature vector.
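By way of illustration only, the nearest-neighbor lookup described above could be sketched in Python with SciPy's cKDTree as follows; the function and variable names are illustrative and are not taken from the patent, and the optional distance cutoff is an added assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_corners(desc_prev, desc_next, max_dist=None):
    """Match each corner of the next frame to its nearest corner in the previous frame.

    desc_prev, desc_next: (N, K) and (M, K) arrays, one K-dimensional feature
    vector per corner (K = 128 for the descriptor described later in the text).
    Returns a list of (index_in_prev, index_in_next) pairs.
    """
    tree = cKDTree(desc_prev)                # K-dimensional tree over one frame's corners
    dists, idx = tree.query(desc_next, k=1)  # closest previous-frame corner for each corner
    matches = []
    for j, (d, i) in enumerate(zip(dists, idx)):
        if max_dist is None or d <= max_dist:  # optional cutoff, not part of the patent
            matches.append((int(i), int(j)))
    return matches
```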
S104: and calculating the displacement of the robot from the shooting of the previous image to the shooting of the next image in the two adjacent images based on the matched corner points in the two adjacent images.
Optionally, because there are many matching corner points in two adjacent frames of images, before calculating the horizontal displacement of the robot, optimization may be performed on the determined matching corner points, specifically, optimization may be performed according to the flow shown in fig. 2:
s201: and selecting a sample corner point with a preset logarithm from the matching corner points in the two adjacent frames of images.
S202: and determining a rotary displacement transformation matrix for representing the position offset between the matched corner points in the two adjacent frames of images based on the sample corner points.
The rotation displacement transformation matrix comprises a rotation matrix R and a translation matrix T between matching points.
In particular, assume that x and x' are any pair of matching corner points in the two adjacent frames of images. Given enough pairs of matching corner points (at least 7 pairs), the fundamental matrix F can be calculated from the formula x'^T F x = 0; the essential matrix E is then calculated from the formula E = K^T F K, and Singular Value Decomposition (SVD) is performed on the essential matrix E to obtain the rotation matrix R and the translation matrix T. Here, K is the camera intrinsic parameter matrix; in computer vision, the camera intrinsic parameter matrix K is:

K = | f/dx    0     u0 |
    |  0     f/dy   v0 |
    |  0      0      1 |

where f is the focal length of the camera, dx and dy are the length and width of one pixel, respectively, and (u0, v0) is the coordinate of the image center position (the origin may be taken at the lower left corner of the image).
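For illustration, a minimal Python sketch of this step using OpenCV is given below; cv2.recoverPose is used here to perform the SVD-based decomposition of E and to resolve its sign ambiguity, which is an implementation choice rather than something stated in the patent, and the matched points are assumed to be given as pixel coordinates:

```python
import cv2

def rotation_translation_from_matches(pts1, pts2, K):
    """Estimate the rotation matrix R and translation matrix T between two frames.

    pts1, pts2: (N, 2) float arrays of matched corner coordinates in pixels, N >= 8.
    K: 3x3 camera intrinsic parameter matrix (NumPy array).
    """
    # Fundamental matrix from the epipolar constraint x'^T F x = 0
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    # Essential matrix E = K^T F K
    E = K.T @ F @ K
    # SVD-based decomposition of E; recoverPose also picks the (R, T) pair
    # that places the reconstructed points in front of both cameras.
    _, R, T, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, T
```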
S203: and determining the matching corner points with the position offset smaller than a second preset threshold value in the matching corner points in the two adjacent frames of images as the optimized matching corner points based on the rotation displacement transformation matrix.
As shown in FIG. 3, assume that the projections of a point P in the two adjacent frames of images are P1 and P2 (a pair of matching corner points), where P1(x1, y1, 1) and P2(x2, y2, 1) are the positions of P1 and P2 in the plane coordinate systems of the two frames of images, respectively.

First, the position P1(X1, Y1, Z1) of P1 in the camera coordinate system is obtained according to the following formula:

(X1, Y1, Z1)^T = K^(-1) (x1, y1, 1)^T

where K is the camera intrinsic parameter matrix.

Then, the position P1(X2, Y2, Z2) of P1 in the camera coordinate system of the second frame image is obtained according to the following formula:

(X2, Y2, Z2)^T = R (X1, Y1, Z1)^T + T

where

R = | r11  r12  r13 |
    | r21  r22  r23 |
    | r31  r32  r33 |

is the rotation matrix, and

T = (t1, t2, t3)^T

is the translation matrix.
Further, the position P1(x2', y2', 1) of P1 in the plane coordinate system of the second frame image is calculated according to the following formula:

(x2', y2', 1)^T = K (X2, Y2, Z2)^T / Z2

Finally, the distance Δd between P1 and P2 is calculated according to the following formula:

Δd = sqrt((x2' - x2)^2 + (y2' - y2)^2)

If Δd is smaller than Td (the second preset threshold), the pair of matching corner points is determined as an optimized pair of matching corner points.
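A minimal NumPy sketch of this Δd check is shown below; it assumes the depth of P1 in the first camera frame is taken as 1 when back-projecting with K^(-1), as in the formulas above, and the threshold value Td is illustrative:

```python
import numpy as np

def optimized_matches(pts1, pts2, K, R, T, Td=2.0):
    """Return a boolean mask of matches whose position offset delta_d is below Td.

    pts1, pts2: (N, 2) matched corner coordinates in the previous / next frame (pixels).
    K: 3x3 intrinsic matrix; R, T: rotation matrix and translation vector between frames.
    Td: second preset threshold (the value here is illustrative).
    """
    K_inv = np.linalg.inv(K)
    p1_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # (x1, y1, 1) rows
    P1_cam = K_inv @ p1_h.T                            # positions in the first camera frame
    P2_cam = R @ P1_cam + np.reshape(T, (3, 1))        # transform into the second camera frame
    p2_proj = K @ P2_cam
    p2_proj = (p2_proj[:2] / p2_proj[2]).T             # predicted (x2', y2') in the second image
    delta_d = np.linalg.norm(p2_proj - pts2, axis=1)   # position offset of each pair
    return delta_d < Td                                # True means an optimized matching pair
```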
S204: judging whether the proportion of the optimized matching corner points in the two adjacent frames of images is smaller than a third preset threshold value or not, if so, entering S205; if not, the process returns to S201.
For example, there are 100 pairs of matching corners in two adjacent frames of images, and if 40 pairs of matching corners are determined in step S204, the ratio of the matching corners in the two adjacent frames of images is optimized to be 40%, and if the third preset threshold is 60%, the process proceeds to step S205; if the third preset threshold is 40%, the process returns to S201.
S205: and determining the displacement of the robot from shooting the previous frame image to shooting the next frame image in the two adjacent frames of images according to the optimized matching corner points.
In a specific implementation process, after the optimal matching corner points in two adjacent images are determined, the displacement of the robot may be determined according to any one of the following manners.
The first method is as follows: for each pair of optimized matching angular points, determining angular point displacement of the matching angular point from the position in the previous frame image to the position in the next frame image in the two adjacent frames of images; determining the average value of the angular point displacements of all the matched angular points as the reference object displacement corresponding to the two adjacent frames of images; and determining the displacement of the robot according to the displacement of the reference object.
Specifically, the above process may be performed according to the steps shown in fig. 4:
s401: and determining the angular point displacement of the pair of matched angular points from the position in the previous frame image to the position in the next frame image in the two adjacent frames of images aiming at each optimized matched angular point in any one frame image in the two adjacent frames of images.
S402: and determining the average value of the angular point displacements of all the matched angular points as the reference object displacement corresponding to two adjacent frames of images.
S403: and determining the displacement of the robot according to the displacement of the reference object.
Specifically, as shown in fig. 5, A denotes the camera installed on the drone, which takes pictures of the places the drone flies over and transfers the taken pictures to the processor in real time. Assume that the focal length of the camera is f when it takes an image, the height of the camera above the ground when taking an image is H, and the lateral displacement of the unmanned aerial vehicle on the ground is X, which can be characterized by the lateral displacement x of the reference object in the two adjacent frames of images, as shown in fig. 6.
As can be seen from fig. 6, the following relationship exists between the lateral displacement x of the reference object in the two adjacent frames of images and the lateral displacement X of the unmanned aerial vehicle on the ground:

x / f = X / H

Accordingly, the longitudinal displacement Y of the drone on the ground can be characterized by the longitudinal displacement y of the reference object in the two adjacent frames of images, and the following relationship exists between them:

y / f = Y / H
then, after determining the reference object displacement corresponding to the two adjacent frames of images, the displacement (X, Y) of the drone may be determined based on the following formula:
X=x×H/f;
Y=y×H/f;
where (x, y) is the reference object displacement, measured in pixels (the pixel units need to be converted into the international unit of meters during calculation), H is the shooting distance when the camera on the unmanned aerial vehicle shoots the two adjacent frames of images, and f is the focal length when the camera on the unmanned aerial vehicle shoots the two adjacent frames of images.
Here, it is further explained that H represents the vertical height when the two adjacent frames of images are captured by the camera installed on the drone if the two adjacent frames of images are captured by the camera; if two adjacent images are obtained from a camera installed on a robot walking on the ground, H represents a lateral photographing distance at which the camera photographs the two images. For both cases, H can be acquired by an ultrasound probe, which can be mounted on the robot or used separately on the ground.
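As an illustrative sketch of this conversion in mode one, the following Python function applies X = x*H/f and Y = y*H/f after converting the pixel-valued reference displacement into meters; the pixel_size parameter is an assumed input used only for that unit conversion:

```python
def robot_displacement(dx_px, dy_px, H, f, pixel_size):
    """Convert the reference-object displacement measured in the images into
    the robot's horizontal displacement on the ground.

    dx_px, dy_px: reference-object displacement in pixels (e.g. the average corner displacement).
    H: shooting distance (for a drone, the height above ground), in meters.
    f: focal length of the camera, in meters.
    pixel_size: physical size of one pixel, in meters (assumed input used to convert
                the pixel units into meters, as the text above requires).
    """
    x = dx_px * pixel_size  # reference displacement in meters on the image plane
    y = dy_px * pixel_size
    X = x * H / f           # X = x * H / f
    Y = y * H / f           # Y = y * H / f
    return X, Y
```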
The second method comprises the following steps: aiming at each optimized matching corner point in any one of two adjacent frames of images, determining the corner point displacement of the matching corner point from the position in the previous frame of image to the position in the next frame of image in the two adjacent frames of images; and determining the number of matched angular points corresponding to each divided displacement interval according to the angular point displacement of each matched angular point, and determining the middle value of the displacement interval with the largest number of matched angular points as the displacement of the reference object corresponding to two adjacent frames of images.
Specifically, the above process may be performed according to the steps shown in fig. 7:
s701: and aiming at each optimized matching corner point in any one of two adjacent frames of images, determining the corner point displacement of the matching corner point from the position in the previous frame of image to the position in the next frame of image in the two adjacent frames of images.
S702: and determining the number of the matched angular points corresponding to each divided displacement interval according to the angular point displacement of each matched angular point.
Specifically, firstly, a statistical interval is determined; then, equally dividing the statistical interval into N displacement intervals; and finally, marking the angular point displacement corresponding to each pair of matched angular points on the statistical interval. The left end point value and the right end point value of the statistical interval are respectively the minimum value and the maximum value in the displacement of the matching angular points; and N is a positive integer and represents the subdivision degree of the statistical interval, and can be determined according to the number of the matched corner points.
S703: and determining the middle value of the displacement interval with the largest number of matched corner points as the displacement of the reference object corresponding to the two adjacent frames of images.
Here, although the displacements of all the matching corner points are counted, the displacements of all the matching corner points are not used to determine the displacement of the reference object, but the corner point displacement interval with the most concentrated distribution is used to determine the displacement of the reference object. Therefore, corner displacement with larger error can be abandoned, and the accuracy is higher.
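A small NumPy sketch of this second mode (steps S702 and S703) is given below; the number of intervals N is left as a parameter, and treating each displacement axis independently is an assumption made for brevity:

```python
import numpy as np

def reference_displacement_by_voting(corner_disps, n_intervals):
    """Pick the reference-object displacement as the middle value of the most
    populated displacement interval.

    corner_disps: 1-D array of the corner displacements of all optimized matching
                  corner points along one axis (in pixels).
    n_intervals: N, the number of equal displacement intervals.
    """
    # The statistical interval spans from the minimum to the maximum corner displacement,
    # which is NumPy's default histogram range.
    counts, edges = np.histogram(corner_disps, bins=n_intervals)
    k = int(np.argmax(counts))              # interval containing the most matched corner points
    return 0.5 * (edges[k] + edges[k + 1])  # middle value of that interval
```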
S704: and determining the displacement of the robot according to the displacement of the reference object.
Optionally, this step is performed as in S403, and is not described herein again.
In the embodiment of the application, two adjacent frames of images shot by a camera on a robot are acquired; for each of the two adjacent frames of images, the corner points of the image are extracted and the feature vector of each corner point is calculated; the matching corner points in the two adjacent frames of images are then determined according to the determined feature vector of each corner point; and the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images is calculated based on the matching corner points in the two adjacent frames of images. In this way, corner matching is performed on the two adjacent frames of images at the original resolution at which the camera shot them, so more common feature objects can be matched, and on the basis of these more common feature objects the horizontal displacement of the unmanned aerial vehicle can be positioned more accurately.
Example two
As shown in fig. 8, a flowchart of another method for determining a displacement of a robot according to an embodiment of the present application includes the following steps:
s801: two adjacent frames of images taken by a camera on the robot are acquired.
S802: and extracting the corner points of each image in two adjacent frames of images, and calculating the characteristic vector of each corner point.
Here, for each of two adjacent frames of images, the image is converted into a grayscale image, a grayscale matrix I for representing the image is acquired, and then a horizontal gradient matrix Ix and a vertical gradient matrix Iy of the image are calculated according to the following formulas:
Ix = I ⊗ Tx,    Iy = I ⊗ Ty

where ⊗ denotes convolution, and Tx and Ty are the preset horizontal-direction gradient template and the preset vertical-direction gradient template, respectively.
Further, the response matrix M is calculated from the horizontal gradient matrix Ix, the vertical gradient matrix Iy and the Gaussian function w(x, y) according to the following formula:

[response-matrix formula shown as an image in the original]

where w(x, y) is the Gaussian function

w(x, y) = (1 / (2πσ^2)) exp(-(x^2 + y^2) / (2σ^2))

and σ may take 0.5 in actual calculation.
Optionally, each element in the response matrix M corresponds to a pixel point in the image, a value of each element is used to represent a response value of the pixel point corresponding to the element, and the probability that the pixel point is the corner point is higher when the response value is larger, so that the pixel point with the response value larger than the first preset threshold in the response matrix can be determined as the corner point.
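For illustration, a compact Python sketch of this corner extraction step is given below. The gradient templates, the Gaussian sigma of 0.5 and the thresholding follow the description above, while the specific response expression det - k*trace^2 is the usual Harris-style form and is an assumption, since the patent's exact response formula is not reproduced here; the template values and the threshold are also illustrative:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def extract_corners(gray, k=0.04, sigma=0.5, threshold=1e6):
    """Extract corner points from a grayscale image (2-D array).

    Returns a list of (x, y) corner coordinates and the response matrix M.
    The response form det - k*trace^2 and the threshold value are assumptions.
    """
    gray = gray.astype(float)
    tx = np.array([[-1.0, 0.0, 1.0]])   # horizontal-direction gradient template (assumed values)
    ty = tx.T                           # vertical-direction gradient template (assumed values)
    Ix = convolve(gray, tx)             # horizontal gradient matrix
    Iy = convolve(gray, ty)             # vertical gradient matrix
    # Gaussian-weighted products of the gradients, w(x, y) with sigma = 0.5
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # Per-pixel corner response; a larger response means the pixel is more likely a corner
    M = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    ys, xs = np.nonzero(M > threshold)  # first preset threshold
    return list(zip(xs.tolist(), ys.tolist())), M
```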
In a specific implementation process, after obtaining corner points in two adjacent frames of images, a feature vector of each corner point can be determined according to the following steps:
as shown in fig. 9, for any corner point, under a first coordinate system, xoy (x axis is horizontal direction, y axis is vertical direction), and with the corner point (large black dot) as a center, calculating a gradient direction of each pixel point (small black dot) within a first preset distance range from the corner point, wherein the direction of an arrow represents the gradient direction of the pixel point, the length of the arrow represents the gradient size of the pixel point, then dividing 0-360 ° into 10 angular intervals, for each pixel point in fig. 9, counting the angular interval in which the gradient direction of the pixel point falls to obtain a gradient histogram of the corner point, as shown in fig. 10, then determining the angular interval in which the number of the falling pixels is the largest as a main direction angular range of the corner point, and then selecting an angle from the main direction angular range as a main direction angle of the corner point, for example, the middle angle of the main direction angle range is used as the main direction angle of the corner point.
Assuming that the main direction angle determined for a corner point in fig. 9 is 60°, the rectangular coordinate system in fig. 9 can be rotated until the x axis coincides with the main direction of the corner point, resulting in a second coordinate system x'oy', as shown in fig. 11.
As shown in fig. 12, in the second coordinate system, a window of a preset size, for example 8 pixels × 8 pixels, is taken with the corner point as its center, and each cell in the window represents a pixel. The gradient magnitude and gradient direction of each pixel point are calculated, where the direction of an arrow represents the gradient direction of the pixel point and its length represents the gradient magnitude. A Gaussian function is then used to weight the pixel points in the 8 pixel × 8 pixel window, so that pixel points close to the corner point have larger weights and pixel points far from the corner point have smaller weights. Next, based on the pixel points contained in each 4 pixel × 4 pixel window, a gradient histogram containing 8 directions is drawn, the accumulated gradient magnitude of the pixel points in each gradient direction is calculated, and a seed point is formed, as shown in the right diagram of fig. 12. Assuming that the gradient strengths in the 8 directions, taking the horizontal-right direction as the starting point and proceeding counterclockwise, are 2, 2, 4, 1, 2, 3, 2 and 2 respectively, the feature vector of this seed point is (2, 2, 4, 1, 2, 3, 2, 2). Combining the direction information of neighborhoods in this way enhances the noise resistance and at the same time provides reasonable fault tolerance for feature matching that contains positioning errors.
It should be noted that, the method for determining the seed points is different from the method for determining the main direction angle range of the corner points, at this time, the gradient histogram of each seed region is divided into 8 direction intervals between 0 ° and 360 °, each interval is 45 °, that is, each seed point has gradient strength information in 8 directions.
In a specific implementation process, in order to enhance the robustness of matching, 16 seed points of 4 × 4 may be used for each corner point to describe, so that one corner point may generate a 128-dimensional feature vector, as shown in fig. 13, for 16 seed points in the figure, the feature vectors of the respective seed points may be concatenated in order from left to right and from top to bottom to obtain the 128-dimensional feature vector of the corner point at the center.
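The following simplified Python sketch strings these descriptor steps together: a 10-bin histogram for the main direction, gradient directions expressed relative to the main direction angle, and a 4 × 4 grid of 8-direction histograms concatenated into a 128-dimensional vector. The window size, the omission of Gaussian weighting and the fact that only the gradient directions (not the sampling grid) are rotated are simplifications relative to the description above:

```python
import numpy as np

def corner_feature_vector(gray, cx, cy, radius=8):
    """Compute a 128-dimensional feature vector for the corner at integer
    coordinates (cx, cy), assumed to lie at least `radius` pixels from the border."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)                           # gradient magnitude per pixel
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # gradient direction per pixel

    # 1. Main direction: most populated of 10 angular intervals around the corner.
    ys, xs = np.mgrid[cy - radius:cy + radius, cx - radius:cx + radius]
    hist, edges = np.histogram(ang[ys, xs], bins=10, range=(0, 2 * np.pi))
    b = int(np.argmax(hist))
    main_angle = 0.5 * (edges[b] + edges[b + 1])     # middle angle of the main-direction range

    # 2. Express gradient directions relative to the main direction
    #    (stands in for rotating the coordinate system).
    rel_ang = np.mod(ang[ys, xs] - main_angle, 2 * np.pi)
    rel_mag = mag[ys, xs]

    # 3. 4 x 4 seed regions, 8-direction histogram per region -> 128-dimensional vector.
    desc = np.zeros((4, 4, 8))
    cell = (2 * radius) // 4
    for i in range(4):
        for j in range(4):
            a = rel_ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = rel_mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            h, _ = np.histogram(a, bins=8, range=(0, 2 * np.pi), weights=m)
            desc[i, j] = h  # accumulated gradient magnitude in each of the 8 directions
    return desc.reshape(-1)                          # seed points concatenated row by row
```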
S803: and determining the matched corner points in the two adjacent frames of images according to the determined feature vector of each corner point.
In a specific implementation process, a K-dimensional tree may be determined according to a feature vector of each corner of any one of two adjacent frames of images, for each corner of another frame of image except the any one of the two adjacent frames of images, a corner closest to the feature vector of the corner is searched in the K-dimensional tree, and the found corner is determined as a matching corner of the corner, where if the feature vector of each corner is 128-dimensional, K equals to 128.
S804: and calculating the displacement of the robot from the shooting of the previous image to the shooting of the next image in the two adjacent images based on the matched corner points in the two adjacent images.
Here, the implementation of this step is the same as that of the first embodiment, and is not described herein again.
In the embodiment of the application, two adjacent frames of images shot by a camera on a robot are acquired; for each of the two adjacent frames of images, the corner points of the image are extracted and the feature vector of each corner point is calculated; the matching corner points in the two adjacent frames of images are then determined according to the determined feature vector of each corner point; and the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images is calculated based on the matching corner points in the two adjacent frames of images. In this way, corner matching is performed on the two adjacent frames of images at the original resolution at which the camera shot them, so more common feature objects can be matched, and on the basis of these more common feature objects the horizontal displacement of the unmanned aerial vehicle can be positioned more accurately.
EXAMPLE III
Based on the same inventive concept, the embodiment of the present application further provides a device for determining robot displacement corresponding to the method for determining robot displacement, and as the principle of solving the problem of the device is similar to that of the method for determining robot displacement in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
As shown in fig. 14, a block diagram of an apparatus for determining a displacement of a robot according to an embodiment of the present application includes:
an acquiring module 1401, configured to acquire two adjacent frames of images captured by a camera on a robot;
an extracting module 1402, configured to extract an angular point of each of the two adjacent frames of images;
a feature vector calculation module 1403, configured to calculate a feature vector of each corner point;
a matching corner determining module 1404, configured to determine matching corners in two adjacent frames of images according to the determined feature vector of each corner;
and the displacement calculation module 1405 is used for calculating the displacement of the robot from the shooting of the previous image to the shooting of the next image in the two adjacent images based on the matched corner points in the two adjacent images.
The extraction module 1402 is specifically configured to:
acquiring a gray matrix of the image;
calculating the convolution of the gray matrix and the horizontal direction gradient template to be used as a horizontal gradient matrix;
calculating the convolution of the gray matrix and the vertical gradient template to be used as a vertical gradient matrix;
calculating a response matrix for determining the position of the corner point according to the calculated horizontal gradient matrix, the calculated vertical gradient matrix and the calculated Gaussian function; each element in the response matrix corresponds to a pixel point in the image, and the value of each element is used for expressing the response value of the pixel point corresponding to the element;
and determining pixel points corresponding to elements of which the response values are greater than a first preset threshold value in the response matrix as corner points of the image.
Optionally, the feature vector calculation module 1403 is specifically configured to:
determining a main direction angle of the angular point according to the gradient direction of each pixel point within a first preset distance range away from the angular point under a first coordinate system;
rotating the first coordinate system according to the main direction angle to obtain a second coordinate system;
and determining the feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range away from the corner point in a second coordinate system.
Optionally, the matching corner determination module 1404 is specifically configured to:
determining a K-dimensional tree according to the feature vector of each corner of any one of two adjacent frames of images; the dimension of the feature vector of each corner point is the same, and the value of K is equal to the dimension of the feature vector;
and searching a corner point which is closest to the characteristic vector of the corner point in the K-dimensional tree aiming at each corner point of another frame of image except any one frame of image in two adjacent frames of images, and determining the searched corner point as a matching corner point of the corner point.
Optionally, the displacement calculation module 1405 is specifically configured to:
selecting a sample angular point with a preset logarithm from matching angular points of two adjacent frames of images;
determining a rotary displacement transformation matrix for representing the position offset between the matched corner points of two adjacent frames of images based on the sample corner points;
determining the matching corner points with the position offset smaller than a second preset threshold value in the matching corner points of two adjacent frames of images as optimized matching corner points based on the rotation displacement transformation matrix;
if the proportion of the optimized matching corner points in the matching corner points of the two adjacent frames of images is smaller than a third preset threshold value, returning to the step of selecting a preset logarithm sample corner point from the matching corner points of the two adjacent frames of images;
and determining the displacement of the robot from shooting the previous frame image to shooting the next frame image in the two adjacent frames of images according to the optimized matching corner points.
Optionally, the displacement calculation module 1405 is specifically configured to:
aiming at each optimized matching corner point in any one of two adjacent frames of images, determining the corner point displacement of the matching corner point from the position in the previous frame of image to the position in the next frame of image in the two adjacent frames of images;
determining the average value of the angular point displacements of all the matched angular points as the reference object displacement corresponding to the two adjacent frames of images; or determining the number of matched angular points corresponding to each divided displacement interval according to the angular point displacement of each matched angular point, and determining the middle value of the displacement interval with the largest number of matched angular points as the displacement of the reference object corresponding to the two adjacent frames of images;
and determining the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frames of images according to the determined displacement of the reference object.
Optionally, the displacement (X, Y) of the robot from capturing the previous image to capturing the next image of the two adjacent images is determined based on the following formula:
X=x×H/f;
Y=y×H/f;
where (x, y) is the reference displacement, H is the shooting distance when the camera on the robot shoots two adjacent frames of images, and f is the focal length when the camera on the robot shoots two adjacent frames of images.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, those skilled in the art may make additional changes and modifications to these embodiments once they become aware of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (14)

1. A method of determining robot displacement, comprising:
acquiring two adjacent frames of images shot by a camera on the robot;
extracting the corner points of each of the two adjacent frames of images, and calculating the feature vector of each corner point;
determining a matching corner point in the two adjacent frames of images according to the determined feature vector of each corner point;
and calculating the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frames of images based on the matched corner points in the two adjacent frames of images.
2. The method of claim 1, wherein said extracting the corner points of the image comprises:
acquiring a gray matrix of the image;
calculating the convolution of the gray matrix and a horizontal gradient template as a horizontal gradient matrix;
calculating the convolution of the gray matrix and a vertical gradient template as a vertical gradient matrix;
calculating, according to the calculated horizontal gradient matrix, the calculated vertical gradient matrix and a Gaussian function, a response matrix for determining the positions of the corner points; each element in the response matrix corresponds to a pixel point in the image, and the value of each element is used to represent the response value of the pixel point corresponding to that element;
and determining pixel points corresponding to elements whose response values are greater than a first preset threshold in the response matrix as the corner points of the image.
3. The method of claim 1, wherein the feature vector for each corner point is determined according to the following steps:
determining a main direction angle of the corner point according to the gradient direction of each pixel point within a first preset distance range from the corner point in a first coordinate system;
rotating the first coordinate system according to the main direction angle to obtain a second coordinate system;
and determining the feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range from the corner point in the second coordinate system.
4. The method of claim 1, wherein determining a matching corner point in the two adjacent frames of images according to the determined feature vector of each corner point comprises:
determining a K-dimensional tree according to the feature vector of each corner point of either one of the two adjacent frames of images; the feature vectors of the corner points all have the same dimension, and the value of K is equal to that dimension;
and for each corner point of the other frame of image in the two adjacent frames of images, searching the K-dimensional tree for the corner point whose feature vector is closest to the feature vector of that corner point, and determining the found corner point as the matching corner point of that corner point.
5. The method of any one of claims 1 to 4, wherein determining the displacement of the robot from the shooting of the previous image to the shooting of the next image in the two adjacent images based on the matching corner points in the two adjacent images comprises:
selecting a preset number of pairs of sample corner points from the matching corner points of the two adjacent frames of images;
determining, based on the sample corner points, a rotation-displacement transformation matrix for representing the position offset between the matching corner points of the two adjacent frames of images;
determining, based on the rotation-displacement transformation matrix, matching corner points whose position offset is smaller than a second preset threshold among the matching corner points of the two adjacent frames of images as optimized matching corner points;
if the proportion of the optimized matching corner points among the matching corner points of the two adjacent frames of images is smaller than a third preset threshold, determining the displacement of the robot from shooting the previous frame image to shooting the next frame image according to the optimized matching corner points; otherwise, returning to the step of selecting a preset number of pairs of sample corner points from the matching corner points of the two adjacent frames of images.
6. The method of claim 5, wherein determining the displacement of the robot from the previous image to the next image in the two adjacent images according to the optimized matching corner points comprises:
determining, for each optimized matching corner point in either one of the two adjacent frames of images, the corner point displacement from its position in the previous frame image to its position in the next frame image;
determining the average value of the corner point displacements of all the matched corner points as the reference object displacement corresponding to the two adjacent frames of images; or, according to the corner point displacement of each matched corner point, determining the number of matched corner points falling into each divided displacement interval, and determining the middle value of the displacement interval containing the largest number of matched corner points as the reference object displacement corresponding to the two adjacent frames of images;
and determining, according to the determined reference object displacement, the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images.
7. The method of claim 6, wherein the displacement (X, Y) of the robot from capturing a previous image to capturing a subsequent image of the two adjacent images is determined based on the following formula:
X=x×H/f;
Y=y×H/f;
wherein (x, y) is the reference object displacement, H is the shooting distance at which the camera on the robot captured the two adjacent frames of images, and f is the focal length at which the camera on the robot captured the two adjacent frames of images.
8. An apparatus for determining displacement of a robot, comprising:
the acquisition module is used for acquiring two adjacent frames of images shot by a camera on the robot;
the extraction module is used for extracting the corner points of each image in the two adjacent images;
the feature vector calculation module is used for calculating a feature vector of each corner point;
a matching corner determining module, configured to determine matching corners in the two adjacent frames of images according to the determined feature vector of each corner;
and the displacement calculation module is used for calculating the displacement of the robot from the shooting of the previous frame image to the shooting of the next frame image in the two adjacent frames of images based on the matched corner points in the two adjacent frames of images.
9. The apparatus of claim 8, wherein the extraction module is specifically configured to:
acquiring a gray matrix of the image;
calculating the convolution of the gray matrix and a horizontal gradient template as a horizontal gradient matrix;
calculating the convolution of the gray matrix and a vertical gradient template as a vertical gradient matrix;
calculating, according to the calculated horizontal gradient matrix, the calculated vertical gradient matrix and a Gaussian function, a response matrix for determining the positions of the corner points; each element in the response matrix corresponds to a pixel point in the image, and the value of each element is used to represent the response value of the pixel point corresponding to that element;
and determining pixel points corresponding to elements whose response values are greater than a first preset threshold in the response matrix as the corner points of the image.
10. The apparatus of claim 8, wherein the feature vector calculation module is specifically configured to:
determining a main direction angle of the corner point according to the gradient direction of each pixel point within a first preset distance range from the corner point in a first coordinate system;
rotating the first coordinate system according to the main direction angle to obtain a second coordinate system;
and determining the feature vector of the corner point according to the gradient amplitude and the gradient direction of each pixel point within a second preset distance range from the corner point in the second coordinate system.
11. The apparatus of claim 8, wherein the matching corner determination module is specifically configured to:
determining a K-dimensional tree according to the feature vector of each corner point of either one of the two adjacent frames of images; the feature vectors of the corner points all have the same dimension, and the value of K is equal to that dimension;
and for each corner point of the other frame of image in the two adjacent frames of images, searching the K-dimensional tree for the corner point whose feature vector is closest to the feature vector of that corner point, and determining the found corner point as the matching corner point of that corner point.
12. The apparatus of any one of claims 8 to 11, wherein the displacement calculation module is specifically configured to:
selecting a preset number of pairs of sample corner points from the matching corner points of the two adjacent frames of images;
determining, based on the sample corner points, a rotation-displacement transformation matrix for representing the position offset between the matching corner points of the two adjacent frames of images;
determining, based on the rotation-displacement transformation matrix, matching corner points whose position offset is smaller than a second preset threshold among the matching corner points of the two adjacent frames of images as optimized matching corner points;
if the proportion of the optimized matching corner points among the matching corner points of the two adjacent frames of images is smaller than a third preset threshold, determining the displacement of the robot from shooting the previous frame image to shooting the next frame image according to the optimized matching corner points; otherwise, returning to the step of selecting a preset number of pairs of sample corner points from the matching corner points of the two adjacent frames of images.
13. The apparatus of claim 12, wherein the displacement calculation module is specifically configured to:
determining, for each optimized matching corner point in either one of the two adjacent frames of images, the corner point displacement from its position in the previous frame image to its position in the next frame image;
determining the average value of the corner point displacements of all the matched corner points as the reference object displacement corresponding to the two adjacent frames of images; or, according to the corner point displacement of each matched corner point, determining the number of matched corner points falling into each divided displacement interval, and determining the middle value of the displacement interval containing the largest number of matched corner points as the reference object displacement corresponding to the two adjacent frames of images;
and determining, according to the determined reference object displacement, the displacement of the robot from shooting the previous frame image to shooting the next frame image of the two adjacent frames of images.
14. The apparatus of claim 13, wherein the displacement (X, Y) of the robot from capturing a previous image to capturing a subsequent image of the two adjacent images is determined based on the following formula:
X=x×H/f;
Y=y×H/f;
wherein (x, y) is the reference object displacement, H is the shooting distance at which the camera on the robot captured the two adjacent frames of images, and f is the focal length at which the camera on the robot captured the two adjacent frames of images.
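For a concrete picture of how the claimed steps fit together, the sketch below strings together a Harris-style corner response, a simplified gradient-orientation descriptor, K-d tree nearest-neighbour matching, and a RANSAC-like translation filter. It is an illustrative approximation under stated assumptions, not the patented implementation: the descriptor omits the main-direction rotation of claims 3 and 10, and all window sizes, thresholds, and helper names (harris_corners, patch_descriptor, match_and_estimate) are invented here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d
from scipy.spatial import cKDTree

def harris_corners(gray, k=0.04, rel_thresh=0.01):
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # horizontal gradient template
    sy = sx.T                                                    # vertical gradient template
    ix = convolve2d(gray, sx, mode='same')                       # horizontal gradient matrix
    iy = convolve2d(gray, sy, mode='same')                       # vertical gradient matrix
    ixx = gaussian_filter(ix * ix, 1.0)                          # Gaussian-weighted products
    iyy = gaussian_filter(iy * iy, 1.0)
    ixy = gaussian_filter(ix * iy, 1.0)
    resp = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2           # corner response matrix
    return np.argwhere(resp > rel_thresh * resp.max())           # (row, col) of corner pixels

def patch_descriptor(gray, pt, radius=8):
    r, c = pt
    patch = gray[max(0, r - radius):r + radius, max(0, c - radius):c + radius]
    gx = np.gradient(patch, axis=1)
    gy = np.gradient(patch, axis=0)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)                  # 8-bin orientation histogram

def match_and_estimate(gray0, gray1, iters=100, inlier_px=3.0):
    c0, c1 = harris_corners(gray0), harris_corners(gray1)
    d0 = np.array([patch_descriptor(gray0, p) for p in c0])
    d1 = np.array([patch_descriptor(gray1, p) for p in c1])
    tree = cKDTree(d0)                                           # K-d tree over frame-0 descriptors
    _, idx = tree.query(d1)                                      # nearest neighbour per frame-1 corner
    p0, p1 = c0[idx].astype(float), c1.astype(float)
    best = None
    rng = np.random.default_rng(0)
    for _ in range(iters):                                       # RANSAC-like translation hypotheses
        j = rng.integers(len(p0))
        t = p1[j] - p0[j]
        inliers = np.linalg.norm(p1 - (p0 + t), axis=1) < inlier_px
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return (p1[best] - p0[best]).mean(axis=0)                    # reference displacement in pixels
```

Given two consecutive grayscale frames as floating-point arrays, match_and_estimate(gray0, gray1) would return a pixel-space reference displacement that can then be scaled by H/f in the manner of claims 7 and 14; a production implementation would additionally apply the main-direction normalization of claims 3 and 10 and the proportion test of claims 5 and 12.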
CN201710552202.0A 2017-07-07 2017-07-07 Method and device for determining displacement of robot Active CN109214254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710552202.0A CN109214254B (en) 2017-07-07 2017-07-07 Method and device for determining displacement of robot

Publications (2)

Publication Number Publication Date
CN109214254A CN109214254A (en) 2019-01-15
CN109214254B true CN109214254B (en) 2020-08-14

Family

ID=64991160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710552202.0A Active CN109214254B (en) 2017-07-07 2017-07-07 Method and device for determining displacement of robot

Country Status (1)

Country Link
CN (1) CN109214254B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505463A (en) * 2019-08-23 2019-11-26 上海亦我信息技术有限公司 Based on the real-time automatic 3D modeling method taken pictures
CN111228655A (en) * 2020-01-14 2020-06-05 于金明 Monitoring method and device based on virtual intelligent medical platform and storage medium
CN111738093B (en) * 2020-05-28 2024-03-29 哈尔滨工业大学 Automatic speed measuring method for curling balls based on gradient characteristics
CN112669378A (en) * 2020-12-07 2021-04-16 山东省科学院海洋仪器仪表研究所 Method for rapidly detecting angular points of underwater images of seawater
CN116661479B (en) * 2023-07-28 2023-11-07 深圳市城市公共安全技术研究院有限公司 Building inspection path planning method, equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102506868A (en) * 2011-11-21 2012-06-20 清华大学 SINS (strap-down inertia navigation system)/SMANS (scene matching auxiliary navigation system)/TRNS (terrain reference navigation system) combined navigation method based on federated filtering and system
CN105549614A (en) * 2015-12-17 2016-05-04 北京猎鹰无人机科技有限公司 Target tracking method of unmanned plane
CN106093455A (en) * 2014-04-10 2016-11-09 深圳市大疆创新科技有限公司 The measuring method of the flight parameter of unmanned vehicle and device
CN106403924A (en) * 2016-08-24 2017-02-15 智能侠(北京)科技有限公司 Method for robot fast positioning and attitude estimation based on depth camera
CN106709950A (en) * 2016-11-28 2017-05-24 西安工程大学 Binocular-vision-based cross-obstacle lead positioning method of line patrol robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2790874C (en) * 2011-10-07 2019-11-05 E2M Technologies B.V. A motion platform system


Also Published As

Publication number Publication date
CN109214254A (en) 2019-01-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: 264200 Zone E, blue venture Valley, No. 40, Yangguang Road, Nanhai new area, Weihai City, Shandong Province
Patentee after: Zhendi Technology Co., Ltd
Address before: 100086 3rd floor, block a, Zhizhen building, 7 Zhichun Road, Haidian District, Beijing
Patentee before: POWERVISION TECH Inc.