CN110332930B - Position determination method, device and equipment - Google Patents

Position determination method, device and equipment

Info

Publication number
CN110332930B
CN110332930B (application CN201910702089.9A)
Authority
CN
China
Prior art keywords
monocular camera
coordinate
target object
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910702089.9A
Other languages
Chinese (zh)
Other versions
CN110332930A (en)
Inventor
檀冲
张书新
赵海洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Puppy Vacuum Cleaner Group Co Ltd
Original Assignee
Xiaogou Electric Internet Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaogou Electric Internet Technology Beijing Co Ltd filed Critical Xiaogou Electric Internet Technology Beijing Co Ltd
Priority to CN201910702089.9A priority Critical patent/CN110332930B/en
Publication of CN110332930A publication Critical patent/CN110332930A/en
Application granted granted Critical
Publication of CN110332930B publication Critical patent/CN110332930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Abstract

The application discloses a position determination method. Specifically, a first image including a target object, shot by a monocular camera when the monocular camera is located at a first position, may be acquired, and a second image including the target object, shot by the monocular camera after it rotates by a preset angle around a preset rotation axis at the first position, may be acquired. After the first image and the second image are acquired, the target coordinates may be determined according to a conversion relationship between a first coordinate of the target object in the first image and the target coordinates of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinates. With the scheme provided by the embodiments of the application, the target coordinates of the target object in the world coordinate system can be determined accurately, and the performance of the intelligent mobile device can thereby be improved.

Description

Position determination method, device and equipment
Technical Field
The present application relates to the field of data processing, and in particular, to a method, an apparatus, and a device for determining a location.
Background
With the development of science and technology, intelligent mobile devices such as intelligent robots and intelligent sweepers have appeared. The working principle of such an intelligent mobile device includes the following steps: shooting an image of the real three-dimensional space, determining the position coordinates, in a world coordinate system, of a target object in that space according to the shot image, and controlling the intelligent mobile device to move toward the target object based on those position coordinates.
Currently, an image of the real three-dimensional space can be captured with a monocular camera, but the position coordinates of a target object in the world coordinate system cannot be determined from an image captured by the monocular camera alone. In the conventional art, a monocular camera is therefore combined with an Inertial Measurement Unit (IMU) to determine the position coordinates of a target object in the world coordinate system.
However, before the IMU is put into use, calibration is required, and a certain error is introduced in the calibration process, so that the position coordinates of the target object in the world coordinate system determined based on the monocular camera and the IMU in the prior art are inaccurate, and further, the performance of the intelligent mobile device is affected.
Disclosure of Invention
The technical problem to be solved by the application is that in the prior art, the position coordinates of a target object in a world coordinate system determined based on a monocular camera and an IMU are inaccurate, so that the performance of intelligent mobile equipment is affected.
In a first aspect, an embodiment of the present application provides a method for determining a position, where the method includes:
the method comprises the steps of acquiring a first image which is shot by a monocular camera and comprises a target object when the monocular camera is located at a first position, and acquiring a second image which is shot by the monocular camera and comprises the target object after the monocular camera rotates around a preset rotating shaft at the first position for a preset angle;
and determining the target coordinates according to the conversion relation between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system and the conversion relation between the second coordinates of the target object in the second image and the target coordinates.
Optionally, the determining the target coordinate according to a conversion relationship between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinate includes:
determining a first equation reflecting a functional relationship between the first coordinate and the target coordinate according to the first coordinate, an internal reference matrix of the monocular camera and a first external reference matrix of the monocular camera; wherein the first external reference matrix of the monocular camera is determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera;
determining a second equation reflecting a functional relationship between the second coordinate and the target coordinate according to the second coordinate, the internal reference matrix of the monocular camera and the second external reference matrix of the monocular camera; the second external reference matrix of the monocular camera is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera; the second position is a position where the monocular camera is located after rotating around the preset rotating shaft by a preset angle at the first position;
and determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
Optionally, the first position is an origin of a camera coordinate system of the monocular camera.
Optionally, the preset rotation axis is an X axis, a Y axis, or a Z axis of a camera coordinate system of the monocular camera.
In a second aspect, an embodiment of the present application provides a position determination apparatus, including:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a first image which is shot by a monocular camera when the monocular camera is positioned at a first position and comprises a target object, and acquiring a second image which is shot by the monocular camera after the monocular camera rotates around a preset rotating shaft at the first position by a preset angle and comprises the target object;
a determining unit, configured to determine the target coordinate according to a conversion relationship between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinate.
Optionally, the determining unit includes:
the first determining subunit is used for determining a first equation reflecting the functional relationship between the first coordinate and the target coordinate according to the first coordinate, the internal reference matrix of the monocular camera and the first external reference matrix of the monocular camera; wherein the first external reference matrix of the monocular camera is determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera;
a second determining subunit, configured to determine, according to the second coordinate, the internal reference matrix of the monocular camera, and the second external reference matrix of the monocular camera, a second equation that represents a functional relationship between the second coordinate and the target coordinate; the second external reference matrix of the monocular camera is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera; the second position is a position where the monocular camera is located after rotating around the preset rotating shaft by a preset angle at the first position;
and the determining subunit is used for determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
Optionally, the first position is an origin of a camera coordinate system of the monocular camera.
Optionally, the preset rotation axis is an X axis, a Y axis, or a Z axis of a camera coordinate system of the monocular camera.
In a third aspect, an embodiment of the present application provides a location determining apparatus, where the apparatus includes: a processor and a memory;
the memory to store instructions;
the processor, configured to execute the instructions in the memory, to perform the method of any of the above first aspects.
In a fourth aspect, an embodiment of the present application provides an intelligent mobile device, where the intelligent mobile device includes: the monocular camera comprises a monocular camera and a rotation control mechanism, wherein the rotation control mechanism is used for controlling the monocular camera to rotate for a preset angle around a preset rotating shaft.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application provides a position determining method, and specifically, a first image including a target object and captured by a monocular camera when the monocular camera is located at a first position can be obtained, and a second image including the target object and captured by the monocular camera after the monocular camera rotates around a preset rotation axis by a preset angle at the first position can be obtained. After the first image and the second image are acquired, the target coordinates may be determined according to a conversion relationship between a first coordinate of the target object in the first image and target coordinates of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinates. Since the first coordinates of the target object in the first image are accurate and the second coordinates of the target object in the second image are also accurate, the conversion relationship between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system and the conversion relationship between the second coordinates of the target object in the second image and the target coordinates are accurate. Therefore, the target coordinates of the calculated target object in the world coordinate system can be considered to be accurate. Therefore, by the scheme provided by the embodiment of the application, the target coordinate of the target object in the world coordinate system can be accurately determined, and further, the performance of the intelligent mobile device can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a position determining method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining target coordinates according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a position determination apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventor of the present application has found through research that, in the conventional technology, a monocular camera and an Inertial Measurement Unit (IMU) may be combined to determine the position coordinates of a target object in a world coordinate system. However, before the IMU is put into use, calibration is required, and a certain error is introduced in the calibration process, so that the position coordinates of the target object in the world coordinate system determined based on the monocular camera and the IMU in the prior art are inaccurate, and further, the performance of the intelligent mobile device is affected. For example, it may cause the smart mobile device to not accurately reach the location of the target object.
In order to solve the above problem, an embodiment of the present application provides a position determining method, which can accurately determine a target coordinate of a target object in a world coordinate system, and further can improve performance of an intelligent mobile device.
Various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
Exemplary method
Referring to fig. 1, the figure is a schematic flowchart of a position determination method according to an embodiment of the present application. The position determining method provided in the embodiment of the present application may be executed by a processing device, where the processing device may be a server, and the processing device may also be a processor of the intelligent device, and the embodiment of the present application is not particularly limited.
The position determining method provided in the embodiment of the present application can be implemented, for example, by the following steps S101 to S102.
S101: the method comprises the steps of obtaining a first image which is shot by a monocular camera when the monocular camera is located at a first position and comprises a target object, and obtaining a second image which is shot by the monocular camera after the monocular camera rotates around a preset rotating shaft at the first position for a preset angle, wherein the second image comprises the target object.
In the embodiment of the present application, the monocular camera may be a camera located on the smart mobile device. The embodiment of the present application does not specifically limit the smart mobile device; the smart mobile device may be any device that can move to a target position based on environment information of the real three-dimensional space. The intelligent mobile device can be, for example, an intelligent sweeper, or, for another example, an intelligent robot, or the like. Of course, the smart mobile device may have other functions besides the function of moving to the target position; for example, an intelligent robot in a restaurant may also have the function of placing dishes on a table, and the description thereof is omitted here.
In this embodiment of the application, the smart mobile device may further include a corresponding rotation control mechanism, in addition to the monocular camera, where the rotation control mechanism is configured to control the monocular camera to rotate around a preset rotation axis by a preset angle. The embodiment of the application does not specifically limit the preset rotating shaft, and the preset rotating shaft can be determined according to actual conditions. The preset angle is not specifically limited, and the preset angle can be determined according to actual conditions. As an example, the preset angle may be determined according to a shooting angle of view of the monocular camera, for example, the preset angle may be an arbitrary angle smaller than one third of the shooting angle of view of the monocular camera.
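For illustration only, the following minimal sketch shows one way such a preset angle could be picked from the camera's horizontal field of view, following the rule of thumb above that the angle stays below one third of the shooting angle of view; the concrete fraction, safety factor and field-of-view value are assumptions, not values from the patent.

```python
def choose_preset_angle(horizontal_fov_deg: float, fraction: float = 1.0 / 3.0) -> float:
    """Pick a rotation angle strictly smaller than `fraction` of the camera's
    horizontal field of view, so the target object can stay visible in both images."""
    upper_bound = horizontal_fov_deg * fraction
    # Keep a small margin below the bound; 0.9 is an arbitrary safety factor.
    return 0.9 * upper_bound

# Example: a hypothetical monocular camera with a 60-degree horizontal field of view.
print(choose_preset_angle(60.0))  # -> 18.0 degrees
```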
It should be noted that the target object mentioned in the embodiment of the present application may be an object existing in a real three-dimensional space, for example, the target object may be a table, a chair, a door, a window, and the like existing in the real three-dimensional space, and a description thereof is omitted here.
In the embodiment of the present application, the target coordinates of the target object in the world coordinate system may represent the position of the target object in the real stereo space.
In an embodiment of the present application, to determine target coordinates of a target object in a world coordinate system, a monocular camera may be controlled to take a first image including the target object when the monocular camera is located at a first position. Then, the monocular camera is controlled to rotate around a preset rotation axis by a preset angle by using the rotation control mechanism, and for convenience of description, the position where the monocular camera is located after rotating around the preset rotation axis by the preset angle at the first position is referred to as a "second position". The monocular camera may be controlled to capture a second image including the target object when the monocular camera is located at the second position.
It is understood that information of the target object in the real stereoscopic space cannot be completely known using only the first image because the monocular camera can photograph an object of the real stereoscopic space as a two-dimensional planar image, and thus, a part of depth information may be lost. Accordingly, the information of the target object in the real stereoscopic space cannot be completely known only by using the second image. Therefore, the target coordinates of the target object in the world coordinate system cannot be calculated only by the first image, and the target coordinates of the target object in the world coordinate system cannot be calculated only by the second image. However, by combining the information of the target object in the real stereo space provided by the first image and the second image, some information not carried in the first image and the second image, such as depth information, can be deduced. Thereby, the target coordinates of the target object in the world coordinate system can be calculated. Therefore, in the embodiment of the present application, after the first image and the second image are acquired, the target coordinates of the target object in the world coordinate system can be determined by using the first image and the second image.
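To make the depth-ambiguity argument above concrete, the following sketch back-projects a single pixel under a simplified pinhole model: every depth along the resulting ray maps to the same pixel, which is why one image alone cannot fix the target coordinates. The intrinsic values and the pixel chosen are assumptions for illustration, not data from the patent.

```python
import numpy as np

# Assumed intrinsic parameters of a hypothetical monocular camera:
# fx = f/dx and fy = f/dy are focal lengths in pixels, (u0, v0) is the image center.
fx, fy, u0, v0 = 600.0, 600.0, 320.0, 240.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

def back_project(u: float, v: float, depth: float) -> np.ndarray:
    """3D point, in the camera coordinate system, that projects to pixel (u, v)
    at the given depth."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return depth * ray

# The same pixel is consistent with infinitely many depths:
for depth in (1.0, 2.0, 5.0):
    point = back_project(400.0, 300.0, depth)
    reprojected = (K @ point) / point[2]
    print(depth, point, reprojected[:2])  # the re-projected pixel is always (400, 300)
```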
S102: and determining the target coordinates according to the conversion relation between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system and the conversion relation between the second coordinates of the target object in the second image and the target coordinates.
It can be understood that, after the first image is acquired, the position of the target object in the first image is determined; specifically, the position of the target object in the first image may be represented by the first coordinate of the target object in the image coordinate system corresponding to the first image. Accordingly, after the second image is acquired, the position of the target object in the second image is determined; specifically, it may be represented by the second coordinate of the target object in the image coordinate system corresponding to the second image.
It can be understood that, for a target object in a real stereoscopic space, it may correspond to a display area on the first image, in other words, the target object in the first image may include a plurality of pixel points, and the first coordinate mentioned in this embodiment may be a coordinate of a pixel point corresponding to any point on the target object included in the first image, for example, the first coordinate may be a coordinate of a pixel point corresponding to a center point of the target object. Similarly, the target object in the second image may include a plurality of pixel points, and the second coordinate mentioned in this embodiment may be a coordinate of a pixel point corresponding to any point on the target object included in the second image, for example, the second coordinate may be a coordinate of a pixel point corresponding to a central point of the target object.
It can be understood that each pixel point in the image coordinate system and the object in the real three-dimensional space corresponding to the pixel point have a certain conversion relationship. In other words, the first coordinate of the target object in the first image and the target coordinate of the target object in the world coordinate system have a certain conversion relationship. Correspondingly, a certain conversion relationship is also provided between the second coordinate of the target object in the second image and the target coordinate of the target object in the world coordinate system.
Therefore, in the embodiment of the present application, the target coordinates of the target object in the world coordinate system may be determined based on the conversion relationship between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system, and the conversion relationship between the second coordinates of the target object in the second image and the target coordinates.
As can be seen from the above description, since the first coordinates of the target object in the first image are accurate and the second coordinates of the target object in the second image are also accurate, the conversion relationship between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system and the conversion relationship between the second coordinates of the target object in the second image and the target coordinates are accurate. Therefore, the target coordinates of the calculated target object in the world coordinate system can be considered to be accurate. Therefore, by the scheme provided by the embodiment of the application, the target coordinate of the target object in the world coordinate system can be accurately determined, and further, the performance of the intelligent mobile device can be improved.
A specific implementation of S102 is described below with reference to fig. 2. Fig. 2 is a schematic flowchart of a method for determining target coordinates according to an embodiment of the present disclosure. The method shown in fig. 2 can be implemented, for example, by the following steps S201 to S203.
As described above, in the specific implementation of S102, the "conversion relationship between each pixel point in the image coordinate system and the object in the real stereo space corresponding to the pixel point" is used, and therefore, before describing S201 to S203, an equation representing the "conversion relationship between each pixel point in the image coordinate system and the object in the real stereo space corresponding to the pixel point" is described first. See the following equation (1).
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_A \\ Y_A \\ Z_A \\ 1 \end{bmatrix} \tag{1}$$

In equation (1):
(u, v) are the coordinates, in the image coordinate system, of the pixel point that corresponds to a point A of the world coordinate system in an image containing point A;
(X_A, Y_A, Z_A) are the coordinates of point A in the world coordinate system;
(u_0, v_0) are the coordinates of the image center point in the image coordinate system;
0^T is (0, 0, 0);
R is the rotation matrix of the transformation between the camera coordinate system of the monocular camera and the world coordinate system, and is a 3 x 3 matrix;
t is the translation matrix of the transformation between the camera coordinate system of the monocular camera and the world coordinate system, and is a 3 x 1 matrix;
(dx, dy) are the physical dimensions of a single pixel on the photosensitive element of the monocular camera;
f is the focal length of the monocular camera;
Z_c is the depth of point A in the camera coordinate system, i.e. the scale factor of the projection.
It can be understood that, since

$$\begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 & 0 \\ 0 & \frac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

formula (1) can be further rewritten to obtain the following formula (2):

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 & 0 \\ 0 & \frac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_A \\ Y_A \\ Z_A \\ 1 \end{bmatrix} \tag{2}$$

In formula (2), the matrix

$$\begin{bmatrix} \frac{f}{dx} & 0 & u_0 & 0 \\ 0 & \frac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

is also referred to as the internal reference matrix of the monocular camera; once the monocular camera is fixed, this matrix is a constant. Therefore, in the following description of the embodiments of the present application, the internal reference matrix is denoted as C:

$$C = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 & 0 \\ 0 & \frac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$
In formula (2), the matrix

$$\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is the external reference matrix of the monocular camera. Unlike the internal reference matrix C, the external reference matrix is not a constant; its value is determined according to the relative position relationship between the shooting position of the monocular camera and the origin of the camera coordinate system of the monocular camera. Specifically, this relative position relationship can be represented by the aforementioned rotation matrix R and translation matrix t.
S201: and determining a first equation reflecting the functional relation between the first coordinate and the target coordinate according to the first coordinate, the internal reference matrix of the monocular camera and the first external reference matrix of the monocular camera.
It should be noted that the first process may embody a conversion relationship between the first coordinate of the target object in the first image and the target coordinate of the target object in the world coordinate system. First seat is marked by (u)1,v1) The target coordinate of the target object in the world coordinate system is (X)C,YC,ZC) Then, as can be seen from the above equation (2), the first equation can be embodied as the following equation (3).
Figure BDA0002151112840000111
Wherein:
Figure BDA0002151112840000112
a first external reference matrix determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera.
S202: and determining a second equation reflecting the functional relationship between the second coordinate and the target coordinate according to the second coordinate, the internal reference matrix of the monocular camera and the second external reference matrix of the monocular camera.
It should be noted that the second equation may embody a conversion relationship between the second coordinate of the target object in the second image and the target coordinate of the target object in the world coordinate system. Second seat is marked by (u)2,v2) Then, as can be seen from the above equation (2), the second equation can be embodied as the following equation (4).
Figure BDA0002151112840000113
Wherein:
Figure BDA0002151112840000114
a second external reference matrix determined according to a relative position relationship between the second position and an origin of a camera coordinate system of the monocular camera.
S203: and determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
After the first equation and the second equation are determined, the first equation and the second equation can be combined, and the target coordinate (X) of the target object in the world coordinate system is calculatedC,YC,ZC)。
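A minimal sketch of S203, assuming the combined system is solved linearly: once the unknown depth scales Z_c1 and Z_c2 are eliminated, each of formulas (3) and (4) contributes two linear constraints on (X_C, Y_C, Z_C), and the stacked system can be solved by least squares. The function and variable names are illustrative and do not come from the patent.

```python
import numpy as np

def solve_target_coordinate(C, E1, E2, first_coordinate, second_coordinate):
    """Recover (X_C, Y_C, Z_C) from formulas (3) and (4).

    C                 : 3x4 internal reference matrix
    E1, E2            : 4x4 first and second external reference matrices
    first_coordinate  : (u1, v1), the first coordinate of the target object
    second_coordinate : (u2, v2), the second coordinate of the target object
    """
    P1, P2 = C @ E1, C @ E2              # 3x4 projection matrices of the two views
    rows = []
    for (u, v), P in ((first_coordinate, P1), (second_coordinate, P2)):
        # Eliminating the depth scale from Z_c*(u, v, 1)^T = P*(X, Y, Z, 1)^T
        # leaves two linear equations per view:
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)                 # homogeneous system A * (X, Y, Z, 1)^T = 0
    _, _, vt = np.linalg.svd(A)          # its null vector is the last right singular vector
    X = vt[-1]
    return X[:3] / X[3]                  # de-homogenize to obtain (X_C, Y_C, Z_C)
```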
As described above, the first external reference matrix is determined from the relative positional relationship between the first position and the origin of the camera coordinate system of the monocular camera, and the second external reference matrix is determined from the relative positional relationship between the second position and the origin of the camera coordinate system of the monocular camera. It can be understood that, when the formula (3) and the formula (4) are used for calculation, the complexity of the first external reference matrix and the complexity of the second external reference matrix affect the calculation complexity of the target coordinate obtained by calculation to some extent.
In an implementation manner of the embodiment of the present application, in order to reduce the computational complexity of calculating the target coordinates, the camera coordinate system of the monocular camera may be constructed with the first position as its origin. In this way, the first external reference matrix is relatively simple, that is, its complexity is reduced, thereby reducing the computational complexity of calculating the target coordinates.
In addition, the value of the second external reference matrix is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera, and this relative position relationship can be represented by the rotation matrix R and the translation matrix t. In practical applications, if the monocular camera rotates around a coordinate axis of its own camera coordinate system, then after the camera rotates from the first position to the second position, the translation amount along the coordinate axis corresponding to the preset rotation axis is 0. Therefore, when the preset rotation axis is a coordinate axis of the camera coordinate system, the complexity of the second external reference matrix can be reduced. Hence, in an implementation manner of the embodiment of the present application, the aforementioned preset rotation axis may be the X axis, the Y axis, or the Z axis of the camera coordinate system of the monocular camera. In this way, the complexity of the second external reference matrix is reduced, which in turn reduces the computational complexity of calculating the target coordinates.
A method for obtaining the target coordinates by calculation will be described below by taking the first position as the origin of the camera coordinate system of the monocular camera and the preset rotation axis as the Y axis of the camera coordinate system of the monocular camera as an example.
When the monocular camera is at the first position, the rotation matrix is

$$R_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

and the translation matrix is

$$t_1 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$

Substituting R_1 and t_1 into formula (3) yields the following formula (5):

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = C \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{5}$$

After the monocular camera rotates around the Y axis by the preset angle from the first position to the second position, let the distance between the first position and the second position be L and the preset angle be θ. The rotation matrix is then

$$R_2 = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$$

and the translation matrix is

$$t_2 = \begin{bmatrix} t_x \\ 0 \\ t_z \end{bmatrix}, \qquad t_x^2 + t_z^2 = L^2,$$

whose Y component is 0 because the rotation is around the Y axis, and whose X and Z components are determined by the preset angle θ and the distance L. Substituting R_2 and t_2 into formula (4) yields the following formula (6):

$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = C \begin{bmatrix} R_2 & t_2 \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{6}$$

By combining formula (5) and formula (6), the target coordinate (X_C, Y_C, Z_C) of the target object in the world coordinate system can be obtained through calculation.
It should be noted that, when the preset rotation axis is the X axis or the Z axis of the camera coordinate system of the monocular camera, the specific values of the corresponding first external reference matrix and second external reference matrix may be obtained in the same way from the conversion relationship between the image coordinate system and the camera coordinate system, and the description is not repeated here.
Exemplary device
Based on the position determination method provided by the above embodiment, the embodiment of the present application further provides a position determination device, which is described below with reference to the accompanying drawings.
Referring to fig. 3, the figure is a schematic structural diagram of a position determination apparatus according to an embodiment of the present application.
The illustrated position determining apparatus 300 may include, for example: an acquisition unit 301 and a determination unit 302.
An acquiring unit 301, configured to acquire a first image including a target object captured when a monocular camera is located at a first position, and acquire a second image including the target object captured after the monocular camera rotates around a preset rotation axis by a preset angle at the first position;
a determining unit 302, configured to determine the target coordinate according to a conversion relationship between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinate.
Optionally, the determining unit 302 includes:
the first determining subunit is used for determining a first equation reflecting the functional relationship between the first coordinate and the target coordinate according to the first coordinate, the internal reference matrix of the monocular camera and the first external reference matrix of the monocular camera; wherein the first external reference matrix of the monocular camera is determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera;
a second determining subunit, configured to determine, according to the second coordinate, the internal reference matrix of the monocular camera, and the second external reference matrix of the monocular camera, a second equation that represents a functional relationship between the second coordinate and the target coordinate; the second external reference matrix of the monocular camera is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera; the second position is a position where the monocular camera is located after rotating around the preset rotating shaft by a preset angle at the first position;
and the determining subunit is used for determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
Optionally, the first position is an origin of a camera coordinate system of the monocular camera.
Optionally, the preset rotation axis is an X axis, a Y axis, or a Z axis of a camera coordinate system of the monocular camera.
Since the apparatus 300 is an apparatus corresponding to the method provided in the above method embodiment, and the specific implementation of each unit of the apparatus 300 is the same as that of the above method embodiment, for the specific implementation of each unit of the apparatus 300, reference may be made to the description part of the above method embodiment, and details are not repeated here.
As can be seen from the above description, since the first coordinates of the target object in the first image are accurate and the second coordinates of the target object in the second image are also accurate, the conversion relationship between the first coordinates of the target object in the first image and the target coordinates of the target object in the world coordinate system and the conversion relationship between the second coordinates of the target object in the second image and the target coordinates are accurate. Therefore, the target coordinates of the calculated target object in the world coordinate system can be considered to be accurate. Therefore, by the scheme provided by the embodiment of the application, the target coordinate of the target object in the world coordinate system can be accurately determined, and further, the performance of the intelligent mobile device can be improved.
An embodiment of the present application further provides a device for determining a location, where the device includes: a processor and a memory;
the memory to store instructions;
the processor, configured to execute the instructions in the memory, to perform the position determination method of any of the above method embodiments.
An embodiment of the present application further provides an intelligent mobile device, where the intelligent mobile device includes: the monocular camera comprises a monocular camera and a rotation control mechanism, wherein the rotation control mechanism is used for controlling the monocular camera to rotate for a preset angle around a preset rotating shaft.
It will be appreciated that, when the target coordinates of a target object in the world coordinate system need to be determined, the monocular camera may first be controlled to capture a first image including the target object while the monocular camera is at the first position. The rotation control mechanism may then be used to control the monocular camera to rotate around the preset rotation axis by the preset angle, and a second image including the target object is captured after the monocular camera has rotated by the preset angle at the first position. The position determination method provided in the above method embodiments may then be performed to determine the target coordinates of the target object in the world coordinate system.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the attached claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (7)

1. A method of position determination, the method comprising:
the method comprises the steps of acquiring a first image which is shot by a monocular camera and comprises a target object when the monocular camera is located at a first position, and acquiring a second image which is shot by the monocular camera and comprises the target object after the monocular camera rotates around a preset rotating shaft at the first position for a preset angle; the preset angle is determined according to the shooting visual angle of the monocular camera;
determining the target coordinate according to a conversion relation between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system and a conversion relation between a second coordinate of the target object in the second image and the target coordinate;
the determining the target coordinates according to a conversion relationship between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system and a conversion relationship between a second coordinate of the target object in the second image and the target coordinates comprises:
determining a first equation reflecting a functional relationship between the first coordinate and the target coordinate according to the first coordinate, an internal reference matrix of the monocular camera and a first external reference matrix of the monocular camera; wherein the first external reference matrix of the monocular camera is determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera;
determining a second equation reflecting a functional relationship between the second coordinate and the target coordinate according to the second coordinate, the internal reference matrix of the monocular camera and the second external reference matrix of the monocular camera; the second external reference matrix of the monocular camera is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera; the second position is a position where the monocular camera is located after rotating around the preset rotating shaft by a preset angle at the first position;
and determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
2. The method of claim 1, wherein the first location is an origin of a camera coordinate system of the monocular camera.
3. The method according to any one of claims 1-2, wherein the predetermined rotation axis is an X-axis, a Y-axis or a Z-axis of a camera coordinate system of the monocular camera.
4. A position determining apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a first image which is shot by a monocular camera when the monocular camera is positioned at a first position and comprises a target object, and acquiring a second image which is shot by the monocular camera after the monocular camera rotates around a preset rotating shaft at the first position by a preset angle and comprises the target object; the preset angle is determined according to the shooting visual angle of the monocular camera;
a determination unit configured to determine the target coordinate according to a conversion relationship between a first coordinate of the target object in the first image and a target coordinate of the target object in a world coordinate system, and a conversion relationship between a second coordinate of the target object in the second image and the target coordinate;
the determination unit includes:
the first determining subunit is used for determining a first equation reflecting the functional relationship between the first coordinate and the target coordinate according to the first coordinate, the internal reference matrix of the monocular camera and the first external reference matrix of the monocular camera; wherein the first external reference matrix of the monocular camera is determined according to a relative position relationship between the first position and an origin of a camera coordinate system of the monocular camera;
a second determining subunit, configured to determine, according to the second coordinate, the internal reference matrix of the monocular camera, and the second external reference matrix of the monocular camera, a second equation that represents a functional relationship between the second coordinate and the target coordinate; the second external reference matrix of the monocular camera is determined according to the relative position relationship between the second position and the origin of the camera coordinate system of the monocular camera; the second position is a position where the monocular camera is located after rotating around the preset rotating shaft by a preset angle at the first position;
and the determining subunit is used for determining the target coordinates of the target object in the world coordinate system according to the first equation and the second equation.
5. The apparatus of claim 4, wherein the first location is an origin of the monocular camera coordinate system.
6. The apparatus according to any one of claims 4-5, wherein the predetermined rotation axis is an X-axis, a Y-axis or a Z-axis of a camera coordinate system of the monocular camera.
7. A position determining device, the device comprising: a processor and a memory;
the memory to store instructions;
the processor, configured to execute the instructions in the memory, to perform the method of any of claims 1-3.
CN201910702089.9A 2019-07-31 2019-07-31 Position determination method, device and equipment Active CN110332930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910702089.9A CN110332930B (en) 2019-07-31 2019-07-31 Position determination method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910702089.9A CN110332930B (en) 2019-07-31 2019-07-31 Position determination method, device and equipment

Publications (2)

Publication Number Publication Date
CN110332930A CN110332930A (en) 2019-10-15
CN110332930B true CN110332930B (en) 2021-09-17

Family

ID=68148152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910702089.9A Active CN110332930B (en) 2019-07-31 2019-07-31 Position determination method, device and equipment

Country Status (1)

Country Link
CN (1) CN110332930B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113167577A (en) * 2020-06-22 2021-07-23 深圳市大疆创新科技有限公司 Surveying method for a movable platform, movable platform and storage medium
CN114010104A (en) * 2021-11-01 2022-02-08 普联技术有限公司 Statistical method and statistical device for cleaning area
CN114608555A (en) * 2022-02-28 2022-06-10 珠海云洲智能科技股份有限公司 Target positioning method, system and storage medium
CN115802159B (en) * 2023-02-01 2023-04-28 北京蓝色星际科技股份有限公司 Information display method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
CN101932906A (en) * 2008-02-12 2010-12-29 特林布尔公司 Localization of a surveying instrument in relation to a ground mark
CN103020957A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Mobile-robot-carried camera position calibration method
CN103810475A (en) * 2014-02-19 2014-05-21 百度在线网络技术(北京)有限公司 Target object recognition method and apparatus
CN105340258A (en) * 2013-06-28 2016-02-17 夏普株式会社 Location detection device
DE102018200154A1 (en) * 2017-01-12 2018-07-12 Fanuc Corporation Calibration device, calibration method and program for a visual sensor
CN109272570A (en) * 2018-08-16 2019-01-25 合肥工业大学 A kind of spatial point three-dimensional coordinate method for solving based on stereoscopic vision mathematical model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103528571B (en) * 2013-10-12 2016-04-06 上海新跃仪表厂 Single eye stereo vision relative pose measuring method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101932906A (en) * 2008-02-12 2010-12-29 特林布尔公司 Localization of a surveying instrument in relation to a ground mark
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
CN103020957A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Mobile-robot-carried camera position calibration method
CN105340258A (en) * 2013-06-28 2016-02-17 夏普株式会社 Location detection device
CN103810475A (en) * 2014-02-19 2014-05-21 百度在线网络技术(北京)有限公司 Target object recognition method and apparatus
DE102018200154A1 (en) * 2017-01-12 2018-07-12 Fanuc Corporation Calibration device, calibration method and program for a visual sensor
CN109272570A (en) * 2018-08-16 2019-01-25 合肥工业大学 A kind of spatial point three-dimensional coordinate method for solving based on stereoscopic vision mathematical model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of binocular vision to distance measurement for humanoid robots; Yuan Quan et al.; Journal of Wuhan Institute of Technology; 2017-04-15 (No. 02); 193-198 *

Also Published As

Publication number Publication date
CN110332930A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110332930B (en) Position determination method, device and equipment
US10984554B2 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
CN106529495B (en) Obstacle detection method and device for aircraft
CN107705333B (en) Space positioning method and device based on binocular camera
CN107113376B (en) A kind of image processing method, device and video camera
JP6443700B2 (en) Method, apparatus, and system for obtaining antenna configuration parameters
CN113074733A (en) Flight trajectory generation method, control device and unmanned aerial vehicle
WO2020014987A1 (en) Mobile robot control method and apparatus, device, and storage medium
US10841570B2 (en) Calibration device and method of operating the same
CN110176032A (en) A kind of three-dimensional rebuilding method and device
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN105635555A (en) Camera focusing control method, image pick-up device and wearable intelligent terminal
WO2015112647A1 (en) Object oriented image processing and rendering in a multi-dimensional space
WO2020063058A1 (en) Calibration method for multi-degree-of-freedom movable vision system
CN106060527A (en) Method and apparatus for extending locating range of binocular camera
JP2005256232A (en) Method, apparatus and program for displaying 3d data
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN110825079A (en) Map construction method and device
US10388069B2 (en) Methods and systems for light field augmented reality/virtual reality on mobile devices
CN111316325A (en) Shooting device parameter calibration method, equipment and storage medium
CN113034347B (en) Oblique photography image processing method, device, processing equipment and storage medium
CN109451216B (en) Display processing method and device for shot photos
CN111699453A (en) Control method, device and equipment of movable platform and storage medium
CN112254653B (en) Program control method for 3D information acquisition
CN110675445B (en) Visual positioning method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 7-605, 6th floor, building 1, yard a, Guanghua Road, Chaoyang District, Beijing 100026

Patentee after: Beijing Puppy Vacuum Cleaner Group Co., Ltd.

Address before: 7-605, 6th floor, building 1, yard a, Guanghua Road, Chaoyang District, Beijing 100026

Patentee before: PUPPY ELECTRONIC APPLIANCES INTERNET TECHNOLOGY (BEIJING) Co.,Ltd.