CN110675445B - Visual positioning method, device and storage medium - Google Patents

Visual positioning method, device and storage medium

Info

Publication number
CN110675445B
Authority
CN
China
Prior art keywords
camera
angle
inclination angle
determining
azimuth angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910924517.2A
Other languages
Chinese (zh)
Other versions
CN110675445A (en)
Inventor
陈海波 (Chen Haibo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Intelligent Technology Shanghai Co ltd
Original Assignee
Deep Blue Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Blue Technology Shanghai Co Ltd
Priority to CN201910924517.2A
Publication of CN110675445A
Application granted
Publication of CN110675445B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Abstract

The application discloses a visual positioning method, a visual positioning device and a storage medium, relating to the field of positioning technologies and aiming to solve the problem that visual positioning methods in the prior art are complex to operate. In the method, two cameras are rotated to determine the azimuth angle and the inclination angle at which the object to be positioned lies at the center of each camera's image plane, and the coordinates of the object to be positioned are determined from the positions of the two cameras and the determined azimuth and inclination angles. The coordinates of the object to be positioned can thus be determined from the camera positions and rotation angles, which simplifies the positioning operation; meanwhile, because the object to be positioned is located at the center of the image plane, deformation is avoided, and positioning accuracy is further improved.

Description

Visual positioning method, device and storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a visual positioning method, apparatus, and storage medium.
Background
Computer vision-based visual positioning is a positioning method developed in recent years, which uses a visual sensor to acquire an image of an object, and then uses a computer to perform image processing, thereby obtaining position information of the object. Currently, a monocular vision positioning method or a binocular vision positioning method is generally adopted to position an object to be positioned.
In monocular visual positioning, a target is positioned by a single camera. Because the method uses the camera's internal angle and distance parameters for calculation, the internal parameters must be accurately calibrated and the camera must be fixed precisely in one position; any change in camera position will cause positioning to fail. It is therefore not well suited to engineering practice.
Binocular vision positioning is a method, based on the parallax principle, of acquiring two images of the object to be measured from different positions with imaging equipment and obtaining three-dimensional geometric information of the object by calculating the position deviation between corresponding points of the images. In practical applications, all physical parameters of the two cameras must be completely consistent so that the comparison can be carried out.
Therefore, both visual positioning methods are complex to operate in actual use.
Disclosure of Invention
The embodiment of the application provides a visual positioning method, a visual positioning device and a storage medium, which are used for solving the problem that the operation of the visual positioning method in the prior art is complex.
In a first aspect, an embodiment of the present application provides a visual positioning method, including:
determining an object to be positioned in the first camera and the object to be positioned in the second camera simultaneously through an image recognition technology;
rotating the first camera, and determining a first azimuth angle and a first inclination angle of the object to be positioned at the image surface center position of the first camera; and
rotating the second camera to determine a second azimuth angle and a second inclination angle of the object to be positioned at the image surface center position of the second camera;
and determining the coordinates of the object to be positioned in a space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera and the position of the second camera.
In one embodiment, the rotating the first camera to determine a first azimuth angle and a first inclination angle of the object to be positioned at the image plane center position of the first camera includes:
displaying a first vertical strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
controlling the first camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned when the object to be positioned is displayed in the first vertical strip as the first azimuth angle;
displaying a first horizontal strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
and controlling the first camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
In an embodiment, the rotating the second camera to determine a second azimuth angle and a second inclination angle of the object to be positioned at the image plane center position of the second camera includes:
displaying a second vertical strip passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
controlling the second camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned when the object to be positioned is displayed in the second vertical strip as the second azimuth angle;
displaying a second horizontal strip passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
and controlling the second camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned as the second inclination angle when the object to be positioned is displayed in the second horizontal strip.
In one embodiment, before the rotating the first camera and determining the first azimuth angle and the first inclination angle of the object to be positioned at the image plane center position of the first camera, the method further includes:
mapping the object to be positioned on the image surface of the first camera through a lens of the first camera, and determining a first position of the object to be positioned on the image surface of the first camera;
estimating the first azimuth angle and the first inclination angle according to the first position; and
the object to be positioned is mapped on the image surface of the second camera through the lens of the second camera, and the second position of the object to be positioned on the image surface of the second camera is determined;
and estimating the second azimuth angle and the second inclination angle according to the second position.
In one embodiment, the determining the coordinates of the object to be positioned in the spatial coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera, and the position of the second camera includes:
mapping the first camera, the second camera and the object to be positioned to the space coordinate system, and determining the coordinate of the first camera according to the position of the first camera; determining the coordinates of the second camera according to the position of the second camera;
and determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
In a second aspect, an embodiment of the present application provides a visual positioning apparatus, including:
the object determining module is used for simultaneously determining an object to be positioned in the first camera and the object to be positioned in the second camera through an image recognition technology;
the first angle acquisition module is used for rotating the first camera and determining a first azimuth angle and a first inclination angle of the object to be positioned at the image surface center position of the first camera; and
the second angle acquisition module is used for rotating the second camera and determining a second azimuth angle and a second inclination angle of the object to be positioned at the image surface center position of the second camera;
and the coordinate determining module is used for determining the coordinates of the object to be positioned in a space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera and the position of the second camera.
In one embodiment, the first angle acquisition module includes:
a first vertical stripe display unit, configured to display a first vertical stripe passing through the center position on the image plane of the first camera through a photosensitive chip of the first camera;
a first azimuth angle determining unit, configured to control the first camera to rotate horizontally, and use a horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the first vertical stripe as the first azimuth angle;
a first horizontal strip display unit, used for displaying a first horizontal strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
and the first inclination angle determining unit is used for controlling the first camera to vertically rotate and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
In one embodiment, the second angle acquisition module includes:
a second vertical stripe display unit, configured to display a second vertical stripe passing through a center position on an image plane of the second camera through a photosensitive chip of the second camera;
a second azimuth angle determining unit, configured to control the second camera to rotate horizontally, and use a horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the second vertical stripe as the second azimuth angle;
the second horizontal stripe display unit is used for displaying a second horizontal stripe passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
and the second inclination angle determining unit is used for controlling the second camera to vertically rotate and taking a corresponding vertical rotation angle of the object to be positioned as the second inclination angle when the object to be positioned is displayed in the second horizontal strip.
In one embodiment, the apparatus further comprises:
a first position determining module, configured to, before the first angle acquisition module rotates the first camera and determines the first azimuth angle and the first inclination angle of the object to be positioned at the image plane center position of the first camera, map the object to be positioned onto the image plane of the first camera through a lens of the first camera, and determine a first position of the object to be positioned on the image plane of the first camera;
the first estimation module is used for estimating the first azimuth angle and the first inclination angle according to the first position; and
a second position determining module, configured to map the object to be positioned on the image plane of the second camera through a lens of the second camera, and determine a second position of the object to be positioned on the image plane of the second camera;
and the second estimation module is used for estimating the second azimuth angle and the second inclination angle according to the second position.
In one embodiment, the coordinate determining module includes:
the mapping unit is used for mapping the first camera, the second camera and the object to be positioned to the space coordinate system and determining the coordinate of the first camera according to the position of the first camera; determining the coordinates of the second camera according to the position of the second camera;
and the object coordinate determining unit is used for determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
In a third aspect, another embodiment of the present application further provides a computing device comprising at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute a visual positioning method provided by the embodiments of the present application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions for causing a computer to execute a visual positioning method in an embodiment of the present application.
According to the visual positioning method, the visual positioning device and the storage medium, the two cameras are rotated to determine the azimuth angle and the inclination angle at which the object to be positioned is located at the image plane center of each camera, and the coordinates of the object to be positioned are determined from the determined positions, azimuth angles and inclination angles of the two cameras. The coordinates of the object to be positioned can thus be determined from the camera positions and rotation angles, which simplifies the positioning operation; meanwhile, because the object to be positioned is located at the center of the image surface, deformation is avoided, and the positioning accuracy is further improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of the imaging principle of binocular vision positioning in the prior art;
FIG. 2 is a schematic flow chart of a visual positioning method in an embodiment of the present application;
FIG. 3 is a first schematic diagram of an image displayed on the image plane of the first camera in an embodiment of the present application;
FIG. 4 is a second schematic diagram of an image displayed on the image plane of the first camera in an embodiment of the present application;
FIG. 5 is a schematic diagram of the position of an object to be positioned on the image plane in an embodiment of the present application;
FIG. 6 is a schematic diagram of a camera and an object to be positioned mapped in a spatial coordinate system in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a visual positioning apparatus in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to solve the problem that the operation of the visual positioning method in the prior art is complex, embodiments of the present application provide a visual positioning method, an apparatus, and a storage medium. In order to better understand the technical solution provided by the embodiments of the present application, the following brief description is made on the basic principle of the solution:
visual positioning is a positioning method in which a visual sensor is used to obtain an image of an object, and then a computer is used to perform image processing, thereby obtaining position information of the object. In the prior art, two methods, monocular vision positioning and binocular vision positioning, are usually adopted to position the object to be positioned.
Monocular vision positioning positions an object based on the pinhole imaging principle. Because the method uses the camera's internal angle and distance parameters for calculation, the internal parameters of each camera need to be accurately calibrated. In addition, the method needs a complex algorithm to correct the acquired data, so accurate positioning information is difficult to obtain. Some algorithms also require external reference dimensions, which demand that the camera be fixed precisely in one place; any change in the camera's position (e.g., angle, height, etc.) will cause positioning to fail. The method is therefore not well suited to engineering practice.
Binocular vision positioning locates an object based on the parallax principle: two images of the object to be positioned are acquired from different positions with imaging equipment, and the three-dimensional geometric information of the object is obtained by calculating the position deviation between corresponding points of the images. Fig. 1 shows the imaging principle of binocular vision positioning. The left and right images represent the image planes of the two cameras, and point P is the object to be positioned. The position of point P is determined by the positions of its projected points on the two image planes. In practical applications, however, binocular vision positioning has the following problems: first, all physical parameters of the two cameras must be completely consistent so that the comparison can be carried out; second, positioning is inaccurate because it is difficult for the two cameras to aim at the same feature point of the object.
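For reference, in the ideal rectified case of fig. 1 (two identical, parallel cameras with baseline b and focal length f), the parallax principle reduces to the standard textbook depth relation, which the present description does not state explicitly:

$$Z = \frac{f\,b}{d}, \qquad d = x_L - x_R$$

where $x_L$ and $x_R$ are the horizontal image coordinates of the projections of point P on the left and right image planes, and d is the disparity between them.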
Therefore, both of the above positioning methods suffer from complicated operation. In addition, both also suffer from positioning error caused by the deformation introduced by the camera's spherical lens. In view of this, in the visual positioning method, apparatus and storage medium provided in the embodiments of the present application, the two cameras are rotated to determine the azimuth angle and the inclination angle at which the object to be positioned is located at the image plane center of each camera, and the coordinates of the object to be positioned are determined from the determined positions, azimuth angles and inclination angles of the two cameras. The coordinates of the object to be positioned can thus be determined from the camera positions and rotation angles; the method neither relies on camera internal parameters as monocular vision positioning does, nor needs to aim precisely at the same feature point as existing binocular vision positioning does, thereby simplifying the positioning operation. Meanwhile, because the object to be positioned is located at the center of the image plane, deformation is avoided, and positioning accuracy is further improved.
For the convenience of understanding, the technical solutions provided in the present application are further described below with reference to the accompanying drawings. Fig. 2 is a schematic flow chart of visual positioning, which includes the following steps:
step 201: and simultaneously determining the object to be positioned in the first camera and the object to be positioned in the second camera by an image recognition technology.
The first camera and the second camera are located in the same space, and the distance between them can be determined according to actual conditions.
Each camera may be a normal visible-light camera or a thermal imaging camera; the present application does not limit this.
Step 202: and rotating the first camera to determine a first azimuth angle and a first inclination angle of the object to be positioned at the image surface center position of the first camera.
In the embodiment of the application, the first camera and the second camera are each provided with a horizontal rotation motor and a longitudinal rotation motor, so they can rotate both horizontally and vertically. Each camera is also fitted with an angle sensor for recording its rotation angle.
Step 203: and rotating the second camera to determine a second azimuth angle and a second inclination angle of the object to be positioned at the image surface center position of the second camera.
Step 204: and determining the coordinates of the object to be positioned in a space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera and the position of the second camera.
The position of each camera is obtained by measurement at installation time, for example: mounting height, distance between the cameras, etc.
It should be noted that the execution order of step 202 and step 203 is not limited.
Therefore, the coordinates of the object to be positioned can be determined from the camera positions and rotation angles, which simplifies the positioning operation; meanwhile, because the object to be positioned is located at the center of the image surface, deformation is avoided, and the positioning accuracy is further improved.
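The flow of steps 201-204 can be summarized in a short sketch. This is a minimal illustration only: the camera interface (capture, center_on, position) and the helper functions are names assumed for exposition, not an implementation from this application. Python is used here and in the sketches below.

```python
def visual_locate(cam1, cam2, detect, triangulate):
    """Sketch of steps 201-204: recognize the object in both cameras,
    center it on each image plane by rotating, then triangulate."""
    # Step 201: identify the same object to be positioned in both images.
    obj1 = detect(cam1.capture())
    obj2 = detect(cam2.capture())
    # Steps 202-203 (in either order): rotate each camera until the object
    # sits at the center of its image plane; the angle sensor reports the
    # azimuth (horizontal rotation) and inclination (vertical rotation).
    azimuth1, inclination1 = cam1.center_on(obj1)
    azimuth2, inclination2 = cam2.center_on(obj2)
    # Step 204: solve for the spatial coordinates from the two angle pairs
    # and the camera positions measured at installation time.
    return triangulate(cam1.position, (azimuth1, inclination1),
                       cam2.position, (azimuth2, inclination2))
```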
In order to obtain the first azimuth angle and the first inclination angle of the first camera when the object to be positioned is at the image plane center position of the first camera, step 202 may specifically be implemented as steps a1-a 4:
step A1: and displaying a first vertical strip passing through the central position on the image surface of the first camera through the photosensitive chip of the first camera.
Fig. 3 shows the image displayed on the image plane of the first camera. The gray area is a shielded area; the white area is the displayed first vertical strip; the "+" in the first vertical strip in fig. 3 marks the center position of the image plane of the first camera.
Step A2: and controlling the first camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned when the object to be positioned is displayed in the first vertical strip as the first azimuth angle.
In the embodiment of the application, the first camera is controlled to rotate horizontally through the horizontal rotating motor, and when an object to be positioned appears in the first vertical strip, the first azimuth angle of the first camera is determined through the angle sensor.
Step A3: and displaying a first horizontal strip passing through the center position on the image surface of the first camera through the photosensitive chip of the first camera.
Fig. 4 shows the image displayed on the image plane of the first camera. The gray area is a shielded area; the white area is the displayed first horizontal strip; the "+" in the first horizontal strip in fig. 4 marks the center position of the image plane of the first camera.
Step A4: and controlling the first camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
In the embodiment of the application, the first camera is controlled to rotate vertically through the longitudinal rotation motor, and when the object to be positioned appears in the first horizontal strip, the first inclination angle of the first camera is determined through the angle sensor.
In the embodiment of the present application, the first horizontal stripe passing through the center position may be displayed first, and after the first inclination angle is determined, the first vertical stripe passing through the center position may be displayed, so as to determine the first azimuth angle.
By rotating the first camera through the above steps, the object to be positioned is brought to the center position of the image surface of the first camera, and the first azimuth angle and the first inclination angle at that moment are determined. Because the object to be positioned is located at the center of the image surface, deformation is avoided and positioning accuracy is improved; meanwhile, the position of the object to be positioned can be determined from the first azimuth angle, the first inclination angle and the other parameters, which simplifies the positioning operation.
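Steps A1-A4 amount to a simple search loop per axis. A sketch follows, assuming a hypothetical pan-tilt camera interface (show_strip, object_in_strip, rotate and the angle-sensor readouts are all illustrative names):

```python
def center_object(camera, step_deg=0.1):
    """Steps A1-A4: rotate until the object lies in the vertical strip,
    then in the horizontal strip; return the sensor angles at those moments."""
    # Steps A1-A2: display the vertical strip through the image-plane center
    # and rotate horizontally until the object appears inside it.
    camera.show_strip(orientation="vertical")
    while not camera.object_in_strip():
        camera.rotate(horizontal_deg=step_deg)
    first_azimuth = camera.angle_sensor.horizontal_angle()
    # Steps A3-A4: display the horizontal strip through the center and
    # rotate vertically until the object appears inside it.
    camera.show_strip(orientation="horizontal")
    while not camera.object_in_strip():
        camera.rotate(vertical_deg=step_deg)
    first_inclination = camera.angle_sensor.vertical_angle()
    return first_azimuth, first_inclination
```

The same routine applied to the second camera yields the second azimuth angle and second inclination angle (steps B1-B4).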
Similarly, in order to obtain the second azimuth angle and the second inclination angle of the second camera when the object to be positioned is at the image plane center position of the second camera, step 203 may be specifically implemented as steps B1-B4:
step B1: and displaying a second vertical strip passing through the central position on the image surface of the second camera through the photosensitive chip of the second camera.
Step B2: and controlling the second camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned when the object to be positioned is displayed in the second vertical strip as the second azimuth angle.
Step B3: and displaying a second horizontal strip passing through the central position on the image surface of the second camera through the photosensitive chip of the second camera.
Step B4: and controlling the second camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the second horizontal strip as the second inclination angle.
And rotating the second camera through the steps to enable the object to be positioned at the center position of the image surface of the second camera, and determining a second azimuth angle and a second inclination angle at the moment. Therefore, the object to be positioned is positioned at the center of the image surface, so that deformation is avoided, and the positioning accuracy is further improved; meanwhile, the position of the object to be positioned can be determined through the second azimuth angle, the second inclination angle and other parameters, and the positioning operation is simplified.
In order to determine the first azimuth angle, the first inclination angle, the second azimuth angle and the second inclination angle more quickly, in the embodiment of the present application these four angles are first estimated before they are measured, which may be specifically implemented as steps C1-C4:
step C1: and mapping the object to be positioned on the image plane of the first camera through the lens of the first camera, and determining the first position of the object to be positioned on the image plane of the first camera.
Step C2: and estimating the first azimuth angle and the first inclination angle according to the first position.
In the embodiment of the present application, a rectangular coordinate system is established with the center position of the image plane as its origin. The object to be positioned is mapped onto the image plane of the camera through the camera's lens according to the pinhole imaging principle, and its position on the image plane is determined. Fig. 5 shows the position of the object to be positioned on the image plane; the azimuth angle and the inclination angle are estimated from the coordinates of that position. For example: if the azimuth angle increases by 5 degrees, the abscissa increases by 1; if the inclination angle increases by 5 degrees, the ordinate increases by 1. Thus, if the coordinates of object A in the rectangular coordinate system of the image plane are (1, 1), the estimated azimuth angle and inclination angle are both 5 degrees.
Step C3: and mapping the object to be positioned on the image surface of the second camera through the lens of the second camera, and determining the second position of the object to be positioned on the image surface of the second camera.
Step C4: and estimating the second azimuth angle and the second inclination angle according to the second position.
In this way, the first azimuth angle, the first inclination angle, the second azimuth angle and the second inclination angle can be estimated in advance; when a camera is rotated, it can be turned directly to the estimated angle and then fine-tuned, so that the azimuth and inclination angles are obtained, and positioning is completed, more quickly.
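The estimate of steps C1-C4 is a direct read-off from the image-plane coordinates. A sketch using the 5-degrees-per-unit scale of the example above (that scale belongs to the example, not to the method in general):

```python
DEG_PER_UNIT = 5.0  # from the example: +1 in abscissa/ordinate ~ +5 degrees

def estimate_angles(x_img, y_img, deg_per_unit=DEG_PER_UNIT):
    """Estimate (azimuth, inclination) from the object's position in the
    rectangular coordinate system centered on the image-plane center."""
    return x_img * deg_per_unit, y_img * deg_per_unit

# Object A at (1, 1) on the image plane -> both angles estimated as 5 degrees.
print(estimate_angles(1, 1))  # (5.0, 5.0)
```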
After the parameters such as the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle and the like are obtained, the obtained parameters are mapped into a spatial coordinate system, and the coordinates of the object to be positioned are determined, which can be specifically implemented as steps D1-D2:
step D1: mapping the first camera, the second camera and the object to be positioned to the space coordinate system, and determining the coordinate of the first camera according to the position of the first camera; and determining the coordinates of the second camera according to the position of the second camera.
The mounting heights of the two cameras can be normalized, the heights of the two cameras in a space coordinate system are determined, and the plane positions of the two cameras in the space coordinate system are determined according to the positions of the two cameras. For example: taking the x and y values of the first camera to be 0, the x and y values of the second camera can be determined according to the distance between the two cameras.
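A sketch of this placement for step D1 (the parameter names height and separation are illustrative):

```python
def map_cameras(height, separation):
    """Step D1 sketch: place A' at the origin and B' on the x axis, with
    both cameras normalized to the same mounting height."""
    cam1 = (0.0, 0.0, height)         # first camera A
    cam2 = (separation, 0.0, height)  # second camera B
    return cam1, cam2
```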
Step D2: and determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
In the embodiment of the present application, for convenience of calculation, as shown in fig. 6, the coordinates of the first camera may be A(x1, y1, z1) and the coordinates of the second camera may be B(x2, y2, z2); the projection of point A on the horizontal plane (the xy plane) is A' (A' is the origin of the spatial coordinate system), and the projection of point B on the horizontal plane is B' (B' lies on the x axis). Point C is the point to which the object to be positioned is mapped in the spatial coordinate system, with coordinates (x0, y0, z0); its projection on the horizontal plane is point C'. Point D is the intersection of the extension of AC with the extension of A'C'.
In the spatial coordinate system, ∠YA'C' is the first azimuth angle, ∠A'AC is the first inclination angle, ∠YB'C' is the second azimuth angle, and ∠B'BC is the second inclination angle. By mapping the acquired parameters, the first camera, the second camera and the object to be positioned into the spatial coordinate system, the problem is converted into a mathematical one: solving for point C.
In this embodiment of the present application, the coordinates of the object to be positioned in the spatial coordinate system are determined according to a trigonometric function formula, the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera, and the coordinates of the second camera.
In specific implementation, for convenience of calculation, let α = 90° − ∠YA'C', β = 90° − ∠YB'C', γ = ∠A'C'B', and θ = ∠A'AC, where α and β are known parameters. Also let l = |A'B'|, m = |A'C'|, n = |C'B'|, z0 = |C'C|, and z1 = |A'A|.
According to the distance formula between two points in a plane:

$$l = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \quad (1)$$

According to the law of sines and equation (1):

$$\frac{m}{\sin\beta} = \frac{n}{\sin\alpha} = \frac{l}{\sin\gamma} \quad (2)$$

Since γ = 180° − α − β, and therefore sin γ = sin(α + β), equation (2) gives:

$$m = \frac{l\sin\beta}{\sin(\alpha + \beta)} \quad (3)$$

$$n = \frac{l\sin\alpha}{\sin(\alpha + \beta)} \quad (4)$$

According to the slope formula, in the vertical plane containing A, A' and D:

$$|A'D| = z_1\tan\theta \quad (5)$$

Since |C'D| = |A'D| − m, equations (3) and (5) give:

$$|C'D| = z_1\tan\theta - \frac{l\sin\beta}{\sin(\alpha + \beta)} \quad (6)$$

According to the theorem of similar triangles (triangle DC'C is similar to triangle DA'A):

$$\frac{|C'C|}{|A'A|} = \frac{|C'D|}{|A'D|}$$

namely:

$$\frac{z_0}{z_1} = \frac{|C'D|}{|A'D|} \quad (7)$$

The value of z0 in point C is thus determined from equations (5), (6) and (7):

$$z_0 = z_1 - \frac{l\sin\beta}{\tan\theta\,\sin(\alpha + \beta)} \quad (8)$$

According to trigonometry, the horizontal coordinates of C follow from m and α:

$$x_0 = x_1 + m\cos\alpha, \qquad y_0 = y_1 + m\sin\alpha \quad (9)$$

Since x1 and y1 are both 0:

$$x_0 = \frac{l\sin\beta\cos\alpha}{\sin(\alpha + \beta)} \quad (10)$$

$$y_0 = \frac{l\sin\beta\sin\alpha}{\sin(\alpha + \beta)} \quad (11)$$
Point C (x0, y0, z0) is thus successfully located.
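The closed-form solution can be checked numerically. In the sketch below, A' is the origin and B' lies on the x axis as in fig. 6, so l = x2; the test geometry (cameras at height 5, object at (3, 4, 1)) and the resulting angles are constructed values for the check, not data from the application:

```python
import math

def triangulate(z1, l, alpha_deg, beta_deg, theta_deg):
    """Solve for C = (x0, y0, z0) via equations (3), (8), (10) and (11)
    above, with x1 = y1 = 0 (A' at the origin) and B' on the x axis."""
    a, b, t = (math.radians(v) for v in (alpha_deg, beta_deg, theta_deg))
    m = l * math.sin(b) / math.sin(a + b)   # eq. (3): m = |A'C'|
    x0 = m * math.cos(a)                    # eq. (10)
    y0 = m * math.sin(a)                    # eq. (11)
    z0 = z1 - m / math.tan(t)               # eq. (8), using m from eq. (3)
    return x0, y0, z0

# Cameras at A = (0, 0, 5) and B = (10, 0, 5); object actually at C = (3, 4, 1).
# For this geometry alpha = atan2(4, 3), beta follows from triangle A'B'C',
# and theta satisfies tan(theta) = |A'D| / z1 = 6.25 / 5.
print(triangulate(z1=5, l=10, alpha_deg=53.1301, beta_deg=29.7449,
                  theta_deg=51.3402))  # ~ (3.0, 4.0, 1.0)
```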
Based on the same inventive concept, the embodiment of the application also provides a visual positioning device. As shown in fig. 7, the apparatus includes:
an object determining module 701, configured to determine, through an image recognition technology, an object to be located in the first camera and the object to be located in the second camera at the same time;
a first angle obtaining module 702, configured to rotate the first camera and determine a first azimuth angle and a first inclination angle of the object to be positioned at the image plane center position of the first camera; and
a second angle obtaining module 703, configured to rotate the second camera, and determine a second azimuth angle and a second inclination angle of the object to be positioned at the image plane center position of the second camera;
a coordinate determining module 704, configured to determine coordinates of the object to be located in a spatial coordinate system according to the first azimuth, the first inclination, the second azimuth, the second inclination, the position of the first camera, and the position of the second camera.
Further, the first angle obtaining module 702 includes:
a first vertical stripe display unit, configured to display a first vertical stripe passing through the center position on the image plane of the first camera through a photosensitive chip of the first camera;
a first azimuth angle determining unit, configured to control the first camera to rotate horizontally, and use a horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the first vertical stripe as the first azimuth angle;
a first horizontal strip display unit, used for displaying a first horizontal strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
and the first inclination angle determining unit is used for controlling the first camera to vertically rotate and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
Further, the second angle obtaining module 703 includes:
a second vertical stripe display unit, configured to display a second vertical stripe passing through a center position on an image plane of the second camera through a photosensitive chip of the second camera;
a second azimuth angle determining unit, configured to control the second camera to rotate horizontally, and use a horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the second vertical stripe as the second azimuth angle;
the second horizontal stripe display unit is used for displaying a second horizontal stripe passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
and the second inclination angle determining unit is used for controlling the second camera to vertically rotate and taking the corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the second horizontal strip as the second inclination angle.
Further, the apparatus further comprises:
a first position determining module, configured to, before the first angle obtaining module 702 rotates the first camera and determines the first azimuth angle and the first inclination angle of the object to be positioned at the image plane center position of the first camera, map the object to be positioned onto the image plane of the first camera through a lens of the first camera, and determine a first position of the object to be positioned on the image plane of the first camera;
the first estimation module is used for estimating the first azimuth angle and the first inclination angle according to the first position; and
a second position determining module, configured to map the object to be positioned on the image plane of the second camera through a lens of the second camera, and determine a second position of the object to be positioned on the image plane of the second camera;
and the second estimation module is used for estimating the second azimuth angle and the second inclination angle according to the second position.
Further, the coordinate determining module 704 includes:
the mapping unit is used for mapping the first camera, the second camera and the object to be positioned to the space coordinate system and determining the coordinate of the first camera according to the position of the first camera; determining the coordinates of the second camera according to the position of the second camera;
and the object coordinate determining unit is used for determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
Having described the method and apparatus for visual positioning according to an exemplary embodiment of the present application, a computing apparatus according to another exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "system."
In some possible implementations, a computing device according to embodiments of the application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform steps 201-204 of the visual positioning method according to the various exemplary embodiments of the present application described above in this specification.
The computing device 80 according to this embodiment of the present application is described below with reference to fig. 8. The computing device 80 shown in fig. 8 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application. The computing device may be, for example, a cell phone, a tablet computer, or the like.
As shown in fig. 8, computing device 80 is embodied in the form of a general purpose computing device. Components of computing device 80 may include, but are not limited to: the at least one processor 81, the at least one memory 82, and a bus 83 connecting the various system components including the memory 82 and the processor 81.
Bus 83 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 82 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)821 and/or cache memory 822, and may further include Read Only Memory (ROM) 823.
Memory 82 may also include a program/utility 825 having a set (at least one) of program modules 824, such program modules 824 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 80 may also communicate with one or more external devices 84 (e.g., pointing devices, etc.), with one or more devices that enable a user to interact with computing device 80, and/or with any devices (e.g., routers, modems, etc.) that enable computing device 80 to communicate with one or more other computing devices. Such communication may be through input/output (I/O) interfaces 85. Also, computing device 80 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) through network adapter 86. As shown, network adapter 86 communicates with other modules for computing device 80 over bus 83. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 80, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, the aspects of the visual positioning method provided herein may also be implemented in the form of a program product, which includes program code for causing a computer device, when the program product is run on it, to perform the steps of the visual positioning method according to the various exemplary embodiments of the present application described above in this specification, for example steps 201-204 shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The visual positioning method of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present application is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user device, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Moreover, although the operations of the methods of the present application are depicted in the drawings in a sequential order, this does not require or imply that these operations must be performed in this order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a manner that causes the instructions stored in the computer-readable memory to produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A visual positioning method, characterized in that the method comprises:
simultaneously determining an object to be positioned in a first camera and the object to be positioned in a second camera by an image recognition technology;
rotating the first camera, and determining a first azimuth angle and a first inclination angle of the object to be positioned at the image surface center position of the first camera; and
rotating the second camera to determine a second azimuth angle and a second inclination angle of the object to be positioned at the image surface center position of the second camera;
determining the coordinates of the object to be positioned in a space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera and the position of the second camera;
before the step of rotating the first camera and determining a first azimuth angle and a first inclination angle of the object to be positioned at the image plane center position of the first camera, the method further comprises:
mapping the object to be positioned on the image surface of the first camera through a lens of the first camera, and determining a first position of the object to be positioned on the image surface of the first camera;
estimating the first azimuth angle and the first inclination angle according to the first position; and
the object to be positioned is mapped on the image surface of the second camera through the lens of the second camera, and the second position of the object to be positioned on the image surface of the second camera is determined;
and estimating the second azimuth angle and the second inclination angle according to the second position.
2. The method of claim 1, wherein the rotating the first camera to determine the first azimuth angle and the first inclination angle of the object to be positioned at the image plane center position of the first camera comprises:
displaying a first vertical strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
controlling the first camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned as the first azimuth angle when the object to be positioned is displayed in the first vertical strip;
displaying a first horizontal strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
and controlling the first camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
3. The method of claim 1, wherein the rotating the second camera to determine a second azimuth angle and a second inclination angle of the object to be positioned at the image plane center position of the second camera comprises:
displaying a second vertical strip passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
controlling the second camera to rotate horizontally, and taking a corresponding horizontal rotation angle of the object to be positioned when the object to be positioned is displayed in the second vertical strip as the second azimuth angle;
displaying a second horizontal strip passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
and controlling the second camera to vertically rotate, and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the second horizontal strip as the second inclination angle.
4. The method of claim 1, wherein determining the coordinates of the object to be positioned in the spatial coordinate system based on the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera, and the position of the second camera comprises:
mapping the first camera, the second camera and the object to be positioned to the space coordinate system, and determining the coordinate of the first camera according to the position of the first camera; determining the coordinates of the second camera according to the position of the second camera;
and determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
5. A visual positioning device, the device comprising:
the object determining module is used for simultaneously determining an object to be positioned in the first camera and the object to be positioned in the second camera through an image recognition technology;
the first angle acquisition module is used for rotating the first camera and determining a first azimuth angle and a first inclination angle of the object to be positioned at the image surface center position of the first camera; and,
the second angle acquisition module is used for rotating the second camera and determining a second azimuth angle and a second inclination angle of the object to be positioned at the image surface center position of the second camera;
the coordinate determining module is used for determining the coordinate of the object to be positioned in a space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the position of the first camera and the position of the second camera;
a first position determining module, configured to, before the first angle acquisition module rotates the first camera and determines the first azimuth angle and the first inclination angle of the object to be positioned at the image plane center position of the first camera, map the object to be positioned on the image plane of the first camera through a lens of the first camera and determine a first position of the object to be positioned on the image plane of the first camera;
the first estimation module is used for estimating the first azimuth angle and the first inclination angle according to the first position; and,
a second position determining module, configured to map the object to be positioned on the image plane of the second camera through a lens of the second camera, and determine a second position of the object to be positioned on the image plane of the second camera;
and the second estimation module is used for estimating the second azimuth angle and the second inclination angle according to the second position.
6. The apparatus of claim 5, wherein the first angle acquisition module comprises:
the first vertical strip display unit is used for displaying a first vertical strip passing through the center position on the first camera image plane through a photosensitive chip of the first camera;
the first azimuth angle determining unit is used for controlling the first camera to rotate horizontally, and taking the horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the first vertical strip as the first azimuth angle;
the first horizontal strip display unit is used for displaying a first horizontal strip passing through the center position on the image surface of the first camera through a photosensitive chip of the first camera;
and the first inclination angle determining unit is used for controlling the first camera to vertically rotate and taking a corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the first horizontal strip as the first inclination angle.
7. The apparatus of claim 5, wherein the second angle acquisition module comprises:
the second vertical strip display unit is used for displaying a second vertical strip passing through the center position on the second camera image surface through the photosensitive chip of the second camera;
the second azimuth angle determining unit is used for controlling the second camera to rotate horizontally, and taking the horizontal rotation angle corresponding to the object to be positioned when the object to be positioned is displayed in the second vertical strip as the second azimuth angle;
the second horizontal stripe display unit is used for displaying a second horizontal stripe passing through the center position on the image surface of the second camera through a photosensitive chip of the second camera;
and the second inclination angle determining unit is used for controlling the second camera to vertically rotate and taking the corresponding vertical rotation angle of the object to be positioned when the object to be positioned is displayed in the second horizontal strip as the second inclination angle.
8. The apparatus of claim 5, wherein the coordinate determining module comprises:
the mapping unit is used for mapping the first camera, the second camera and the object to be positioned to the space coordinate system, determining the coordinates of the first camera according to the position of the first camera, and determining the coordinates of the second camera according to the position of the second camera;
and the object coordinate determining unit is used for determining the coordinates of the object to be positioned in the space coordinate system according to the first azimuth angle, the first inclination angle, the second azimuth angle, the second inclination angle, the coordinates of the first camera and the coordinates of the second camera.
9. A computing device comprising at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the steps of the method of any of claims 1 to 4.
10. A computer-readable medium, in which a computer program executable by a terminal device is stored, which program, when run on the terminal device, causes the terminal device to carry out the steps of the method according to any one of claims 1 to 4.
CN201910924517.2A 2019-09-27 2019-09-27 Visual positioning method, device and storage medium Active CN110675445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910924517.2A CN110675445B (en) 2019-09-27 2019-09-27 Visual positioning method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910924517.2A CN110675445B (en) 2019-09-27 2019-09-27 Visual positioning method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110675445A (en) 2020-01-10
CN110675445B (en) 2022-06-21

Family

ID=69079515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910924517.2A Active CN110675445B (en) 2019-09-27 2019-09-27 Visual positioning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110675445B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111215991B (en) * 2020-01-22 2021-06-22 南京豪滨科技有限公司 Steering wheel machining rotation driving device and rotation driving method based on machine vision

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102148965B (en) * 2011-05-09 2014-01-15 厦门博聪信息技术有限公司 Video monitoring system for multi-target tracking close-up shooting
US9600927B1 (en) * 2012-10-21 2017-03-21 Google Inc. Systems and methods for capturing aspects of objects using images and shadowing
US10659750B2 (en) * 2014-07-23 2020-05-19 Apple Inc. Method and system for presenting at least part of an image of a real object in a view of a real environment, and method and system for selecting a subset of a plurality of images
CN108574825B (en) * 2017-03-10 2020-02-21 华为技术有限公司 Method and device for adjusting pan-tilt camera
CN110021044B (en) * 2018-01-10 2022-12-20 华晶科技股份有限公司 Method for calculating coordinates of shot object by using double-fisheye image and image acquisition device
CN109949367B (en) * 2019-03-11 2023-01-20 中山大学 Visible light imaging positioning method based on circular projection
CN110012236A (en) * 2019-03-29 2019-07-12 联想(北京)有限公司 A kind of information processing method, device, equipment and computer storage medium
CN110033480B (en) * 2019-04-19 2023-05-02 西安应用光学研究所 Aerial photography measurement-based airborne photoelectric system target motion vector estimation method
CN112419418A (en) * 2019-08-22 2021-02-26 刘锐 Positioning method based on camera mechanical aiming

Also Published As

Publication number Publication date
CN110675445A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
US11157766B2 (en) Method, apparatus, device and medium for calibrating pose relationship between vehicle sensor and vehicle
US10984554B2 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
CN109040736B (en) Method, device, equipment and storage medium for calibrating spatial position of human eye
CN109032348B (en) Intelligent manufacturing method and equipment based on augmented reality
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
CN111445533B (en) Binocular camera calibration method, device, equipment and medium
US9939275B1 (en) Methods and systems for geometrical optics positioning using spatial color coded LEDs
CN106570907B (en) Camera calibration method and device
CN110332930B (en) Position determination method, device and equipment
CN110232707A (en) A kind of distance measuring method and device
US20190116354A1 (en) Camera calibration
WO2016187752A1 (en) Method and device for measuring antenna attitude
CN110675445B (en) Visual positioning method, device and storage medium
WO2023010565A1 (en) Method and apparatus for calibrating monocular speckle structured light system, and terminal
CN114187589A (en) Target detection method, device, equipment and storage medium
CN113763478B (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN110853098A (en) Robot positioning method, device, equipment and storage medium
CN110470232A (en) A kind of method, apparatus, measuring system and electronic equipment measuring difference in height
CN108038871A (en) The pivot of rotating platform determines method, apparatus, server and storage medium
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN111220100B (en) Laser beam-based measurement method, device, system, control device, and medium
CN108253931B (en) Binocular stereo vision ranging method and ranging device thereof
CN108650465B (en) Method and device for calculating augmented reality label of camera picture and electronic equipment
CN112449175B (en) Image splicing test method, device, equipment and storage medium
CN117671007B (en) Displacement monitoring method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220825

Address after: Building C, No.888, Huanhu West 2nd Road, Lingang New District, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Shenlan Intelligent Technology (Shanghai) Co.,Ltd.

Address before: Unit 1001, 369 Weining Road, Changning District, Shanghai, 200336 (actual floor: 9th floor)

Patentee before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd.