CN113808199A - Positioning method, electronic equipment and positioning system - Google Patents

Positioning method, electronic equipment and positioning system

Info

Publication number
CN113808199A
CN113808199A (application number CN202010554485.4A; granted publication CN113808199B)
Authority
CN
China
Prior art keywords
camera
point
image
calibration
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010554485.4A
Other languages
Chinese (zh)
Other versions
CN113808199B (en)
Inventor
姜波
张竞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010554485.4A
Publication of CN113808199A
Application granted
Publication of CN113808199B
Legal status: Active (granted)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30236 — Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the disclosure provide a positioning method, an electronic device, and a positioning system. In a positioning system comprising a plurality of cameras with identical internal parameters, including a first camera whose homography matrix is known and second cameras whose homography matrices are unknown, when the attitudes of the first camera and a second camera are the same, the pixel coordinates of a plurality of second calibration points are obtained using pairs of first and second calibration points that have the same positions relative to the first and second cameras, respectively; the homography matrix of the second camera is then determined from the pixel coordinates and world coordinates of those second calibration points. In this way, a single first camera with a known homography matrix suffices to calibrate the homography matrices of a plurality of second cameras.

Description

Positioning method, electronic equipment and positioning system
Technical Field
The disclosed embodiments relate to the field of communication technologies and the field of computer vision technologies, and in particular, to a positioning method, an electronic device, and a positioning system.
Background
Accurate positioning of traffic incidents helps vehicle owners keep track of road conditions in real time and make timely safety decisions in emergencies, preventing potential safety threats; it also enables the relevant authorities and road operators to respond promptly. Identifying and locating traffic events with cameras is an effective solution.
A single-camera road positioning scheme is implemented based on the camera's homography matrix, which characterizes the mapping between the camera's pixel coordinate system and the world coordinate system. However, the road segment that a single camera can cover is only about one hundred meters, so to recognize traffic events along an expressway, a large number of cameras are generally deployed beside it. This requires calibrating the homography matrices of tens of thousands of roadside cameras.
Calibrating the homography matrix camera by camera, however, entails a huge workload and consumes a great deal of time, manpower, and material resources, which also makes camera-based positioning very difficult.
Disclosure of Invention
The embodiments of the present disclosure provide a positioning method, an electronic device, and a positioning system, which calibrate the homography matrices of cameras in batches, reducing camera calibration time and workload and lowering the cost of camera-based positioning.
In a first aspect, the present disclosure provides a positioning method applied to a positioning system including a first camera and a second camera, where the first camera and the second camera have the same internal parameters. The method includes: receiving a first image from the first camera, the first image including a first reference point and a plurality of first calibration points; receiving a second image from the second camera, the second image including a second reference point and a plurality of second calibration points, where the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image, and the position of the first reference point relative to the first camera is the same as the position of the second reference point relative to the second camera; determining the pixel coordinates of the second calibration points in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration points, where the first calibration points correspond one-to-one with the second calibration points, and for each corresponding pair, the position of the first calibration point relative to the first camera is the same as the position of the second calibration point relative to the second camera; determining a homography matrix of the second camera according to the pixel coordinates and world coordinates of the plurality of second calibration points, the homography matrix describing the mapping between the camera's pixel coordinate system and the world coordinate system; and, when a third image from the second camera is received, determining first world coordinates of a target object in the third image using the homography matrix of the second camera.
In one embodiment of the first aspect, the determining pixel coordinates of the second calibration point in the second image comprises: determining a pitch angle of the second camera according to the attitude parameters of the first camera, the attitude parameters of the second camera and the world coordinates of the first reference point; determining the world coordinate of the second calibration point corresponding to the first calibration point according to the world coordinate of the first calibration point in the first image; and determining the pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
In an embodiment of the first aspect, the determining world coordinates of the second calibration point corresponding to the first calibration point from the world coordinates of the first calibration point in the first image comprises: determining world coordinates of the second calibration point corresponding to the first calibration point based on a relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in a first world coordinate system, and world coordinates of the second camera in a second world coordinate system; the first world coordinate system is the same as or different from the second world coordinate system.
In one embodiment of the first aspect, the determining of the pixel coordinates of the second calibration point in the second image includes: processing a first triangle by using trigonometric functions to obtain the pixel ordinate of the second calibration point in the second image; the first triangle is determined by the optical center of the second camera, the center point of the second image, and a first auxiliary point (an image point distinct from the first reference point), where the pixel ordinate of the first auxiliary point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first auxiliary point is the same as the pixel abscissa of the center point of the second image.
In an embodiment of the first aspect, processing the first triangle by using trigonometric functions to obtain the pixel ordinate of the second calibration point in the second image includes: determining a first included angle of the first triangle according to the world coordinates of the second calibration point and the height and pitch angle of the second camera, where the first included angle is the angle between the optical axis of the second camera and a first straight line, the first straight line being determined by the first auxiliary point and the optical center of the second camera; and determining the pixel ordinate of the second calibration point in the second image based on the trigonometric relationship satisfied by the first side, the second side, and the first included angle of the first triangle, where the first side is the side between the center point of the second image and the first auxiliary point, the second side is the side between the center point of the second image and the optical center, the second side is associated with the internal parameters of the second camera, and the first side is perpendicular to the second side.
In an embodiment of the first aspect, the pixel ordinate of the second calibration point in the second image satisfies the following formula:

v_2i = c_y + (f / dy) · tan( arctan(H_2 / x_2i) − α_2 )

or, alternatively,

v_2i = v_10 + (f / dy) · [ tan( arctan(H_2 / x_2i) − α_2 ) − tan( arctan(H_1 / x_10) − α_1 ) ]

where v_2i denotes the pixel ordinate of the i-th second calibration point in the second image; f denotes the focal length of the second camera; dy denotes the size of a unit pixel; α_2 denotes the pitch angle of the second camera; H_2 denotes the height of the second camera; x_2i denotes the world abscissa of the i-th second calibration point in the second world coordinate system; v_10 denotes the pixel ordinate of the first reference point; c_y denotes the pixel ordinate of the center point of the first image; α_1 denotes the pitch angle of the first camera; H_1 denotes the height of the first camera; and x_10 = x_20, where x_10 denotes the world abscissa of the first reference point in the first world coordinate system and x_20 denotes the world abscissa of the second reference point in the second world coordinate system.
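To make the geometry concrete, the following Python sketch implements the relations above under stated assumptions: the pitch angle is measured downward from the horizontal, and the reference-point condition yields the pitch angle of the second camera. This is a minimal, non-authoritative illustration, not the patented implementation itself.

```python
import math

def pitch_of_second_camera(alpha1, H1, H2, x0):
    # Reference-point condition: the two reference points share the same pixel
    # coordinates, so their rays make the same angle with the optical axis:
    #   arctan(H1/x0) - alpha1 = arctan(H2/x0) - alpha2,  with x10 = x20 = x0.
    return alpha1 + math.atan(H2 / x0) - math.atan(H1 / x0)

def pixel_ordinate(x2i, alpha2, H2, f, dy, cy):
    # First included angle between the optical axis and the ray to the ground
    # point at world abscissa x2i, then the vertical offset from the image
    # center point: (v - cy) * dy = f * tan(gamma).
    gamma = math.atan(H2 / x2i) - alpha2
    return cy + (f / dy) * math.tan(gamma)
```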
In an embodiment of the first aspect, when the second calibration point is located on the ground projection line of the optical axis of the second camera, the pixel position of the second calibration point in the second image is the same as the pixel position of the first auxiliary point.
In an embodiment of the first aspect, when the second calibration point is outside the ground projection line of the optical axis of the second camera, determining the pixel coordinates of the second calibration point in the second image further includes: processing a second triangle and a third triangle by using the triangle similarity theorem to obtain the pixel abscissa of the second calibration point in the second image; the second triangle is formed by the optical center of the second camera, the second calibration point, and a second auxiliary point (a ground point distinct from the second reference point); the third triangle is formed by the optical center of the second camera, the pixel position of the second calibration point in the second image, and the first auxiliary point; the second auxiliary point lies in the second world coordinate system, and in that system, the horizontal-axis component of the second auxiliary point is the same as that of the second calibration point while its vertical-axis component is zero.
In an embodiment of the first aspect, the second triangle is similar to the third triangle, so that a first ratio is equal to a second ratio; the first ratio is the ratio between a third side in the second triangle and a fourth side in the third triangle, where the third side is the side between the optical center of the second camera and the second auxiliary point, and the fourth side is the side between the optical center of the second camera and the first auxiliary point; the second ratio is the ratio between a fifth side in the second triangle and a sixth side in the third triangle, where the fifth side is the side between the optical center of the second camera and the second calibration point, and the sixth side is the side between the optical center of the second camera and the pixel position of the second calibration point in the second image.
In an embodiment of the first aspect, the pixel abscissa of the second calibration point in the second image satisfies the following formula:

u_2i = c_x + (f · L1) / ( dx · cos(γ_2i) · sqrt(H_2² + x_2i²) )

where u_2i denotes the pixel abscissa of the i-th second calibration point in the second image; L1 denotes the distance between the second auxiliary point and the second calibration point; f denotes the focal length of the second camera; dx denotes the size of a unit pixel in the horizontal direction; H_2 denotes the height of the second camera; x_2i denotes the world abscissa of the i-th second calibration point in the second world coordinate system; c_x denotes the pixel abscissa of the center point of the second image; and γ_2i denotes the first included angle, i.e., the angle between the optical axis of the second camera and the first straight line, which satisfies

γ_2i = arctan(H_2 / x_2i) − α_2 = arctan(H_2 / x_2i) − arctan(H_2 / x_20) + arctan(H_1 / x_10) − α_1

where α_1 denotes the pitch angle of the first camera; H_1 denotes the height of the first camera; and x_10 = x_20, where x_10 denotes the world abscissa of the first reference point in the first world coordinate system and x_20 denotes the world abscissa of the second reference point in the second world coordinate system.
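Continuing the sketch, the pixel abscissa follows from the triangle similarity described above. Note that dx, the horizontal pixel size, is a symbol introduced here by analogy with dy and is not listed in the original formula.

```python
import math

def pixel_abscissa(x2i, L1, alpha2, H2, f, dx, cx):
    # Similar triangles: the lateral image offset relates to the lateral
    # ground offset L1 by the ratio of
    #   f / cos(gamma)       (optical center to the first auxiliary point) to
    #   sqrt(H2^2 + x2i^2)   (optical center to the second auxiliary point).
    gamma = math.atan(H2 / x2i) - alpha2
    ratio = (f / math.cos(gamma)) / math.hypot(H2, x2i)
    return cx + (L1 * ratio) / dx
```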
In a second aspect, the present disclosure provides a positioning method, including: receiving a third image captured by a second camera, the third image including a target object; acquiring first pixel coordinates of the target object in the third image; and processing the first pixel coordinates with the homography matrix of the second camera to obtain first world coordinates of the target object, the homography matrix describing the mapping between the camera's pixel coordinate system and the world coordinate system. The homography matrix is determined based on the pixel coordinates and world coordinates of a plurality of second calibration points, and the pixel coordinates of the second calibration points are determined based on the attitude parameters of a first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of a first reference point, and the world coordinates of the second calibration points. The first camera and the second camera have the same internal parameters; a first image from the first camera includes the first reference point and a plurality of first calibration points; a second image from the second camera includes a second reference point and the plurality of second calibration points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the position of the first reference point relative to the first camera is the same as the position of the second reference point relative to the second camera; the first calibration points correspond one-to-one with the second calibration points; and for each corresponding pair, the position of the first calibration point relative to the first camera is the same as the position of the second calibration point relative to the second camera.
In an embodiment of the second aspect, the method further comprises: adjusting the pose of the first camera; and/or, adjusting the pose of the second camera; so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In an embodiment of the second aspect, the method further comprises: acquiring a second world coordinate of the target object; and correcting the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is larger than a preset threshold value.
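As an illustration of this correction criterion, a minimal sketch follows; the threshold value and the Euclidean error metric are assumptions, since the disclosure only specifies a preset threshold.

```python
import numpy as np

def needs_correction(first_world_coord, second_world_coord, threshold=1.0):
    # Flag the second camera's homography matrix for correction when the
    # error between the two world coordinates exceeds the preset threshold.
    error = np.linalg.norm(np.subtract(first_world_coord, second_world_coord))
    return error > threshold
```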
In an embodiment of the second aspect, the method further comprises: receiving a first message, wherein the first message is used for realizing pose configuration of a camera; determining pose parameters of the first camera and/or the second camera based on the first message.
In an embodiment of the second aspect, the method further comprises: receiving a second message indicating acquisition of a homography matrix of one or more of the second cameras; and when the second camera indicated by the second message meets a preset calibration condition, determining a homography matrix of the second camera.
In a third aspect, the present disclosure provides a positioning apparatus comprising at least one processor and a memory; the memory stores computer-executable instructions; the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform a method as set forth in any one of the embodiments of the first aspect or the second aspect.
In a fourth aspect, the present disclosure provides a positioning system comprising: a first electronic device configured to perform the method as set forth in any one of the embodiments of the first aspect or the second aspect; and the camera is used for acquiring images and comprises a first camera and one or more second cameras.
In an embodiment of the fourth aspect, the positioning system further comprises: the second electronic equipment is used for receiving instruction information from a user and sending the instruction information to the first electronic equipment; the first electronic device is further used for executing the action indicated by the instruction information; wherein the first electronic device is integrally provided with the second electronic device or separately provided.
In one embodiment of the fourth aspect, the positioning system is a vehicle-to-everything (V2X) system.
In one possible design, the electronic device (the first electronic device or the second electronic device) referred to in the third to fourth aspects may be (a processor in) a second camera, a terminal, a server (or a node therein), a vehicle processor, or the like.
In a fifth aspect, the present disclosure provides a computer-readable storage medium, having stored therein computer-executable instructions, which, when executed by a processor, implement the positioning method according to any one of the embodiments of the first aspect or the second aspect.
In a sixth aspect, the present application provides a computer program for performing the method of any one of the embodiments of the first or second aspect when the computer program is executed by a computer.
In a possible design, the program in the sixth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
In summary, the present disclosure provides a positioning method, an electronic device, and a positioning system. In a traffic-event positioning scenario using multiple cameras whose internal parameters are identical, the cameras include at least one first camera whose homography matrix has been calibrated. For a second camera whose homography matrix has not been calibrated, the condition that a pair of reference points having the same positions relative to their respective cameras share the same pixel coordinates establishes that the first camera and the second camera have the same attitude. The pixel coordinates of a plurality of second calibration points in the second camera can then be obtained from pairs of first and second calibration points that have the same relative positional relationship in the first and second cameras, and the homography matrix of the second camera can be derived from the pixel coordinates and world coordinates of those second calibration points. In this way, a single first camera with a calibrated homography matrix suffices to calibrate multiple uncalibrated second cameras; that is, camera homography matrices can be calibrated in batches, reducing calibration time and workload and lowering the cost of camera-based positioning.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description show some embodiments of the present disclosure, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a positioning scenario provided by an embodiment of the present disclosure;
FIG. 2 is a side view of the positioning scenario of FIG. 1;
fig. 3 is a schematic diagram of a pixel coordinate system according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a world coordinate system provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a positioning system according to an embodiment of the disclosure;
fig. 6 is a schematic flow chart of a positioning method according to an embodiment of the disclosure;
fig. 7 is a schematic diagram illustrating a camera pose adjustment provided by an embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating an implementation principle of obtaining a pitch angle of a second camera according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a calibration principle provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of another calibration principle provided by the embodiments of the present disclosure;
fig. 11 is a schematic flow chart of another positioning method provided in the embodiments of the present disclosure;
fig. 12 is a schematic diagram of an architecture of another positioning system provided in the embodiment of the present disclosure;
fig. 13 is a functional block diagram of an electronic device according to an embodiment of the disclosure;
fig. 14 is a functional block diagram of another electronic device according to an embodiment of the disclosure;
fig. 15 is a schematic physical structure diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments derived by a person skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
The positioning scheme provided by the embodiment of the disclosure is suitable for a positioning system comprising a plurality of cameras, and each camera can acquire an image and position a target object in the image.
A camera may also be referred to as an image capture apparatus, and may take the concrete form of a still camera, a video recorder, or other equipment with an image acquisition function.
For example, referring to fig. 1, fig. 1 is a schematic diagram of a positioning scenario provided in an embodiment of the present disclosure. As shown in fig. 1, a plurality of cameras 110 are deployed on a road, and fig. 1 exemplarily shows a camera 111, a camera 112, and a camera 113. The road is available for the vehicle 120 to travel, and thus, during the travel of the vehicle 120, the camera 110 may capture an image of the vehicle 120 and locate the vehicle 120 based on the captured image. The positioning method based on the camera 110 will be described in detail later.
The effective detection distance of a single camera is about 100 meters, so a large number of cameras are required to recognize traffic events along a road, and the positioning capability of the cameras directly determines how effectively and accurately traffic events can be identified. Thus, in the foregoing scenario, multiple cameras 110 are generally deployed in succession along the same side of the roadway.
At this time, referring to fig. 2, fig. 2 shows a side view of the positioning scene shown in fig. 1, and as shown in fig. 2, a plurality of cameras 110 may be disposed in a linear array, and the distance between any two adjacent cameras 110 may be equal. For example, in fig. 2, the distance between the camera 111 and the camera 112, and the distance between the camera 112 and the camera 113 are equal.
As shown in fig. 1 and 2, the camera is generally deployed on the road through a base 130. In the embodiment of the present application, the material and shape of the base 130, the connection mode between the base 130 and the camera 110, and the like are not particularly limited. For example, the base 130 may be a metal rod base, and for example, the base 130 may be composed of a cement column and an alloy fixture, which are not exhaustive.
In the positioning scene shown in fig. 1 or fig. 2, if the traffic event is to be positioned by the camera, the correlation between the pixel coordinate system and the world coordinate system of the image captured by the camera needs to be acquired. In this way, when the camera acquires the image, the world coordinates of the object in the image can be determined based on the pixel coordinates of the object in the image and the association relationship between the pixel coordinate system and the world coordinate system.
The correlation between the pixel coordinate system and the world coordinate system in the camera can be characterized by a homography matrix. That is, the homography matrix of the camera is used to describe the mapping relationship between the pixel coordinate system and the world coordinate system of the camera. The specific positioning mode is detailed later.
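As an illustration, a homography matrix H that maps pixel coordinates to world coordinates is applied by a homogeneous multiplication followed by normalization. The following is a minimal sketch consistent with the description above, not the patented implementation itself.

```python
import numpy as np

def pixel_to_world(H, u, v):
    # Apply the camera's 3x3 homography matrix to the homogeneous pixel
    # coordinate and normalize to recover the world coordinate (xw, yw).
    xw, yw, s = H @ np.array([u, v, 1.0])
    return xw / s, yw / s
```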
The pixel coordinate system is used to describe the position of the object in the image. Specifically, the pixel coordinate system is a planar coordinate system, and in an actual application scenario, the pixel coordinate system may be defined in various ways.
For example, fig. 3 shows a schematic diagram of a pixel coordinate system provided by an embodiment of the present disclosure. For the image shown in fig. 3 (the image content is not limited here), the upper left corner of the image plane may be taken as the origin O_p of the pixel coordinate system, and the two coordinate axes of the pixel coordinate system are denoted as the O_p-u axis and the O_p-v axis, where the O_p-u axis points to the right and the O_p-v axis points downward. In this way, the pixel position of an object in the pixel coordinate system can be described by its pixel coordinates (u, v).
Besides, the pixel coordinate system can also be defined in other ways. For example, the upper right corner of the image may be taken as the origin of the pixel coordinate system, with the two coordinate axes pointing to the left and downward, respectively. As another example, the image center point may be taken as the origin, with the two coordinate axes pointing to the right and upward, respectively. These examples are not exhaustive.
The world coordinate system is used to describe the position of an object in real three-dimensional space. The embodiments of the present disclosure locate traffic events in a two-dimensional plane, so the third coordinate axis of the world coordinate system, beyond the horizontal and vertical axes, is not discussed here. In this case, the world coordinate system can be regarded as a two-dimensional rectangular coordinate system. Specifically, although a world coordinate system is generally three-dimensional, in the positioning scenarios of the embodiments of the present disclosure the actual position of an object in space can be described by its two-dimensional world coordinates (x_w, y_w), where w stands for world. The world coordinate system may also be defined in different ways.
For example, the world coordinate system may be constructed by using the longitude and the latitude as the horizontal and vertical coordinates of the world coordinate system. This world coordinate system can be regarded as an absolute world coordinate system, and the longitude and latitude of the object are the world coordinates of the object.
Illustratively, a camera-related relative world coordinate system may also be constructed. It will be appreciated that the world coordinates of a fixed position object will differ in the relative world coordinate systems of the different cameras.
For example, in one possible embodiment, the ground projection of the camera may be taken as the origin of the world coordinate system, with the true north (or true south) direction as the vertical axis and the true east (or true west) direction as the horizontal axis, to construct a relative world coordinate system with respect to the camera.
In another possible embodiment, the relative world coordinate system with respect to the camera may be constructed by setting the intersection point of the base where the camera is located and the ground as the origin of the relative world coordinate system, and setting the directions indicated by two mutually perpendicular straight lines (without any particular limitation on the directions) as the horizontal and vertical axes, respectively.
For ease of understanding, fig. 4 illustrates a schematic diagram of a world coordinate system provided by an embodiment of the present disclosure, with the relative world coordinate systems of two cameras taken as an example. Fig. 4 specifically shows the case of the first world coordinate system 41 of the camera 1 and the second world coordinate system 42 of the camera 2. The origin of coordinates (O1) of the first world coordinate system 41 is the ground projection of the camera 1, the origin of coordinates (O2) of the second world coordinate system 42 is the ground projection of the camera 2, the horizontal axis of the first world coordinate system 41 is parallel to the horizontal axis of the second world coordinate system 42, and the vertical axis of the first world coordinate system 41 is parallel to the vertical axis of the second world coordinate system 42. In this case, the world coordinates of the point a in the first world coordinate system 41 are different from the world coordinates of the point a in the second world coordinate system 42.
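As a small illustration, converting point A between the two relative world coordinate systems of fig. 4 reduces to a translation, assuming the origins O1 and O2 are known in a shared absolute frame and the axes are parallel. This is an assumption consistent with fig. 4, not an algorithm stated in the disclosure.

```python
import numpy as np

def convert_relative_world_coords(point_in_sys1, origin1_abs, origin2_abs):
    # Re-express a point given in camera 1's relative world coordinate system
    # in camera 2's relative world coordinate system (parallel axes).
    return tuple(np.asarray(point_in_sys1, dtype=float)
                 + np.asarray(origin1_abs, dtype=float)
                 - np.asarray(origin2_abs, dtype=float))
```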
In a multi-camera positioning scene, the homography matrices of the cameras generally differ because of differences in camera position, attitude, and camera parameters. For example, suppose camera A and camera B have the same internal parameters but different attitudes; then even if they are deployed at the same position and each captures an image of the same fixed target object, the target object will occupy different pixel positions in image A (from camera A) and image B (from camera B).
Then, in a multi-camera positioning scenario, the homography matrix for each camera needs to be calibrated. At present, homography matrix calibration is generally implemented for a single camera, and the following two methods are commonly used:
first, a homography matrix is solved by internal and external parameters of the camera. Wherein, the internal and external parameters involved may include but are not limited to: the camera's position (world coordinates), pose, pixel size, focal length, etc. In this calculation method, the homography matrix of the camera needs to be finally acquired by virtue of the relationship among the pixel coordinate system, the image physical coordinate system, the camera coordinate system and the world coordinate system. And will not be described in detail herein.
Second, the homography matrix is obtained by manual calibration. Within the image capture range of the camera, calibration personnel select a number of fixed positions on the road, such as lane line endpoints, corner points of zebra crossings, or signs on both sides of the road, and manually measure the world coordinates of these positions; the homography matrix of the camera is then computed by combining those measurements with the pixel coordinates of the same positions in the image captured by the camera. The internal and external parameters of the camera can further be recovered by back-calculation.
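For instance, the mapping can be fitted from four or more such measured correspondences. The sketch below uses OpenCV's findHomography; the point values are placeholders, not measurements from the disclosure.

```python
import numpy as np
import cv2

# Manually measured correspondences: pixel coordinates in the image and the
# matching world coordinates on the ground plane (placeholder values).
pixel_pts = np.float32([[320, 480], [960, 470], [300, 700], [980, 690]])
world_pts = np.float32([[0.0, 40.0], [3.5, 40.0], [0.0, 20.0], [3.5, 20.0]])

# At least four non-collinear point pairs determine the 3x3 homography matrix.
H, _ = cv2.findHomography(pixel_pts, world_pts)
```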
Compared with the first method, the second method helps improve positioning accuracy, but it is very labor-intensive and requires closing the road. If the homography matrices of many cameras were calibrated one by one in this way, prolonged road closures would be needed, the calibration workload would be enormous, and a great deal of manpower and material resources would be wasted, making camera-based positioning difficult.
Thus, in a multi-camera positioning scene, if the first method is used, the internal and external parameters of every camera must be acquired separately and multiple computations performed on the collected data, so the data acquisition process is cumbersome and the computation load is huge; if the second method is used, long road closures and heavy consumption of manpower and material resources are required, which can hardly meet the positioning needs of multi-camera scenarios such as expressways.
In view of this situation, embodiments of the present disclosure provide a positioning method and a positioning system. Embodiments of the present application will be described below with reference to the accompanying drawings.
First, a positioning system provided by an embodiment of the present disclosure is described with reference to the system architecture diagram shown in fig. 5. As shown in fig. 5, the positioning system 500 includes an electronic device 510 and multiple cameras 520; fig. 5 shows n cameras 520 in total, where n is an integer greater than 1. The cameras 520 are used for capturing images, which may specifically include, but are not limited to, video images and/or photographs.
As noted above, the imaging of a camera is jointly influenced by its internal parameters and external parameters. The internal parameters of the camera may include, but are not limited to: the focal length, the number of pixels per unit size, and the pixel coordinates of the image center point (with the pixel coordinate system defined as before). The external parameters of the camera may include, but are not limited to: the pitch angle, height, and position of the camera. The internal and external parameters are not limited to the foregoing; for example, the external parameters may further include the attitude angles of the camera, namely the yaw angle, pitch angle, and roll angle.
When the internal and external parameters of two cameras are exactly the same, their attitudes are also the same. If the homography matrix is used to characterize the mapping between the pixel coordinate system and the relative world coordinate system, the homography matrices of two cameras with the same attitude are theoretically identical. In an actual scene, however, when multiple cameras are deployed, differences in pitch angle, mounting height, mounting position, and the like cause the cameras' attitudes to differ, and their homography matrices therefore differ as well.
In the positioning system 500 provided by the embodiments of the present disclosure, the internal parameters of the multiple cameras 520 are identical. On this premise, once the homography matrix of one of the cameras has been calibrated, the homography matrices of the other cameras can be obtained by transfer calculation based on the calibrated camera's homography matrix. This makes the other cameras' homography matrices easy to acquire, reducing the manpower and material consumption of multi-camera calibration and thus the positioning cost. The details are described later.
In the positioning system 500 shown in fig. 5, the camera 520 may communicate with the electronic device 510 in a wired or wireless manner. The wireless communication mode may specifically include, but is not limited to, wireless communication schemes such as 2G/3G/4G/5G. The communication mode may have different designs based on the electronic device 510, which will be described in detail later.
Based on the communication relationship between the two, any one of the cameras 520 may transmit the captured image to the electronic device 510. The camera 520 may actively transmit images to the electronic device, for example, periodically transmit images, or for example, transmit images after establishing a communication connection; alternatively, camera 520 may transmit images to the electronic device in response to a request received from the electronic device. The disclosed embodiments are not so limited.
Correspondingly, the electronic device 510 may receive images from the cameras 520 and perform calibration and positioning functions on the received images. Illustratively, fig. 5 shows two processing modules of the electronic device 510: a calibration module 511 and a positioning module 512, where the calibration module 511 is configured to calibrate (or acquire) the homography matrix of each camera 520 according to the received images, and the positioning module 512 is configured to locate the target object in an image according to the received image. The specific implementation of these two modules is described below. It can be understood that the module division shown in fig. 5 is essentially a functional division; in an actual implementation, the calibration module 511 and the positioning module 512 may be arranged separately or integrated into the same processor (or processing module).
In the positioning system 500 shown in fig. 5, the electronic device 510 may serve as a first electronic device, which executes the positioning method provided by the embodiments of the present disclosure, as described in detail below.
Illustratively, the electronic device 510 may be embodied as a processor or processing chip in one camera 520, i.e., the electronic device 510 is embodied as one of the cameras 520. In that case, the electronic device 510 can communicate by wire with the cameras 520 other than the camera to which it belongs, and wirelessly with the other cameras 520 within its wireless coverage area. For example, within the WiFi coverage area where the electronic device 510 is located, the electronic device 510 may communicate over WiFi with all cameras 520 in that area; for cameras 520 outside the WiFi coverage, indirect wireless communication may be achieved by forwarding through intermediate cameras 520.
Illustratively, the electronic device 510 may also be embodied as any electronic device that communicates with the cameras 520, or as a processor within such an electronic device. In this manner, based on the communication connection between a camera 520 and the electronic device 510, the camera 520 can transmit captured images to the electronic device 510, and the electronic device 510 can then locate the target object in the received images.
In the embodiment of the present disclosure, the electronic device 510 (an executing body of the positioning method) may be embodied as one or more of a vehicle, a drone, a network device (e.g., a server), and a terminal.
A terminal, also called a User Equipment (UE), is a device that provides voice and/or data connectivity to a User, for example, a handheld device with a wireless connection function, a vehicle-mounted device, and so on. Common terminals include, for example: the mobile phone includes a mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), and a wearable device such as a smart watch, a smart bracelet, a pedometer, and the like.
The network device may be a network-side device, for example a Wireless Fidelity (WiFi) access point (AP), or a next-generation base station such as a 5G NR base station (for example a 5G gNB), a small cell, a micro cell, a transmission reception point (TRP), a relay station, an access point, a vehicle-mounted device, a wearable device, and the like. The base stations of different communication systems differ. For ease of distinction, a base station of the 4G communication system is referred to as an LTE eNB, a base station of the 5G communication system is referred to as an NR gNB, and a base station supporting both the 4G and the 5G communication systems is referred to as an eLTE eNB; these names are for convenience of distinction only and are not intended to be limiting.
Taking the scenario shown in fig. 1 as an example, the electronic device 510 in the positioning system 500 shown in fig. 5 may be embodied as the vehicle 120 in fig. 1, and the vehicle 120 may communicate with the multiple cameras 110 to execute the positioning method provided by the embodiments of the present disclosure. For example, the vehicle 120 may locate its own position based on images from the cameras 110; for another example, the vehicle 120 may locate, in an image from a camera 110, the scene of a traffic accident occurring ahead of the vehicle.
In a possible embodiment of fig. 1, the scenario shown in fig. 1 may further include one or more of a terminal (e.g., a mobile phone) carried by a passenger in the vehicle 120, a drone performing a flight mission in the air, and other network devices, with which the vehicle 120 may establish communication connections so as to form a vehicle-to-everything (V2X) system.
Next, a positioning method performed on the electronic device 510 side is described.
For convenience of explanation, hereinafter, a camera for which the homography matrix has been calibrated is referred to as a first camera, and a camera for which the homography matrix is unknown is referred to as a second camera. It is to be understood that, in a multi-camera scenario, the number of the first cameras may be at least one, and the number of the second cameras may also be at least one, which is not particularly limited by the embodiments of the present disclosure. For example, in the multi-camera positioning scene shown in fig. 1, the camera 111 may be a first camera, and the cameras 112 and 113 may be second cameras, and the homography matrices of the cameras 112 and 113 may be calibrated according to the embodiment of the present disclosure, and further, when any one of the cameras 111 to 113 acquires an image, an object in the image may be positioned.
The embodiments of the present disclosure do not particularly limit the manner in which the homography matrix of the first camera is derived.
In an exemplary embodiment, the homography matrix of the first camera may be entered by a user (e.g., a maintenance person) in advance.
In another exemplary embodiment, the homography matrix of the first camera may be obtained by calibration in advance.
For example, the homography matrix of the first camera can be obtained by acquiring world coordinates of the preset calibration point and pixel coordinates thereof in the image and then calculating a mapping relationship between the world coordinates and the pixel coordinates.
For example, the homography matrix of the first camera may be calculated by dynamically determining calibration points with a movable calibration device and locating those points with a positioning device mounted on it, so that the pixel coordinates of the calibration points are acquired from the images captured by the first camera and their world coordinates are acquired from the positioning device. The movable calibration device may be a vehicle, a drone, or a ground robot; the positioning device carried on it may include, but is not limited to, one or more of a real-time kinematic (RTK) positioning tag, an ultra-wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver, and the number of positioning devices may be one or more. In this way, calibration points can be determined dynamically while the calibration device moves, the homography matrix can be calibrated automatically without closing the road, and the adverse effect of manual measurement on calibration and positioning accuracy is avoided, which helps improve positioning accuracy.
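A sketch of how such correspondences might be accumulated from a moving calibration device follows; the tag detector is passed in as a caller-supplied function, since the disclosure does not specify one.

```python
import numpy as np
import cv2

def calibrate_from_moving_tag(frames, world_fixes, detect_tag_pixel):
    # Pair each frame's detected tag pixel with the simultaneous world fix
    # reported by the positioning device (RTK/UWB/GPS) on the moving device.
    pixel_pts, world_pts = [], []
    for frame, fix in zip(frames, world_fixes):
        uv = detect_tag_pixel(frame)          # caller-supplied detector
        if uv is not None:
            pixel_pts.append(uv)
            world_pts.append(fix)             # (xw, yw) of the tag
    # Four or more correspondences are needed to fit the homography matrix.
    H, _ = cv2.findHomography(np.float32(pixel_pts), np.float32(world_pts))
    return H
```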
When the electronic device receives an image from the first camera, the homography matrix of the first camera is known, so the electronic device can use it to locate an object in the image.
On the other hand, the electronic device may also receive images from the second camera, at which time the location of the traffic event may be achieved based on the second camera in the manner shown in fig. 6. As shown in fig. 6, the positioning method may include the steps of:
s602, receiving a first image from a first camera, where the first image includes a first reference point and a plurality of first calibration points.
S604, receiving a second image from a second camera, wherein the second image comprises a second reference point and a plurality of second calibration points.
And S606, determining the pixel coordinate of the second calibration point in the second image according to the attitude parameter of the first camera, the attitude parameter of the second camera, the pixel coordinate and the world coordinate of the first reference point and the world coordinate of the second calibration point.
And S608, determining a homography matrix of the second camera according to the pixel coordinates and the world coordinates of the second calibration points, wherein the homography matrix is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera.
S610, when a third image from the second camera is received, the first world coordinate of the target object in the third image is determined by using the homography matrix of the second camera.
The first world coordinates may also be referred to as target world coordinates.
In the positioning method shown in fig. 6, S602 to S608 may be regarded as a calibration process of the homography matrix of the second camera, and S610 may be regarded as a scene for positioning an object in a subsequently acquired image after the homography matrix of the second camera is calibrated.
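Under the assumptions of the earlier sketches, the flow S602 to S610 can be outlined as follows; pitch_of_second_camera, pixel_ordinate, pixel_abscissa, and pixel_to_world are the functions sketched above, and the sign handling of the lateral offset is simplified. This is a non-authoritative outline, not the patented implementation.

```python
import numpy as np
import cv2

def transfer_calibrate_and_locate(calib_world_pts, alpha1, H1, H2, x0,
                                  f, dx, dy, cx, cy, target_pixel):
    # S606: predict the pixel coordinates of each second calibration point
    # (whose world coordinates are known by construction).
    alpha2 = pitch_of_second_camera(alpha1, H1, H2, x0)
    pixel_pts = []
    for xw, yw in calib_world_pts:
        v = pixel_ordinate(xw, alpha2, H2, f, dy, cy)
        u = cx if yw == 0 else pixel_abscissa(xw, abs(yw), alpha2, H2, f, dx, cx)
        pixel_pts.append((u, v))
    # S608: fit the homography matrix of the second camera to the
    # pixel/world correspondences (at least four points required).
    H_matrix, _ = cv2.findHomography(np.float32(pixel_pts),
                                     np.float32(calib_world_pts))
    # S610: locate a target object observed by the second camera.
    return pixel_to_world(H_matrix, *target_pixel)
```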
The reference points and calibration points referred to in fig. 6 will now be described in detail.
On one hand, the reference points are used to indicate the attitude of the camera. In the embodiment shown in fig. 6, the following reference point condition is satisfied: the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image, and the position of the first reference point relative to the first camera is the same as the position of the second reference point relative to the second camera.
For example, suppose the world coordinates of the first reference point in the first world coordinate system are (a, b) and the world coordinates of the second reference point in the second world coordinate system are also (a, b), where the first world coordinate system is constructed with the ground projection of the first camera as its origin, the second world coordinate system is constructed with the ground projection of the second camera as its origin, and the coordinate axes of the two systems are parallel. In this case, the position of the first reference point relative to the first camera is the same as the position of the second reference point relative to the second camera. In addition, the first reference point falls within the field of view of the first camera, that is, the first image captured by the first camera includes the first reference point, whose pixel coordinates in the pixel coordinate system of the first image are (c1, d1). Similarly, the second reference point falls within the field of view of the second camera, that is, the second image captured by the second camera includes the second reference point, whose pixel coordinates in the pixel coordinate system of the second image are (c2, d2). Since the pixel coordinate systems of the images captured by the first and second cameras are defined in the same way, if c1 = c2 and d1 = d2, the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
When the aforementioned reference point condition is satisfied, the attitudes of the first camera and the second camera are the same. Therefore, the subsequent calibration based on the calibration points can exploit the fact that the cameras' attitudes are identical, as detailed below.
In the embodiment of the present disclosure, the number and the position of the first reference point and the second reference point may be preset in a self-defined manner in advance, and the embodiment of the present disclosure does not particularly limit this.
For example, one or more first fiducial points may be included in the first image and one or more second fiducial points may be included in the second image. In an actual scene, if a plurality of first reference points and a plurality of second reference points exist, the number of the first reference points and the number of the second reference points are the same and the first reference points and the second reference points are in one-to-one correspondence. In this way, a pair of the first reference point and the second reference point satisfies the aforementioned reference point condition (the pixel positions in the respective images are the same, and the relative positions with respect to the camera are the same).
For example, the reference point may be a preset ground marker, and the ground marker may be a marker carried in the scene, or a marker set and marked by the user.
In one possible embodiment, the first reference point may be the intersection of the base of another camera (for ease of description, referred to as the third camera) within the image capture field of view of the first camera with the ground, and correspondingly, the second reference point may be the intersection of the base of another camera (for ease of description, referred to as the fourth camera) within the image capture field of view of the second camera with the ground, wherein the relative position of the third camera with respect to the first camera is the same as the relative position of the fourth camera with respect to the second camera.
In this embodiment, the third camera may itself be a second camera, and the fourth camera may also be a second camera. For example, suppose cameras 1, 2, and 3 are arranged linearly in this order at equal spacing, the intersection of camera 2's base with the ground lies within the image capture field of view of camera 1, and the intersection of camera 3's base with the ground lies within the image capture field of view of camera 2. Assume camera 1 is the first camera, whose homography matrix has been obtained; cameras 2 and 3 can then each serve as a second camera in the positioning method of this scheme. In that case, the intersection of camera 2's base with the ground may be the first reference point for camera 1, and the intersection of camera 3's base with the ground may be the second reference point for camera 2.
Besides the intersection point of the camera base and the ground, other objects with identification functions in the scene can be used as reference points. For example, the end point of a lane line within the image-capturing field of view of the camera may also be used as a reference point; for another example, a sign in the image capturing field of view of the camera may be used as the reference point.
In addition to markers at fixed positions in the scene, in another possible embodiment the first reference point and the second reference point may be the positions of movable markers. The movable markers may have prominent colors or display effects, for example point light sources, fluorescent markers of any shape, or vehicles with an indicating function; the examples are not exhaustive. The user can move such a marker freely and choose a suitable position for carrying out the scheme.
The first reference point and the second reference point need to satisfy the reference point condition, but they may be indicated by different objects. For example, the first reference point may be the intersection with the ground of one of the lampposts within the image capture field of view of the first camera; in this case, a point-shaped marker with a prominent color may be placed at the corresponding position to serve as the second reference point.
In any of the foregoing embodiments, the relative positions of the first and second reference points with respect to their respective cameras in actual three-dimensional space remain the same. On this basis, if the reference point condition is not satisfied, the attitude of the first camera and/or the attitude of the second camera may be adjusted so that the pixel coordinates of the first reference point in the first image become the same as the pixel coordinates of the second reference point in the second image.
In a specific implementation scenario, adjusting the posture of a camera (the first camera and/or the second camera) may include, but is not limited to, adjusting the attitude angles of the camera, i.e., one or more of the roll angle, pitch angle, and yaw angle.
During adjustment, the posture of one of the first camera and the second camera can be adjusted using the other camera as a reference. For example, the attitude of the first camera is kept fixed, and the attitude of the second camera is adjusted until the aforementioned reference point condition is satisfied. Alternatively, the pixel coordinates of the reference points may be preset, and the postures of the first camera and the second camera adjusted so that the pixel coordinates of the first reference point and the second reference point coincide with the preset values.
Illustratively, fig. 7 shows a camera pose adjustment diagram. In fig. 7, the intersection with the ground of the nearest camera base within the camera's image capture field of view is used as the reference point, and its preset pixel coordinate is marked as point A. The posture of each camera can then be adjusted individually. As shown in fig. 7A, the actual image position of this intersection in the image received from the camera is denoted as point B; when point B does not coincide with point A, the reference point condition is not satisfied, and the camera posture is adjusted until point A coincides with point B, as shown in fig. 7B, at which point the reference point condition is satisfied.
In addition, the adjustment process may be performed automatically, or prompt information may be output to guide a user to perform the adjustment manually.
For example, in one possible embodiment, before the positioning method shown in fig. 6 is performed, image data from the first camera and the second camera may be received; if the pixel coordinates of the first reference point and the second reference point in their respective images differ, prompt information may be output to prompt the user to adjust the posture of the first camera and/or the second camera.
For another example, in another possible scenario, the camera is mounted on a motorized pan/tilt head and fixed to its base through the motorized pan/tilt head. In this case, the attitude of the camera can be adjusted automatically by the motorized pan/tilt head until the aforementioned reference point condition is satisfied, as sketched below.
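As an illustration only, the following is a minimal sketch of such an automatic adjustment loop. It is not taken from the patent: camera, ptz, and detect_reference_pixel() are hypothetical stand-ins for the camera feed, the pan/tilt control API, and a marker detector, and the step sign conventions are assumptions.

```python
def align_reference_point(camera, ptz, preset_uv, tol_px=2.0, step_deg=0.1):
    """Nudge the camera attitude until the detected reference point (point B)
    coincides with the preset pixel coordinate (point A) within tol_px."""
    while True:
        frame = camera.capture()
        u, v = detect_reference_pixel(frame)   # point B: detected intersection pixel (hypothetical detector)
        du, dv = preset_uv[0] - u, preset_uv[1] - v
        if abs(du) <= tol_px and abs(dv) <= tol_px:
            return                             # reference point condition satisfied
        # assumed sign conventions for the hypothetical pan/tilt API
        ptz.adjust_yaw(step_deg if du > 0 else -step_deg)
        ptz.adjust_pitch(step_deg if dv > 0 else -step_deg)
```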
The calibration points, on the other hand, are used to calibrate the homography matrix of the camera. The first image comprises a plurality of first calibration points, and the second image comprises a plurality of second calibration points. The number of the first calibration points is the same as that of the second calibration points, and the first calibration points and the second calibration points are in one-to-one correspondence; in a first calibration point and a second calibration point having a corresponding relationship, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera.
Different from the reference point, the calibration points are multiple in number. In a specific scene, at least 4 first calibration points and 4 second calibration points are needed; this requirement comes from the calibration of the homography matrix and is explained in detail below.
A calibration point is likewise a preset point. Similar to the reference point, it can be indicated by a fixed-position marker or a movable marker within the image acquisition range, which is not repeated here.
The positional relationship among the first calibration points (and, similarly, among the second calibration points) may also be preset by the user, which is not particularly limited in the embodiments of the present disclosure. For example, the first calibration points may be arranged in a rectangular array, in a circular array, or irregularly.
As before, both the reference point and the calibration points are preset points, and the two may coincide. In an exemplary embodiment, the first image may include a first reference point and N first calibration points, where the first reference point has the same world coordinates (in the first world coordinate system of the first camera) as one of the first calibration points. Correspondingly, the second image may include a second reference point and N second calibration points, where the second reference point has the same world coordinates (in the second world coordinate system of the second camera) as one of the second calibration points; and the pixel coordinates of the first reference point and the second reference point are the same.
As mentioned above, the homography matrix of the camera is used to represent the mapping relationship between the pixel coordinate system and the world coordinate system, and based on this, the pixel coordinates and the world coordinates of the second calibration points need to be acquired before the homography matrix of the second camera is acquired.
In one aspect, world coordinates of a second index point are obtained.
As described above, in the embodiment of the present disclosure, the world coordinate of the second calibration point is actually the world coordinate of the second calibration point in the second world coordinate system. The second world coordinate may be an absolute world coordinate or a relative world coordinate.
Since the relative position of the first calibration point with respect to the first camera is the same as the relative position of the corresponding second calibration point with respect to the second camera, the world coordinates of the second calibration point can be determined from the world coordinates of the corresponding first calibration point in the first image.
Specifically, the world coordinates of the second calibration point corresponding to the first calibration point may be determined based on the relative position of the first calibration point with respect to the first camera, the world coordinates of the first camera in the first world coordinate system, and the world coordinates of the second camera in the second world coordinate system.
As mentioned above, the first world coordinate system and the second world coordinate system may be the same or different, which leads to the following two cases:
first, the first world coordinate system is the same as the second world coordinate system, i.e., both are the same world coordinate system.
In this case, in the first world coordinate system, the world coordinates of the first camera, the world coordinates of the second camera, and the world coordinates of the first calibration point are known, and the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera, so that the world coordinates of the second calibration point (one corresponding to the first calibration point) can be calculated based on the relative positional relationship being equal.
For example, in the first world coordinate system, let the world coordinate of the first camera be (x1, y1), the world coordinate of the ith first calibration point be (x1i, y1i), the world coordinate of the second camera be (x2, y2), and the world coordinate of the ith second calibration point be (x2i, y2i). Then x2i satisfies x2i = x1i − x1 + x2, and y2i satisfies y2i = y1i − y1 + y2. That is, the world coordinates of the ith second calibration point may be written as (x1i − x1 + x2, y1i − y1 + y2), where i is an integer greater than 0.
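A minimal sketch of this translation, assuming the shared world coordinate system of the first case; the function name and the example coordinates are illustrative only:

```python
import numpy as np

def second_calibration_world_coords(first_pts, cam1_xy, cam2_xy):
    """Translate the first calibration points by the offset between the two
    camera positions to obtain the second calibration points' world coords."""
    first_pts = np.asarray(first_pts, dtype=float)   # (N, 2) array of (x1i, y1i)
    offset = np.asarray(cam2_xy, dtype=float) - np.asarray(cam1_xy, dtype=float)
    return first_pts + offset                        # (x1i - x1 + x2, y1i - y1 + y2)

# Example: camera 1 at (0, 0), camera 2 at (50, 0), two calibration points.
pts2 = second_calibration_world_coords([(10, 2), (13, -2)], (0, 0), (50, 0))
print(pts2)  # -> [[60.  2.] [63. -2.]]
```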
Second, the first world coordinate system is different from the second world coordinate system.
In this scenario, the first world coordinate system and the second world coordinate system may be defined identically.
In an exemplary embodiment, the origin of the first world coordinate system is the ground projection of the first camera, and the origin of the second world coordinate system is the ground projection of the second camera; the horizontal axis of the first world coordinate system is parallel to the horizontal axis of the second world coordinate system, and the vertical axis of the first world coordinate system is parallel to the vertical axis of the second world coordinate system.
In another exemplary embodiment, the origin of the first world coordinate system is the intersection point of the base where the first camera is located and the ground, and the origin of the second world coordinate system is the intersection point of the base where the second camera is located and the ground; the horizontal axis of the first world coordinate system is parallel to the horizontal axis of the second world coordinate system, and the vertical axis of the first world coordinate system is parallel to the vertical axis of the second world coordinate system.
When the first world coordinate system and the second world coordinate system are different but identically defined, then for a corresponding pair of first and second calibration points, the world coordinates of the first calibration point in the first world coordinate system are the same as the world coordinates of the second calibration point in the second world coordinate system. In this case, the world coordinates of the first calibration point in the first world coordinate system may be directly taken as the world coordinates of the corresponding second calibration point. This effectively reduces the amount of data processing: the world coordinates of the second calibration points are obtained without complex coordinate conversion, which helps improve calibration efficiency and positioning efficiency.
On the other hand, the pixel coordinates of the second index point are acquired.
When acquiring the pixel coordinates of the second calibration point, the pitch angle of the second camera may be first determined according to the attitude parameters of the first camera, the attitude parameters of the second camera, and the world coordinates of the first reference point, so that the pixel coordinates of the second calibration point in the second image are determined according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera, and the world coordinates of the second calibration point (the acquiring method may refer to the foregoing, and is not repeated here).
Specifically, fig. 8 is a schematic diagram of the principle for acquiring the pitch angle of the second camera. For simplicity of description, a world coordinate system is constructed for each camera: its origin is the projection of the camera's optical center onto the ground, its horizontal axis (the X axis) is the ground projection line of the optical axis, and its vertical axis (the Y axis) is perpendicular to the horizontal axis within the horizontal plane. In this way, the first world coordinate system of the first camera and the second world coordinate system of the second camera have different origins but parallel coordinate axes. For simplicity of illustration, the X axes of the two coordinate systems are drawn as coincident in fig. 8.
Also for simplicity of explanation, assume that the first reference point B1 lies on the ground projection line of the optical axis of the first camera (or its extension), that the aforementioned reference point condition is satisfied between the first reference point B1 and the second reference point B2, and that their coordinates in the respective world coordinate systems are the same; consequently, the second reference point B2 lies on the ground projection line of the optical axis of the second camera (or its extension). As shown in fig. 8, both B1 and B2 then fall on the X axis. The world coordinate of the first reference point in the first world coordinate system can thus be expressed as (x10, 0), the world coordinate of the second reference point in the second world coordinate system as (x20, 0), and x10 = x20.
Based on the imaging principle of the camera, the optical axis is the central line of the light beam passing through the center point (i.e., the optical center) of the camera lens; the optical axis is therefore perpendicular to the image formed by the camera, and the intersection of the optical axis with the image is the center point of the image. As shown in fig. 8, the optical axis of the first camera is the line through the points O1, O1', and M1, where point O1 is the optical center of the first camera, point O1' is the center point of the first image captured by the first camera, point M1 is the intersection of the optical axis with the X axis, and point O1' can be regarded as the pixel position of point M1 in the first image. Similarly, the optical axis of the second camera is the line through the points O2, O2', and M2, where point O2 is the optical center of the second camera, point O2' is the center point of the second image captured by the second camera, point M2 is the intersection of the optical axis with the X axis, and point O2' can be regarded as the pixel position of point M2 in the second image.
As before, the pixel coordinate systems of the first image and the second image use the same definition. Illustratively, in the scene shown in fig. 8, taking the first image as an example, the origin of the pixel coordinate system is the upper-left corner of the first image, the horizontal axis (the U axis) points right, and the vertical axis (the V axis) points down. The second image is defined in the same way and is not repeated. Since the two cameras have the same internal parameters, the center point O1' of the first image and the center point O2' of the second image have the same pixel coordinates, both expressed as (cx, cy).
As shown in fig. 8, the pixel position of the first reference point B1 in the first image is denoted B1'. Since the first reference point B1 and the point M1 both lie on the X axis, the pixel abscissa of point B1' is the same as that of point O1', and for convenience the pixel coordinate of B1' is written as (cx, v10). Similarly, the pixel position of the second reference point B2 in the second image is denoted B2'; since B2 and M2 both lie on the X axis, the pixel abscissa of point B2' is the same as that of point O2', and the pixel coordinate of B2' is written as (cx, v20).
When the aforementioned reference point condition is satisfied, the pixel coordinates of the first reference point B1 (in fact, of point B1') and of the second reference point B2 (in fact, of point B2') are the same. That is, v20 = v10.
Based on the reference point condition, the angle γ1 equals the angle γ2 in fig. 8, where γ1 is the angle between the line through points B1, B1', and O1 and the optical axis of the first camera, and γ2 is the angle between the line through points B2, B2', and O2 and the optical axis of the second camera.
The angle γ1 is related to the pitch angle of the first camera (denoted α1 in fig. 8) and the angle β1, and specifically satisfies γ1 = α1 − β1. The pitch angle α1 can be obtained from the attitude parameters of the first camera; the angle β1 is the angle between the line through points B1, B1', and O1 and the X axis.
As shown in fig. 8, the angle β1 is an angle of the triangle formed by points O1, O10, and B1, where point O10 is the origin of the first world coordinate system and is also the ground projection of point O1. In other words, the side length between O1 and O10 is the height (or mounting height) of the first camera, hereafter denoted H1. In this triangle, the trigonometric relationship tan β1 = H1/x10 holds, so β1 = arctan(H1/x10). Further, γ1 = α1 − arctan(H1/x10).
Similarly, the angle β2 is an angle of the triangle formed by points O2, O20, and B2, where point O20 is the origin of the second world coordinate system and is also the ground projection of point O2. In other words, the side length between O2 and O20 is the height (or mounting height) of the second camera, hereafter denoted H2. In this triangle, tan β2 = H2/x20 holds, so β2 = arctan(H2/x20). Further, γ2 = α2 − arctan(H2/x20).
As before, γ1 = γ2, so the following relationship holds: α1 − arctan(H1/x10) = α2 − arctan(H2/x20). The pitch angle α2 of the second camera can therefore be expressed as: α2 = α1 − arctan(H1/x10) + arctan(H2/x20).
Thus, the pitch angle of the second camera (α2) can be determined from the attitude parameters of the first camera (α1 and H1), the attitude parameter of the second camera (H2), and the world coordinate of the first reference point (x10, with x10 = x20).
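A minimal sketch of this pitch-angle relationship, with assumed example values (the function name and numbers are illustrative only):

```python
import math

def second_camera_pitch(alpha1, H1, x10, H2, x20):
    """alpha2 = alpha1 - arctan(H1/x10) + arctan(H2/x20), with x10 == x20."""
    return alpha1 - math.atan(H1 / x10) + math.atan(H2 / x20)

# Assumed values: both cameras 6 m high, reference point 30 m ahead,
# first camera pitched 15 degrees.
alpha2 = second_camera_pitch(math.radians(15.0), 6.0, 30.0, 6.0, 30.0)
print(math.degrees(alpha2))  # ≈ 15.0: equal heights and distances leave the pitch unchanged
```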
It should be noted that the side length between the optical center O2 and the center point O2' of the second image is in fact the focal length f of the second camera. Converted into the pixel coordinate system, this side length can be expressed as f/dy, where f denotes the focal length of the second camera and dy denotes the physical size of a single pixel (so that f/dy is the focal length in pixel units). In the embodiment shown in fig. 8, this side length can also be obtained through a trigonometric function.
Specifically, in the triangle formed by the optical center O2, the point O2', and the point B2', the following trigonometric relationship is satisfied:

tan γ2 = (v20 − cy) / (f/dy)
Based on the foregoing relationship, the following can further be obtained:

f/dy = (v20 − cy) / tan γ2
If the reference points of the first camera and the second camera satisfy the reference point condition, then γ1 = γ2 and v10 = v20, and the relationship may be expressed as:

f/dy = (v10 − cy) / tan(α1 − arctan(H1/x10))
after the pitch angle of the second camera is determined, the pixel coordinates of the second calibration point in the second image can be determined according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
At this time, reference may be made to the schematic diagram of the calibration principle shown in fig. 9. The scene shown in fig. 9 is the same as fig. 8, and the first world coordinate system, the second world coordinate system, the pixel coordinate system, and the definitions of the same points are the same as fig. 8, and are not repeated here.
As before, the second image may include a plurality of second calibration points. For convenience of explanation, G2i denotes the ith second calibration point below; its world coordinates in the second world coordinate system may be expressed as G2i(x2i, y2i). The pixel position of the ith second calibration point in the second image is denoted G2i', and the pixel coordinate of G2i' can be written as (u2i, v2i).
For brevity, fig. 9 shows the case where the ith second calibration point lies on the ground projection line of the optical axis of the second camera (or its extension), i.e., G2i falls on the X axis. In this case the Y-axis component of G2i is 0, i.e., y2i = 0, and the world coordinates of G2i in the second world coordinate system may be expressed as G2i(x2i, 0). As a result, the pixel abscissa of the second calibration point in the second image (point G2i') is the same as the pixel abscissa of the center point of the second image (point O2'). That is, the pixel coordinate of G2i' can be written as (cx, v2i).
As shown in fig. 9, the point G2i', the point O2', and the optical center O2 form a right triangle. For convenience of description, this right triangle is referred to as the first triangle, whose first side is perpendicular to its second side.
The first side is the side between the center point O2' of the second image and the point G2i'; since the pixel coordinates of both endpoints are known, the length of the first side in the pixel coordinate system is v2i − cy.
The second side is the side between the optical center O2 of the second camera and the center point O2' of the second image. As shown in fig. 9, the second side lies along the optical axis of the second camera, and its length is related to the internal parameters of the camera: specifically, it is the focal length f of the second camera. For consistency of units, this length is converted into the pixel coordinate system as f/dy, where f denotes the focal length of the second camera and dy denotes the physical size of a single pixel.
In addition, for convenience of explanation, the line determined by the point G2i' and the optical center O2 is referred to as the first line; as shown in fig. 9, the first line also passes through the second calibration point G2i. The angle between the first line and the optical axis of the second camera is referred to as the first angle, i.e., γ2i in fig. 9.
In this way, the first triangle can be processed by using a trigonometric function to obtain the pixel ordinate of the second calibration point in the second image.
Specifically, the first angle of the first triangle may be determined based on the world coordinates of the second calibration point, the height of the second camera, and the pitch angle of the second camera.
As shown in fig. 9, the first angle γ2i is related to the pitch angle of the second camera (α2) and the second angle (β2i) as follows: γ2i = α2 − β2i. The second angle β2i can be obtained, through a trigonometric function, from the triangle formed by the second calibration point G2i, the optical center O2, and the origin O20 of the second world coordinate system:

β2i = arctan(H2 / x2i)
Thus, the first angle γ2i can be expressed as:

γ2i = α2 − arctan(H2 / x2i)
Further, substituting the expression for α2 obtained in fig. 8 above gives:

γ2i = α1 − arctan(H1/x10) + arctan(H2/x20) − arctan(H2/x2i)
then, the pixel coordinates of the second calibration point in the second image are determined based on the trigonometric function relationship satisfied between the first side and the second side of the first triangle and the first included angle.
In the embodiment shown in fig. 9, for the ith second calibration point G2i, the first side, the second side, and the first angle γ2i satisfy the following relationship:

tan γ2i = (v2i − cy) / (f/dy)
Thus, the pixel ordinate v2i of the ith second calibration point G2i satisfies any one of the following formulas:

v2i = cy + (f/dy)·tan γ2i

alternatively,

v2i = cy + (f/dy)·tan(α2 − arctan(H2/x2i))

alternatively,

v2i = cy + (f/dy)·tan(α1 − arctan(H1/x10) + arctan(H2/x20) − arctan(H2/x2i))
Besides, the side length f/dy of the second side can be expressed in other ways. For example, from the embodiment shown in fig. 8 above, f/dy may also be expressed as:

f/dy = (v10 − cy) / tan(α1 − arctan(H1/x10))
Then the pixel ordinate v2i of the ith second calibration point G2i satisfies any of the following formulas:

v2i = cy + [(v10 − cy) / tan(α1 − arctan(H1/x10))]·tan γ2i

alternatively,

v2i = cy + [(v10 − cy) / tan(α1 − arctan(H1/x10))]·tan(α2 − arctan(H2/x2i))

alternatively,

v2i = cy + [(v20 − cy) / tan(α1 − arctan(H1/x10))]·tan(α2 − arctan(H2/x2i))

alternatively,

v2i = cy + [(v10 − cy) / tan(α1 − arctan(H1/x10))]·tan(α1 − arctan(H1/x10) + arctan(H2/x20) − arctan(H2/x2i))
These variants are not exhaustively listed here. To avoid ambiguity, the meanings of the symbols in the above formulas are restated: v2i represents the pixel ordinate of the ith second calibration point in the second image; f denotes the focal length of the second camera; dy denotes the physical size of a single pixel; α2 represents the pitch angle of the second camera; H2 denotes the height of the second camera; x2i denotes the world abscissa of the ith second calibration point in the second world coordinate system; v10 is the pixel ordinate of the first reference point and v20 is the pixel ordinate of the second reference point, with v10 = v20; cy is the pixel ordinate of the center point of the second image (or the first image); α1 represents the pitch angle of the first camera; H1 denotes the height of the first camera; and x10 is the world abscissa of the first reference point in the first world coordinate system, x20 is the world abscissa of the second reference point in the second world coordinate system, with x10 = x20.
Then, in the embodiment shown in fig. 9, for a second calibration point that falls on the ground projection line of the optical axis of the second camera (or its extension, i.e., the X axis), the pixel coordinate of the ith second calibration point in the second image can be determined in the foregoing manner as (cx, v2i), with v2i given by the expressions above.
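A minimal sketch of this pixel-ordinate computation, assuming example values; following the fig. 8 derivation, the focal length in pixels (f/dy) is recovered from the first camera's reference point rather than supplied directly:

```python
import math

def pixel_ordinate(alpha1, H1, x10, H2, x20, x2i, v10, cy):
    f_over_dy = (v10 - cy) / math.tan(alpha1 - math.atan(H1 / x10))  # f/dy from fig. 8
    alpha2 = alpha1 - math.atan(H1 / x10) + math.atan(H2 / x20)      # second camera pitch
    gamma2i = alpha2 - math.atan(H2 / x2i)                           # first angle
    return cy + f_over_dy * math.tan(gamma2i)                        # v2i

# Assumed values: pitch 15 degrees, heights 6 m, reference point 30 m ahead,
# calibration point 40 m ahead, v10 = 800 px, cy = 540 px.
v2i = pixel_ordinate(math.radians(15), 6.0, 30.0, 6.0, 30.0, 40.0, 800.0, 540.0)
print(round(v2i, 1))
```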
However, in a real scene, the second calibration point may also fall outside the ground projection line of the optical axis of the second camera (or its extension, i.e., the X axis). In this case, the pixel coordinate of the ith second calibration point in the second image is (u2i, v2i).
As an example, reference may be made to the scenario shown in fig. 10, where the second calibration point G2i has both an X-axis component and a Y-axis component.
In this case, the pixel ordinate of the second calibration point G2i can still be calculated in the manner shown in fig. 9. As shown in fig. 10, the calculation of v2i can be carried out with the help of a first reference point and a second reference point.
The pixel ordinate of the first reference point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first reference point is the same as the pixel abscissa of the center point of the second image. As shown in fig. 10, the first reference point is denoted P1, and its pixel coordinate in the pixel coordinate system may be written as (cx, v2i).
The second reference point is located in the second world coordinate system; its horizontal-axis component is the same as that of the second calibration point, and its vertical-axis component is zero. As shown in fig. 10, the second reference point is denoted P2, and its world coordinate in the second world coordinate system can be written as (x2i, 0).
Based on this, a comparison of fig. 9 and fig. 10 shows that the first reference point P1 in fig. 10 corresponds to the point G2i' in fig. 9, and the second reference point P2 in fig. 10 corresponds to the point G2i in fig. 9. In other words, fig. 9 is a special case of the scene shown in fig. 10: when the second calibration point G2i lies on the ground projection line of the optical axis of the second camera, G2i coincides with the second reference point P2, and its pixel position G2i' coincides with the first reference point P1.
In the embodiment shown in fig. 10, the pixel ordinate of the ith second calibration point in the second image can still be calculated from the trigonometric relationship of the first triangle, now determined by the optical center O2 of the second camera, the center point O2' of the second image, and the first reference point P1; the first side of the first triangle is the side between the center point O2' and the first reference point P1. The first angle γ2i is still the angle between the optical axis of the second camera and the first line, where the first line is the line determined by the first reference point P1 and the optical center O2 of the second camera.
In summary, for any second calibration point within the image capture field of view of the second camera, whether or not it falls on the ground projection line of the optical axis of the camera (or its extension), the pixel ordinate of the second calibration point satisfies the aforementioned expressions for v2i, which are not repeated here.
As shown in fig. 10, when the second calibration point is located outside the ground projection line of the optical axis of the second camera, the pixel abscissa of the second calibration point is different from the pixel abscissa of the second image center point, and in this case, the pixel abscissa of the second calibration point may be determined based on the triangle similarity theorem.
As shown in fig. 10, the second triangle and the third triangle satisfy the triangle similarity theorem. The second triangle is formed by the optical center O2 of the second camera, the second calibration point G2i, and the second reference point P2; the third triangle is formed by the optical center O2 of the second camera, the pixel position G2i' of the second calibration point in the second image, and the first reference point P1.
Since the second triangle is similar to the third triangle, the first ratio is equal to the second ratio. The first ratio is the ratio between the third side, in the second triangle, and the fourth side, in the third triangle; the third side is the side between the optical center O2 of the second camera and the second reference point P2 (denoted O2P2), and the fourth side is the side between the optical center O2 of the second camera and the first reference point P1 (denoted O2P1).
The second ratio is the ratio between the fifth side, in the second triangle, and the sixth side, in the third triangle; the fifth side is the side between the second reference point P2 and the second calibration point G2i (denoted G2iP2), and the sixth side is the side between the first reference point P1 and the pixel position G2i' of the second calibration point in the second image (denoted G2i'P1).
Then, the second calibration point shown in fig. 10 satisfies the following triangle similarity relationship:

O2P2 / O2P1 = G2iP2 / G2i'P1
based on this, as can be seen from fig. 10, point G2iThe world coordinate in the second world coordinate system is (x)2i,y2i) The world coordinate of P2 in the second world coordinate system is (x)2i0), thus, G2iDistance y of P22i. That is, G2iThe distance P2 is effectively the vertical axis component of the second calibration point in the second world coordinate system. Based on the fact that the internal parameters of the first camera and the second camera are the same, and the relative positions of the paired first calibration points and the paired second calibration points relative to the respective cameras are the same, G2iThe distance P2 can be calculated from the world coordinates of the ith first index point in the first world coordinate system. For example, when the first world coordinate system and the second world coordinate system satisfy the setting shown in fig. 8, G2iThe distance P2 is the longitudinal axis of the first calibration point in the first world coordinate systemAnd (4) components. Subsequently, for convenience of description, the distance between the first reference point and the second calibration point is denoted by L1.
The pixel coordinate of point G2i' in the pixel coordinate system of the second image is (u2i, v2i), and the pixel coordinate of P1 in the pixel coordinate system of the second image is (cx, v2i); thus, the length of G2i'P1 is u2i − cx.
The distance between the optical center O2 and the second reference point P2 (i.e., the length of O2P2) can be calculated by the Pythagorean theorem in the fourth triangle, which is formed by the optical center O2, the second reference point P2, and the origin O20 of the second world coordinate system. As shown in fig. 10, the side O2O20 is perpendicular to the X axis, so the fourth triangle is a right triangle satisfying the Pythagorean theorem:

O2P2² = O2O20² + O20P2²
The length of O2O20 is simply the height of the second camera (H2), and the length of O20P2 is the X-axis component of the second reference point P2, i.e., x2i. Then:

O2P2 = √(H2² + x2i²)
the distance between the optical center O2 and the first reference point P1 (i.e., O2P1) can be calculated by the pythagorean theorem of the fifth triangle. Wherein the fifth triangle is composed of the optical center O2, the first reference point P1, and the center point O2' of the second image. At this time, as shown in fig. 10, the side O2O 2' is the optical axis, perpendicular to the second image, and the fifth triangle is a right-angled triangle, satisfying the pythagorean theorem:
Figure BDA0002543792160000194
the length of O2O2 'is actually the focal length (f) of the second camera, and the length of O2' P1 can be expressed in various ways. In one possible embodiment, the length of 02 'P1 is the difference between the V-axis component of the second reference point P2 and the V-axis component of the center point O2'. In another possible embodiment, the length of 02' P1 can be represented by the tan trigonometric function of the first included angle γ 2 i. At this time, the process of the present invention,
Figure BDA0002543792160000201
then, the length of O2' P1 can be expressed as: f tan γ 2 i. The calculation manner of the first included angle γ 2i can refer to the foregoing, and is not repeated here. Thus, it can specifically calculate
Figure BDA0002543792160000202
Then, after substituting the expressions for each side length into the similarity relationship, the following can be obtained:

u2i = cx + (L1·f) / (cos γ2i · √(H2² + x2i²))
wherein u2i represents the pixel abscissa of the ith second calibration point in the second image; L1 represents the distance between the second reference point P2 and the second calibration point; f denotes the focal length of the second camera (or the first camera); H2 denotes the height of the second camera; x2i represents the world abscissa of the ith second calibration point in the second world coordinate system; cx represents the pixel abscissa of the center point of the second image (or the first image); and γ2i denotes the first angle, i.e., the angle between the optical axis of the second camera and the first line:

γ2i = α1 − arctan(H1/x10) + arctan(H2/x20) − arctan(H2/x2i)

wherein α1 represents the pitch angle of the first camera; H1 denotes the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
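A minimal sketch of this pixel-abscissa formula, continuing the assumed values of the earlier sketches. Here f_px is an assumption: the focal length is taken as already expressed in pixel units (the patent's formula uses f directly), and L1 is the Y-axis component of the calibration point.

```python
import math

def pixel_abscissa(alpha1, H1, x10, H2, x20, x2i, L1, f_px, cx):
    gamma2i = (alpha1 - math.atan(H1 / x10)
               + math.atan(H2 / x20) - math.atan(H2 / x2i))              # first angle
    return cx + (L1 * f_px) / (math.cos(gamma2i) * math.hypot(H2, x2i))  # u2i

# Assumed values: calibration point 40 m ahead and 3 m off-axis (L1 = 3),
# focal length 1000 px, image center abscissa cx = 960 px.
u2i = pixel_abscissa(math.radians(15), 6.0, 30.0, 6.0, 30.0, 40.0, 3.0, 1000.0, 960.0)
print(round(u2i, 1))
```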
In summary, for the ith second calibration point in the image capture field of view of the second camera, its world coordinate in the second world coordinate system is (x2i, y2i), and its pixel coordinate in the second image is (u2i, v2i).
The world coordinate (x2i, y2i) of the ith second calibration point can be determined from the world coordinate (x1i, y1i) of the ith first calibration point.
As for the pixel coordinate (u2i, v2i) of the ith second calibration point: on the one hand, regardless of whether the ith second calibration point falls on the ground projection line of the optical axis of the second camera (or its extension), the pixel ordinate v2i can be calculated by the aforementioned expressions for v2i. On the other hand, when the ith second calibration point falls on the ground projection line of the optical axis of the second camera (or its extension), the pixel abscissa u2i is the same as the pixel abscissa of the center point of the second image, i.e., u2i = cx; otherwise, when it falls outside that line, the pixel abscissa u2i can be calculated by the aforementioned expression for u2i.
Then, in the above manner, the pixel coordinates and the world coordinates of the plurality of second calibration points in the second image are respectively obtained, and the homography matrix of the second camera can be determined accordingly.
Specifically, for a certain second camera, the pixel coordinates and world coordinates of any one object (e.g., the second calibration point) may satisfy the following formula:
zc·[u, v, 1]ᵀ = H·[xw, yw, 1]ᵀ

wherein (xw, yw) are the world coordinates of the object (e.g., a second calibration point), (u, v) are the pixel coordinates of the object (e.g., a second calibration point), and

H = [ h1,1  h1,2  h1,3
      h2,1  h2,2  h2,3
      h3,1  h3,2  h3,3 ]

is the homography matrix of the second camera, where hi,j are the matrix parameters of the homography matrix of the second camera (the subscripts i and j distinguish the matrix parameters and take values from 1 to 3), and zc is a three-dimensional coordinate parameter (a scale factor).
Based on this, the pixel coordinates and world coordinates of the second calibration points can be substituted into the formula, and through solving the equation system, each matrix parameter in the homography matrix can be obtained, so that the homography matrix of the second camera is obtained.
In this formula, 8 unknown parameters are involved (the homography matrix is determined only up to scale), and each calibration point contributes two equations; therefore at least 4 second calibration points (4 or more) are required to calibrate the homography matrix of the second camera by this scheme.
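A minimal direct-linear-transform (DLT) sketch of this calibration step: each calibration point contributes two linear equations in the parameters h1,1..h3,3, and the stacked system is solved by SVD. The point values below are assumed for illustration only.

```python
import numpy as np

def solve_homography(world_pts, pixel_pts):
    rows = []
    for (xw, yw), (u, v) in zip(world_pts, pixel_pts):
        # cross-multiplied form of zc*(u, v, 1) = H*(xw, yw, 1)
        rows.append([xw, yw, 1, 0, 0, 0, -u * xw, -u * yw, -u])
        rows.append([0, 0, 0, xw, yw, 1, -v * xw, -v * yw, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)       # null vector of A = last row of V^T
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                # fix the overall scale so h3,3 = 1

world = [(10.0, -2.0), (10.0, 2.0), (20.0, -3.0), (20.0, 3.0)]   # (xw, yw), meters
pixel = [(420.0, 700.0), (1500.0, 700.0), (700.0, 520.0), (1220.0, 520.0)]
H = solve_homography(world, pixel)    # maps (xw, yw, 1) to zc*(u, v, 1)
```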
Once the homography matrix of the second camera has been obtained, when the second camera acquires an image (for convenience of description, referred to as a third image), the first world coordinate of the target object in the second world coordinate system can be calculated by acquiring the target pixel coordinate of the target object in the third image and substituting it into the above formula. The target object is thereby positioned.
In this process, if the second world coordinate system is an absolute world coordinate system, such as a longitude and latitude coordinate system, the absolute world coordinates (e.g., longitude and latitude) of the target object in the third image can be directly located based on the homography matrix of the second camera.
Alternatively, if the second world coordinate system is a relative world coordinate system, such as the case shown in fig. 8, the world coordinates of the target object in the third image in the second world coordinate system can be obtained based on the homography matrix of the second camera. In this case, the world coordinates are directly used as the first world coordinates. Alternatively, the world coordinates of the target object in the second world coordinate system may be converted based on a conversion relationship between the second world coordinate system and the absolute world coordinate system to obtain the world coordinates of the target object in the absolute world coordinate system as the first world coordinates.
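A minimal sketch of this positioning step: since the formula above maps world coordinates to pixel coordinates, locating a target from its pixel coordinate uses the inverse of the homography. H is the matrix from the previous sketch, and the pixel coordinate is an assumed example.

```python
import numpy as np

def locate(H, u, v):
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])   # back-project the pixel
    return w[0] / w[2], w[1] / w[2]                # divide out the scale zc

xw, yw = locate(H, 960.0, 600.0)   # first world coordinate of the target
```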
In the embodiment of the present disclosure, when the target object in the third image is a target vehicle, if the target vehicle has a communication connection with the electronic device executing this scheme, the electronic device may, after locating the target vehicle, send the first world coordinate to the target vehicle. The target vehicle receives the first world coordinate and can thereby learn its own position; further, the target vehicle can perform automatic driving and obstacle avoidance based on the received first world coordinate.
The embodiment of the present disclosure further provides another positioning method, please refer to the flowchart shown in fig. 11, and the method includes the following steps:
S1102, receiving a third image acquired by the second camera, where the third image includes the target object.
S1104, a first pixel coordinate of the target object in the third image is acquired.
S1106, processing the first pixel coordinate by using the homography matrix of the second camera to obtain a first world coordinate of the target object.
It should be noted that, in the embodiment shown in fig. 11, the execution subject (electronic device) of the positioning method may directly call the homography matrix of the second camera to realize the positioning of the target object in the image.
In this embodiment, the homography matrix of the second camera may be stored in any location readable by the electronic device, including but not limited to: the memory of the electronic device, another electronic device (for example, a camera) communicatively connected to the electronic device, a storage medium (including physical storage and cloud storage) to which the electronic device has data access authority, and the like.
In this way, when executing this positioning method, the electronic device may read the homography matrix of the second camera and then process the first pixel coordinate with the homography matrix, so as to obtain the first world coordinate of the target object by calculation.
In the embodiment shown in fig. 11, the homography matrix of the second camera can be calibrated in the manner shown in S602-S608 in fig. 6, which is briefly described below, and for inexhaustible points, reference can be made to the foregoing description.
In one possible embodiment as shown in fig. 11, a homography matrix is used to describe the mapping relationship between the pixel coordinate system of the camera and the world coordinate system. The homography matrix is determined based on the pixel coordinates and world coordinates of a plurality of second calibration points, the pixel coordinates of the second calibration points being determined based on the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration points.
The first camera and the second camera have the same internal reference; the first image is from a first camera, and the first image comprises a first reference point and a plurality of first calibration points; the second image is from a second camera, the second image including a second fiducial point and a plurality of second index points.
The pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera.
The first calibration points correspond to the second calibration points one by one; in a first calibration point and a second calibration point having a corresponding relationship, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera.
As before, in this positioning method, it is also possible to adjust the attitude of the first camera, and/or the attitude of the second camera, so that the aforementioned reference point condition is satisfied. That is, when the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera, the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In the embodiment of the present disclosure, in the process of executing the positioning method of any one of the above embodiments, the homography matrix of the second camera may also be corrected based on an error condition between the first world coordinate and the second world coordinate of the target object. The second world coordinate is a world coordinate of the target object in the second world coordinate system acquired by another method, and the first world coordinate is a world coordinate of the target object in the second world coordinate system acquired by any one of the above embodiments.
Illustratively, the second world coordinate may be obtained by one or more of a Global Positioning System (GPS), a network communication query, and the like.
In this way, in one possible embodiment, the second world coordinate of the target object may be obtained by means other than the present scheme, and the homography matrix of the second camera is corrected when the error between the first world coordinate and the second world coordinate is greater than a preset threshold. The error can be represented by the distance between the first world coordinate and the second world coordinate; in that case the preset threshold is a distance threshold, whose specific value can be preset as desired.
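A minimal sketch of this correction check, assuming recalibrate() stands in for re-running the calibration of S602-S608; the threshold value is an example only.

```python
import math

def check_homography(first_xy, second_xy, threshold_m=1.0):
    error = math.dist(first_xy, second_xy)   # distance between the two fixes
    if error > threshold_m:
        recalibrate()                        # hypothetical: redo the calibration
    return error
```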
In any of the foregoing embodiments, the electronic device may further receive a first message, the first message being used to implement pose configuration for the cameras, and thus, the electronic device may determine pose parameters of the first camera and/or the second camera based on the first message.
In a specific implementation scenario, the first message may come from a user. In this case, the electronic device has a man-machine interface with the user, and thus can directly receive a message from the user.
Alternatively, the first message may be forwarded from another electronic device. For ease of distinction, the electronic device in the foregoing may be regarded as a first electronic device, and the first message received by the first electronic device may come from a second electronic device. Specifically, the second electronic device has a human-computer interaction interface and can receive instruction information (for example, the first message) from a user through touch control, voice control, a sensor, or the like, and transmit it to the first electronic device.
In any of the foregoing embodiments, the electronic device may further receive a second message, where the second message indicates to acquire the homography matrices of one or more second cameras, and thus, when the second camera indicated by the second message satisfies the preset calibration condition, the electronic device determines the homography matrix of the second camera. In this embodiment, the second message corresponds to a calibration instruction for instructing calibration of the homography matrix of the second camera, and after receiving the second message, the electronic device (or referred to as the first electronic device) executes the calibration processes shown in S602 to S608 if it is determined that the preset calibration condition is satisfied.
The preset calibration condition may be that the second camera is working normally. Further, the preset calibration condition may also include the aforementioned reference point condition. In that case, if the calibration condition is not satisfied, the posture of the first camera and/or the second camera is adjusted until the preset calibration condition is satisfied, at which point the calibration process begins to execute.
In addition, similar to the first message, the second message may come directly from the user or may come from the second electronic device.
For example, fig. 12 is a schematic diagram of another positioning system provided in the embodiment of the present disclosure, and as shown in fig. 12, the positioning system includes:
a first electronic device 1210 configured to perform the positioning method of any one of the foregoing embodiments;
a camera 1220 for capturing images, the camera including a first camera and one or more second cameras;
the second electronic device 1230, configured to receive instruction information from a user and send the instruction information to the first electronic device 1210;
at this time, the first electronic device 1210 is further configured to perform the action indicated by the instruction information.
In the positioning system shown in fig. 12, the first electronic device 1210 may be provided integrally with, or separately from, the second electronic device 1230.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples; embodiments of the present application may perform other operations or variations of the various operations. Further, the steps may be performed in an order different from that presented in the above-described embodiments, and it is possible that not all of the operations in the above-described embodiments need to be performed.
The embodiment of the disclosure also provides an electronic device.
Illustratively, fig. 13 shows a schematic diagram of an electronic device. As shown in fig. 13, the electronic device 1300 includes a transceiver module 1310, a calibration module 1320, and a positioning module 1330.
The transceiver module 1310 is configured to receive a first image from a first camera, where the first image includes a first reference point and a plurality of first calibration points;
a transceiver module 1310, further configured to receive a second image from a second camera, the second image including a second reference point and a plurality of second calibration points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
a calibration module 1320, configured to determine a pixel coordinate of the second calibration point in the second image according to the pose parameter of the first camera, the pose parameter of the second camera, the pixel coordinate and world coordinate of the first reference point, and the world coordinate of the second calibration point; the first calibration points correspond to the second calibration points one by one; in a first calibration point and a second calibration point which have corresponding relations, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera;
the calibration module 1320 is further configured to determine a homography matrix of the second camera according to the pixel coordinates and the world coordinates of the plurality of second calibration points; the homography matrix is used for describing a mapping relation between a pixel coordinate system and a world coordinate system of the camera;
the positioning module 1330 is configured to determine, when a third image from the second camera is received, first world coordinates of the target object in the third image using the homography matrix of the second camera.
In a possible embodiment, the calibration module 1320 is specifically configured to: determining a pitch angle of the second camera according to the attitude parameters of the first camera, the attitude parameters of the second camera and the world coordinates of the first reference point; determining the world coordinate of a second calibration point corresponding to the first calibration point according to the world coordinate of the first calibration point in the first image; and determining the pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
In another possible embodiment, the calibration module 1320 is specifically configured to: determining the world coordinate of a second calibration point corresponding to the first calibration point based on the relative position of the first calibration point relative to the first camera, the world coordinate of the first camera in a first world coordinate system and the world coordinate of the second camera in a second world coordinate system; the first world coordinate system is the same as or different from the second world coordinate system.
In another possible embodiment, the calibration module 1320 is specifically configured to: processing the first triangle by using a trigonometric function to obtain a pixel ordinate of the second calibration point in the second image; the first triangle is determined by the optical center of the second camera, the central point of the second image and the first reference point, the pixel ordinate of the first reference point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first reference point is the same as the pixel abscissa of the central point of the second image.
In another possible embodiment, the calibration module 1320 is specifically configured to: determining a first included angle of the first triangle according to the world coordinate of the second calibration point, the height of the second camera and the pitch angle, wherein the first included angle is an included angle between the optical axis of the second camera and a first straight line, and the first straight line is determined by the first reference point and the optical center of the second camera; determining the pixel coordinate of the second calibration point in the second image based on the trigonometric function relation satisfied between the first side and the second side of the first triangle and the first included angle; the first edge is an edge between a central point of the second image and the first reference point, the second edge is an edge between the central point of the second image and the optical center, the second edge is related to the internal reference of the second camera, and the first edge is perpendicular to the second edge.
In another possible embodiment, the pixel ordinate of the second calibration point in the second image satisfies the following formula:
Figure BDA0002543792160000241
alternatively,
Figure BDA0002543792160000242
wherein v2i represents the pixel ordinate of the i-th second calibration point on the second image; f denotes the focal length of the second camera; dy represents the number of unit-size pixels; α2 represents the pitch angle of the second camera; H2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; v10 represents the pixel ordinate of the first reference point; cy represents the pixel ordinate of the center point of the first image; α1 represents the pitch angle of the first camera; H1 represents the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
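The formula itself appears only as an image in this publication. Under a standard pinhole model, the first-triangle construction corresponds to the relation sketched below, offered as a hedged reconstruction using the glossary's variables; the sign convention and the interpretation of dy as the vertical pixel pitch (so that f/dy is the focal length in pixels) are assumptions.

```python
import math

def ordinate_of_calibration_point(cy, f, dy, alpha2, h2, x2i):
    """Reconstructed ordinate relation (assumption): the first included
    angle is gamma = arctan(H2 / x2i) - alpha2, and the first triangle
    contributes a pixel offset of (f / dy) * tan(gamma) from the image
    center row, with v increasing downward."""
    gamma = math.atan2(h2, x2i) - alpha2
    return cy + (f / dy) * math.tan(gamma)

# Sanity check: at the point where the optical axis meets the ground
# (x2i = H2 / tan(alpha2)), gamma is zero and v equals cy.
v = ordinate_of_calibration_point(cy=240, f=0.008, dy=4.8e-6,
                                  alpha2=math.radians(15), h2=6.7, x2i=25.0)
print(v)  # ~240, since arctan(6.7 / 25) is almost exactly 15 degrees
```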
In another possible embodiment, when the second calibration point is located on the ground projection line of the optical axis of the second camera, the pixel position of the second calibration point in the second image is the same as the pixel position of the first reference point.
In another possible embodiment, when the second calibration point is located outside the ground projection line of the optical axis of the second camera, the calibration module 1320 is specifically configured to: processing the second triangle and the third triangle by using a triangle similarity theorem to obtain a pixel abscissa of the second calibration point in the second image; the second triangle is composed of an optical center of the second camera, a second calibration point and a second reference point; the third triangle is composed of the optical center of the second camera, the pixel position of the second calibration point in the second image and the first reference point; the second reference point is located in a second world coordinate system, and in the second world coordinate system, a horizontal axis component of the second reference point is the same as a horizontal axis component of the second calibration point, and a vertical axis component of the second reference point is zero.
In another possible embodiment, the second triangle is similar to the third triangle, wherein the first ratio is equal to the second ratio; the first ratio is a ratio between a third side in the second triangle and a fourth side in the third triangle; the third side is a side between the optical center of the second camera and the second reference point, and the fourth side is a side between the optical center of the second camera and the first reference point; the second ratio is a ratio between a fifth side in the second triangle and a sixth side in the third triangle; the fifth side is the side between the optical center of the second camera and the second index point, and the sixth side is the side between the optical center of the second camera and the pixel position of the second index point in the second image.
In another possible embodiment, the pixel abscissa of the second calibration point in the second image satisfies the following formula:
Figure BDA0002543792160000251
wherein u2i represents the pixel abscissa of the i-th second calibration point on the second image; L1 represents the distance between the first reference point and the second calibration point; f denotes the focal length of the second camera; H2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; cx represents the pixel abscissa of the center point of the second image; and γ2i denotes the first included angle, that is, the angle between the optical axis of the second camera and the first straight line, which satisfies:
Figure BDA0002543792160000252
wherein α1 represents the pitch angle of the first camera; H1 represents the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
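The abscissa formula is likewise an image in this publication. Under the same pinhole assumptions, the similar-triangle construction of the preceding embodiment amounts to scaling the lateral ground offset by the point's depth along the optical axis; the sketch below is a reconstruction under that assumption, with dx (the horizontal pixel pitch) introduced by analogy with dy.

```python
import math

def abscissa_of_calibration_point(cx, f, dx, alpha2, h2, x2i, y2i):
    """Reconstructed abscissa relation (assumption): by the similarity of
    the second and third triangles, the lateral offset y2i between the
    second reference point and the second calibration point maps to a
    pixel offset of (f / dx) * y2i / depth, where depth is the distance
    of the point along the optical axis."""
    depth = x2i * math.cos(alpha2) + h2 * math.sin(alpha2)
    return cx + (f / dx) * y2i / depth

u = abscissa_of_calibration_point(cx=320, f=0.008, dx=4.8e-6,
                                  alpha2=math.radians(15), h2=6.7,
                                  x2i=25.0, y2i=2.0)
print(u)  # a point with y2i = 0 would project to cx, matching the
          # on-projection-line case described above
```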
In another possible embodiment, the transceiver module 1310 is further configured to send the first world coordinate to the target vehicle when the target object in the second image is the target vehicle.
The electronic device of the embodiment shown in fig. 13 may be used to implement the technical solution of the method embodiment shown in fig. 6; for the implementation principle and technical effects, reference may be made to the related description of the method embodiment.
Fig. 14 shows a schematic diagram of another electronic device. As shown in fig. 14, the electronic device 1400 includes a transceiver module 1410 and a positioning module 1420. Specifically:
the transceiver module 1410 is configured to receive a third image acquired by the second camera, where the third image includes the target object;
a positioning module 1420, configured to obtain a first pixel coordinate of the target object in the third image;
the positioning module 1420 is further configured to process the first pixel coordinate by using the homography matrix of the second camera to obtain a first world coordinate of the target object;
the homography matrix is used for describing a mapping relation between a pixel coordinate system and a world coordinate system of the camera; the homography matrix is determined based on pixel coordinates and world coordinates of a plurality of second calibration points, and the pixel coordinates of the second calibration points are determined based on the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of the first reference points and the world coordinates of the second calibration points;
the first camera and the second camera have the same internal reference; the first image is from a first camera, and the first image comprises a first reference point and a plurality of first calibration points; the second image is from the second camera, and the second image comprises a second reference point and a plurality of second calibration points;
the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
the first calibration points correspond to the second calibration points one by one; in a first calibration point and a second calibration point having a corresponding relationship, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera.
In another possible embodiment, the electronic device 1400 further includes an adjusting module (not shown in fig. 14), where the adjusting module is specifically configured to: adjust the attitude of the first camera and/or the attitude of the second camera, so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In another possible embodiment, the positioning module 1420 is further configured to: acquiring a second world coordinate of the target object; and correcting the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is larger than a preset threshold value.
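As one concrete way to realize this correction step, the sketch below compares the homography-derived first world coordinate with an independently obtained second world coordinate (for example, a position reported by the target vehicle itself) and flags the camera for recalibration when the error exceeds a preset threshold; the threshold value and function names are illustrative assumptions.

```python
import numpy as np

ERROR_THRESHOLD_M = 0.5  # assumed positioning tolerance, in meters

def needs_correction(first_world_xy, second_world_xy,
                     threshold=ERROR_THRESHOLD_M):
    """Return True when the homography-based position deviates from the
    reference position by more than the preset threshold."""
    err = np.linalg.norm(np.asarray(first_world_xy, dtype=float)
                         - np.asarray(second_world_xy, dtype=float))
    return err > threshold

if needs_correction((10.2, 1.1), (10.9, 1.3)):
    print("error exceeds threshold: re-run calibration for the second camera")
```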
In another possible embodiment, the transceiver module 1410 is further configured to receive a first message, where the first message is used to implement pose configuration for the camera; in this case, the positioning module 1420 is further configured to determine the pose parameters of the first camera and/or the second camera based on the first message.
In another possible embodiment, the transceiver module 1410 is further configured to receive a second message, where the second message instructs acquisition of the homography matrix of one or more second cameras; in this case, the positioning module 1420 is further configured to determine the homography matrix of the second camera when the second camera indicated by the second message satisfies a preset calibration condition.
The electronic device of the embodiment shown in fig. 14 may be used to implement the technical solution of the method embodiment shown in fig. 11; for the implementation principle and technical effects, reference may be made to the related description of the method embodiment.
It should be understood that the division of the modules of the electronic device shown in fig. 13 and fig. 14 is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity or may be physically separated. These modules may all be implemented as software invoked by a processing element, or all be implemented as hardware, or some modules may be implemented as software invoked by a processing element and the others as hardware. For example, the calibration module in fig. 13 may be a separately disposed processing element, may be integrated in a chip of an electronic device such as a terminal, or may be stored in a memory of the electronic device in the form of a program whose functions are invoked and executed by a processing element of the electronic device. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having a signal processing capability. In an implementation process, the steps of the above method or the above modules may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). As another example, when one of the above modules is implemented in the form of a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of invoking the program. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 15 shows a physical structure diagram of an electronic device. As shown in fig. 15, the electronic device 1500 includes: at least one processor 152 and memory 154; the memory 154 stores computer-executable instructions; the at least one processor 152 executes computer-executable instructions stored by the memory 154 to cause the at least one processor 152 to perform a positioning method as provided by any of the preceding embodiments.
The processor 152 may also be referred to as a processing unit, and may implement a certain control function. The processor 152 may be a general purpose processor, a special purpose processor, or the like.
In an alternative design, the processor 152 may also store instructions, which can be executed by the processor 152, so that the electronic device 1500 executes the positioning method described in the above method embodiment.
In yet another possible design, electronic device 1500 may include circuitry that may implement the functionality of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the electronic device 1500 may include one or more memories 154, on which instructions or intermediate data are stored, and the instructions may be executed on the processor, so that the electronic device 1500 performs the positioning method described in the above method embodiment. Optionally, other relevant data may also be stored in the memory 154. Optionally, instructions and/or data may also be stored in the processor 152. The processor 152 and the memory 154 may be provided separately or may be integrated together.
Optionally, the electronic device 1500 may also include a transceiver 156. The transceiver 156 may also be referred to as a transceiver unit, a transceiving circuit, or the like, and is configured to implement the transceiving functions of the electronic device.
For example, if the electronic device 1500 is used to implement the operations corresponding to receiving the first image and the second image in the embodiment shown in fig. 6, the transceiver may receive the first image from the first camera and the second image from the second camera. The transceiver 156 may further perform other corresponding communication functions, and the processor 152 is configured to perform the corresponding determination or control operations; optionally, corresponding instructions may be stored in the memory. For the specific processing manner of each component, reference may be made to the related description of the previous embodiments.
The processor 152 and transceiver 156 described herein may be implemented on an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, or the like. The processor and transceiver may also be fabricated using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), and gallium arsenide (GaAs).
Alternatively, electronic device 1500 may be a stand-alone device or may be part of a larger device. For example, the device may be:
(1) a stand-alone integrated circuit IC, or chip, or system-on-chip or subsystem;
(2) a set of one or more ICs, which optionally may also include storage components for storing data and/or instructions;
(3) an ASIC, such as a modem (MSM);
(4) a module that may be embedded within other devices;
(5) receivers, terminals, cellular telephones, wireless devices, handsets, mobile units, network devices, and the like;
(6) others, and so forth.
The embodiment of the disclosure also provides a positioning system. The description of the positioning system can be referred to the related description of fig. 5 and fig. 12, and will not be repeated here.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is enabled to execute the positioning method described in the foregoing embodiment.
In addition, the present application also provides a computer program product including a computer program; when the computer program runs on a computer, the computer is enabled to execute the positioning method described in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk).
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It should be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Claims (18)

1. A positioning method, wherein the method is applied to a positioning system comprising a first camera and a second camera, and the first camera and the second camera have the same internal reference; the method comprises:
receiving a first image from a first camera, the first image including a first fiducial point and a plurality of first index points;
receiving a second image from a second camera, the second image including a second fiducial point and a plurality of second index points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
determining the pixel coordinate of the second calibration point in the second image according to the attitude parameter of the first camera, the attitude parameter of the second camera, the pixel coordinate and the world coordinate of the first reference point and the world coordinate of the second calibration point; the first calibration points correspond to the second calibration points one by one; in one of the first calibration point and the second calibration point having a corresponding relationship, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera;
determining a homography matrix of the second camera according to the pixel coordinates and world coordinates of the second calibration points; the homography matrix is used for describing a mapping relation between a pixel coordinate system and a world coordinate system of the camera;
when a third image from the second camera is received, first world coordinates of a target object in the third image are determined using the homography matrix of the second camera.
2. The method of claim 1, wherein determining pixel coordinates of the second calibration point in the second image comprises:
determining a pitch angle of the second camera according to the attitude parameters of the first camera, the attitude parameters of the second camera and the world coordinates of the first reference point;
determining the world coordinate of the second calibration point corresponding to the first calibration point according to the world coordinate of the first calibration point in the first image;
and determining the pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
3. The method of claim 2, wherein determining world coordinates of the second calibration point corresponding to the first calibration point from the world coordinates of the first calibration point in the first image comprises:
determining world coordinates of the second calibration point corresponding to the first calibration point based on a relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in a first world coordinate system, and world coordinates of the second camera in a second world coordinate system;
the first world coordinate system is the same as or different from the second world coordinate system.
4. The method of claim 2 or 3, wherein said determining pixel coordinates of said second calibration point in said second image comprises:
processing the first triangle by using a trigonometric function to obtain a pixel ordinate of the second calibration point in the second image;
the first triangle is determined by the optical center of the second camera, the central point of the second image and a first reference point, the pixel ordinate of the first reference point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first reference point is the same as the pixel abscissa of the central point of the second image.
5. The method of claim 4, wherein the processing the first triangle using a trigonometric function to obtain a pixel ordinate of the second calibration point in the second image comprises:
determining a first included angle of the first triangle according to the world coordinate of the second calibration point, the height and the pitch angle of the second camera, wherein the first included angle is an included angle between the optical axis of the second camera and a first straight line, and the first straight line is determined by the first reference point and the optical center of the second camera;
determining the pixel coordinates of the second calibration point in the second image based on the trigonometric function relationship which is satisfied between the first side and the second side of the first triangle and the first included angle; wherein the first edge is an edge between a center point of the second image and the first reference point, the second edge is an edge between the center point of the second image and the optical center, the second edge is associated with an internal reference of the second camera, and the first edge is perpendicular to the second edge.
6. The method according to claim 4 or 5, wherein the pixel ordinate of the second calibration point in the second image satisfies the following formula:
Figure FDA0002543792150000021
alternatively,
Figure FDA0002543792150000022
wherein v2i represents the pixel ordinate of the i-th second calibration point on the second image; f denotes the focal length of the second camera; dy represents the number of unit-size pixels; α2 represents the pitch angle of the second camera; H2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; v10 represents the pixel ordinate of the first reference point; cy represents the pixel ordinate of the center point of the first image; α1 represents the pitch angle of the first camera; H1 represents the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
7. A method according to any of claims 4-6, characterized in that the pixel position of the second calibration point in the second image is the same as the pixel position of the first reference point when the second calibration point is located on the ground projection line of the optical axis of the second camera.
8. The method of any of claims 4-6, wherein, when the second calibration point is located outside the ground projection line of the optical axis of the second camera, determining the pixel coordinates of the second calibration point in the second image further comprises:
processing a second triangle and a third triangle by using a triangle similarity theorem to obtain a pixel abscissa of the second calibration point in the second image;
wherein the second triangle is formed by the optical center of the second camera, the second calibration point, and a second reference point; the third triangle is composed of the optical center of the second camera, the pixel position of the second calibration point in the second image, and the first reference point; the second reference point is located in a second world coordinate system, and in the second world coordinate system, a horizontal axis component of the second reference point is the same as a horizontal axis component of the second calibration point, and a vertical axis component of the second reference point is zero.
9. The method of claim 8, wherein the second triangle is similar to the third triangle, wherein a first ratio is equal to a second ratio;
the first ratio is a ratio between a third side in the second triangle and a fourth side in the third triangle; the third side is a side between the optical center of the second camera and the second reference point, and the fourth side is a side between the optical center of the second camera and the first reference point;
the second ratio is a ratio between a fifth side in the second triangle and a sixth side in the third triangle; the fifth side is a side between the optical center of the second camera and the second calibration point, and the sixth side is a side between the optical center of the second camera and the pixel position of the second calibration point in the second image.
10. The method according to claim 8 or 9, wherein the pixel abscissa of the second calibration point in the second image satisfies the following formula:
Figure FDA0002543792150000031
wherein u2i represents the pixel abscissa of the i-th second calibration point on the second image; L1 represents the distance between the first reference point and the second calibration point; f denotes the focal length of the second camera; H2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; cx represents the pixel abscissa of the center point of the second image; and γ2i denotes the first included angle, that is, the angle between the optical axis of the second camera and the first straight line, which satisfies:
Figure FDA0002543792150000032
wherein α1 represents the pitch angle of the first camera; H1 represents the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
11. A method of positioning, comprising:
receiving a third image acquired by a second camera, wherein the third image comprises a target object;
acquiring a first pixel coordinate of the target object in the third image;
processing the first pixel coordinate by using the homography matrix of the second camera to obtain a first world coordinate of the target object;
the homography matrix is used for describing a mapping relation between a pixel coordinate system and a world coordinate system of the camera; the homography matrix is determined based on pixel coordinates and world coordinates of a plurality of second calibration points, the pixel coordinates of the second calibration points are determined based on attitude parameters of the first camera, attitude parameters of the second camera, pixel coordinates and world coordinates of the first reference points, and world coordinates of the second calibration points;
wherein the first camera and the second camera have the same internal reference; a first image is from the first camera, and the first image comprises a first reference point and a plurality of first calibration points; a second image from the second camera, the second image including a second fiducial point and a plurality of second calibration points;
the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
the first calibration points correspond to the second calibration points one by one; in one of the first calibration point and the second calibration point having a corresponding relationship, a relative position of the first calibration point with respect to the first camera is the same as a relative position of the second calibration point with respect to the second camera.
12. The method of claim 11, further comprising:
adjusting the pose of the first camera; and/or, adjusting the pose of the second camera; so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
13. The method according to claim 11 or 12, characterized in that the method further comprises:
acquiring a second world coordinate of the target object;
and correcting the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is larger than a preset threshold value.
14. The method according to any one of claims 11-13, further comprising:
receiving a first message, wherein the first message is used for realizing pose configuration of a camera;
determining pose parameters of the first camera and/or the second camera based on the first message.
15. The method according to any one of claims 11-14, further comprising:
receiving a second message indicating acquisition of a homography matrix of one or more of the second cameras;
and when the second camera indicated by the second message meets a preset calibration condition, determining a homography matrix of the second camera.
16. An electronic device comprising at least one processor and memory; the memory stores computer-executable instructions; the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method of any one of claims 1-15.
17. A positioning system, comprising:
a first electronic device for performing the method of any one of claims 1-15;
and cameras configured to acquire images, wherein the cameras comprise a first camera and one or more second cameras.
18. The positioning system of claim 17, further comprising:
the second electronic equipment is used for receiving instruction information from a user and sending the instruction information to the first electronic equipment;
the first electronic device is further used for executing the action indicated by the instruction information;
wherein the first electronic device and the second electronic device are integrated into one device or are provided separately.
CN202010554485.4A 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system Active CN113808199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010554485.4A CN113808199B (en) 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system

Publications (2)

Publication Number Publication Date
CN113808199A true CN113808199A (en) 2021-12-17
CN113808199B CN113808199B (en) 2023-09-08

Family

ID=78943189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010554485.4A Active CN113808199B (en) 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system

Country Status (1)

Country Link
CN (1) CN113808199B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008014940A (en) * 2006-06-08 2008-01-24 Fast:Kk Camera calibration method for camera measurement of planar subject and measuring device applying same
CN107038722A (en) * 2016-02-02 2017-08-11 深圳超多维光电子有限公司 Equipment positioning method and device
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN107481283A (en) * 2017-08-01 2017-12-15 深圳市神州云海智能科技有限公司 A kind of robot localization method, apparatus and robot based on CCTV camera
CN109015630A (en) * 2018-06-21 2018-12-18 深圳辰视智能科技有限公司 Hand and eye calibrating method, system and the computer storage medium extracted based on calibration point
CN110650427A (en) * 2019-04-29 2020-01-03 国网浙江省电力有限公司物资分公司 Indoor positioning method and system based on fusion of camera image and UWB
CN110599548A (en) * 2019-09-02 2019-12-20 Oppo广东移动通信有限公司 Camera calibration method and device, camera and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG XUEBO ET AL.: "Camera Calibration Method Based on Homography Matrix and Its Application", Control Engineering of China, vol. 17, no. 02, pages 248-255 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114791282A (en) * 2022-03-04 2022-07-26 广州沃定新信息科技有限公司 Road facility coordinate calibration method and device based on vehicle high-precision positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220215

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technologies Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant
GR01 Patent grant