CN113808199B - Positioning method, electronic equipment and positioning system (Google Patents)

Info

Publication number: CN113808199B
Application number: CN202010554485.4A
Authority: CN (China)
Prior art keywords: camera, point, image, calibration, pixel
Other languages: Chinese (zh)
Other versions: CN113808199A
Inventors: 姜波, 张竞
Current assignee: Huawei Cloud Computing Technologies Co Ltd
Application filed by Huawei Cloud Computing Technologies Co Ltd
Priority claimed from CN202010554485.4A
Events: publication of application CN113808199A; application granted; publication of grant CN113808199B
Legal status: Active (assumed by Google; not a legal conclusion)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30236: Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Image Processing
  • Studio Devices

Abstract

Embodiments of the present disclosure provide a positioning method, an electronic device, and a positioning system. In a positioning system comprising a plurality of cameras with identical intrinsic parameters, including a first camera whose homography matrix is known, the pose of a second camera is made the same as that of the first camera; pairs of first and second calibration points having the same relative positions with respect to the first and second cameras, respectively, are then used to obtain the pixel coordinates of the second calibration points, so that the homography matrix of the second camera is determined from the pixel coordinates and world coordinates of those second calibration points. Calibration of the homography matrices of a plurality of second cameras can thus be achieved from a single first camera with a known homography matrix, enabling batch calibration of camera homography matrices, reducing camera calibration time and workload, and in turn reducing the cost of camera-based positioning.

Description

Positioning method, electronic equipment and positioning system
Technical Field
Embodiments of the present disclosure relate to the technical fields of communication and computer vision, and in particular to a positioning method, an electronic device, and a positioning system.
Background
Accurate positioning of traffic incidents makes it possible to grasp road conditions in real time, make timely safety decisions in response to emergencies, and forestall potential safety threats; it also helps the relevant authorities and road operators react promptly. Identifying traffic events and locating them by means of cameras is an effective solution.
A single-camera road positioning scheme is implemented based on the camera's homography matrix, which characterizes the mapping relationship between the camera's pixel coordinate system and the world coordinate system. However, a single camera can only cover a road section roughly one hundred meters long, so to provide traffic-event recognition for an expressway, a large number of cameras must be deployed along it. This requires calibrating the homography matrices of tens of thousands of roadside cameras.
Calibrating the homography matrix of each camera one by one, however, involves an enormous workload and consumes a great deal of time, manpower, and material resources, which makes camera-based positioning very difficult to carry out.
Disclosure of Invention
Embodiments of the present disclosure provide a positioning method, an electronic device, and a positioning system, which calibrate the homography matrices of cameras in batches, reducing camera calibration time and workload and thereby the cost of camera-based positioning.
In a first aspect, the present disclosure provides a positioning method applied to a positioning system including a first camera and a second camera, the first camera and the second camera having the same intrinsic parameters. The method comprises: receiving a first image from the first camera, the first image including a first reference point and a plurality of first calibration points; receiving a second image from the second camera, the second image including a second reference point and a plurality of second calibration points, wherein the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image, and the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera; determining pixel coordinates of the second calibration points in the second image according to the pose parameters of the first camera, the pose parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration points, wherein the first calibration points are in one-to-one correspondence with the second calibration points, and for each corresponding pair, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera; determining a homography matrix of the second camera according to the pixel coordinates and world coordinates of the plurality of second calibration points, the homography matrix describing the mapping relationship between a camera's pixel coordinate system and the world coordinate system; and, when a third image from the second camera is received, determining first world coordinates of a target object in the third image using the homography matrix of the second camera.
In an embodiment of the first aspect, the determining pixel coordinates of the second calibration point in the second image includes: determining a pitch angle of the second camera according to the pose parameters of the first camera, the pose parameters of the second camera, and the world coordinates of the first reference point; determining world coordinates of the second calibration point corresponding to the first calibration point according to the world coordinates of the first calibration point in the first image; and determining pixel coordinates of the second calibration point in the second image according to the pose parameters of the first camera, the pose parameters of the second camera, the pitch angle of the second camera, and the world coordinates of the second calibration point.
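To make the pitch-angle recovery concrete, the following is a minimal sketch assuming a pinhole camera model and the variable conventions defined in the formula embodiments below (focal length f, dy pixels per unit length, image-center ordinate cy); it is an illustration, not the patent's reference implementation.

```python
import math

def pitch_from_reference(v_ref: float, c_y: float, f: float, dy: float,
                         cam_height: float, x_ref: float) -> float:
    """Recover a camera's pitch angle (radians, measured downward from
    horizontal) from one ground reference point.

    Assumed pinhole model (illustrative only):
      v_ref      -- pixel ordinate of the reference point in the image
      c_y        -- pixel ordinate of the image center point
      f, dy      -- focal length, and pixels per unit length (f*dy is the
                    focal length expressed in pixels)
      cam_height -- height of the camera above the ground
      x_ref      -- ground distance from the camera's foot to the reference
                    point (world abscissa of the reference point)
    """
    # Angle between the optical axis and the ray to the reference point,
    # read off from the pixel offset relative to the image center.
    gamma = math.atan2(v_ref - c_y, f * dy)
    # Angle of that same ray below the horizontal, from the world geometry.
    beta = math.atan2(cam_height, x_ref)
    # Pitch angle of the optical axis = ray angle minus axis-to-ray angle.
    return beta - gamma
```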
In an embodiment of the first aspect, the determining, according to world coordinates of the first calibration point in the first image, world coordinates of the second calibration point corresponding to the first calibration point includes: determining world coordinates of the second calibration point corresponding to the first calibration point based on a relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in a first world coordinate system, and world coordinates of the second camera in a second world coordinate system; the first world coordinate system is the same as or different from the second world coordinate system.
In an embodiment of the first aspect, the determining pixel coordinates of the second calibration point in the second image includes: processing a first triangle by using a trigonometric function to obtain the pixel ordinate of the second calibration point in the second image. The first triangle is determined by the optical center of the second camera, the center point of the second image, and a first auxiliary point (distinct from the first reference point used for pose alignment), where the pixel ordinate of the first auxiliary point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first auxiliary point is the same as the pixel abscissa of the center point of the second image.
In an embodiment of the first aspect, the processing the first triangle with a trigonometric function to obtain the pixel ordinate of the second calibration point in the second image includes: determining a first included angle of the first triangle according to the world coordinates of the second calibration point and the height and pitch angle of the second camera, the first included angle being the angle between the optical axis of the second camera and a first straight line, the first straight line being determined by the first auxiliary point and the optical center of the second camera; and determining the pixel coordinates of the second calibration point in the second image based on the trigonometric relationship satisfied by the first edge, the second edge, and the first included angle of the first triangle. The first edge is the edge between the center point of the second image and the first auxiliary point; the second edge is the edge between the center point of the second image and the optical center, and is related to the intrinsic parameters of the second camera; the first edge is perpendicular to the second edge.
In an embodiment of the first aspect, the pixel ordinate of the second calibration point in the second image satisfies the following relationship (the formula images of the original are reconstructed here from the triangle geometry described above):

v2i = cy + f · dy · tan(arctan(h2 / x2i) − α2)

or alternatively, with the image-center ordinate cy eliminated by means of the first reference point,

v2i = v10 − f · dy · tan(arctan(h1 / x10) − α1) + f · dy · tan(arctan(h2 / x2i) − α2)

where v2i represents the pixel ordinate of the i-th second calibration point in the second image; f represents the focal length of the second camera; dy represents the number of pixels per unit size (so that f · dy is the focal length expressed in pixels); α2 represents the pitch angle of the second camera; h2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; v10 represents the pixel ordinate of the first reference point; cy represents the pixel ordinate of the center point of the first image; α1 represents the pitch angle of the first camera; h1 represents the height of the first camera; and x10 = x20, where x10 represents the world abscissa of the first reference point in the first world coordinate system and x20 represents the world abscissa of the second reference point in the second world coordinate system.
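Under the same assumed pinhole model, the reconstructed ordinate relationship can be evaluated directly; the sketch below is illustrative only, not the patent's implementation.

```python
import math

def pixel_ordinate(x_2i: float, cam_height: float, pitch: float,
                   c_y: float, f: float, dy: float) -> float:
    """Pixel ordinate of a ground calibration point at world abscissa x_2i,
    for a camera at the given height and pitch (assumed pinhole model)."""
    gamma = math.atan2(cam_height, x_2i) - pitch  # first included angle
    return c_y + f * dy * math.tan(gamma)
```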
In an embodiment of the first aspect, when the second calibration point is located on the ground projection line of the optical axis of the second camera, the pixel position of the second calibration point in the second image is the same as the pixel position of the first auxiliary point.
In an embodiment of the first aspect, when the second calibration point is located off the ground projection line of the optical axis of the second camera, the determining pixel coordinates of the second calibration point in the second image further includes: processing a second triangle and a third triangle by using the triangle similarity theorem to obtain the pixel abscissa of the second calibration point in the second image. The second triangle is formed by the optical center of the second camera, the second calibration point, and a second auxiliary point; the third triangle is formed by the optical center of the second camera, the pixel position of the second calibration point in the second image, and the first auxiliary point. The second auxiliary point lies in the second world coordinate system; in that coordinate system, its transverse-axis component is the same as the transverse-axis component of the second calibration point, and its longitudinal-axis component is zero.
In an embodiment of the first aspect, the second triangle is similar to the third triangle, so that a first ratio is equal to a second ratio. The first ratio is the ratio between a third side, in the second triangle, and a fourth side, in the third triangle; the third side is the side between the optical center of the second camera and the second auxiliary point, and the fourth side is the side between the optical center of the second camera and the first auxiliary point. The second ratio is the ratio between a fifth side, in the second triangle, and a sixth side, in the third triangle; the fifth side is the side between the optical center of the second camera and the second calibration point, and the sixth side is the side between the optical center of the second camera and the pixel position of the second calibration point in the second image.
In an embodiment of the first aspect, the pixel abscissa of the second calibration point in the second image satisfies the following relationship (reconstructed from the similar-triangle geometry described above, assuming square pixels with dy as defined in the preceding embodiment):

u2i = cx + l1 · f · dy / (cos γ2i · √(h2² + x2i²))

where u2i represents the pixel abscissa of the i-th second calibration point in the second image; l1 represents the ground distance between the second auxiliary point and the second calibration point; f represents the focal length of the second camera; h2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; cx represents the pixel abscissa of the center point of the second image; and γ2i represents the first included angle, i.e., the angle between the optical axis of the second camera and the first straight line, γ2i = arctan(h2 / x2i) − α2, the pitch angle α2 being expressible equivalently through the first camera's quantities as in the preceding embodiment, where α1 represents the pitch angle of the first camera, h1 represents the height of the first camera, and x10 = x20, x10 representing the world abscissa of the first reference point in the first world coordinate system and x20 the world abscissa of the second reference point in the second world coordinate system.
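Likewise, the similar-triangle relationship for the abscissa can be sketched as follows, again assuming square pixels and the pinhole model; the sign convention (positive lateral offset mapping to a larger abscissa) is an assumption.

```python
import math

def pixel_abscissa(x_2i: float, y_2i: float, cam_height: float, pitch: float,
                   c_x: float, f: float, dy: float) -> float:
    """Pixel abscissa of a ground calibration point (x_2i, y_2i), via the
    similar-triangle relation (assumed pinhole model, square pixels)."""
    gamma = math.atan2(cam_height, x_2i) - pitch     # first included angle
    l1 = abs(y_2i)                                   # lateral ground offset
    dist = math.hypot(cam_height, x_2i)              # optical center to second auxiliary point
    offset = l1 * f * dy / (math.cos(gamma) * dist)  # similar-triangle ratio, in pixels
    return c_x + math.copysign(offset, y_2i)
```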
In a second aspect, the present disclosure provides a positioning method, the method comprising: receiving a third image acquired by a second camera, the third image containing a target object; acquiring first pixel coordinates of the target object in the third image; and processing the first pixel coordinates by using a homography matrix of the second camera to obtain first world coordinates of the target object, the homography matrix describing the mapping relationship between a camera's pixel coordinate system and the world coordinate system. The homography matrix is determined based on the pixel coordinates and world coordinates of a plurality of second calibration points, the pixel coordinates of the second calibration points being determined based on the pose parameters of a first camera, the pose parameters of the second camera, the pixel coordinates and world coordinates of a first reference point, and the world coordinates of the second calibration points. The intrinsic parameters of the first camera and the second camera are the same; a first image from the first camera includes the first reference point and a plurality of first calibration points; a second image from the second camera includes a second reference point and the plurality of second calibration points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera; the first calibration points are in one-to-one correspondence with the second calibration points; and for each corresponding pair, the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera.
In one embodiment of the second aspect, the method further comprises: adjusting the pose of the first camera and/or the pose of the second camera so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In one embodiment of the second aspect, the method further comprises: acquiring a second world coordinate of the target object; correcting the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is greater than a preset threshold.
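A minimal sketch of such a correction check, assuming (as elsewhere in this document) a 3x3 homography mapping homogeneous pixel coordinates to world coordinates; the default threshold is illustrative.

```python
import numpy as np

def needs_correction(H: np.ndarray, pixel, world_truth, threshold: float = 0.5) -> bool:
    """True if the homography's world estimate for a pixel deviates from an
    independently measured world coordinate by more than threshold."""
    w = H @ np.array([pixel[0], pixel[1], 1.0])
    est = w[:2] / w[2]  # dehomogenize to (x_w, y_w)
    return float(np.linalg.norm(est - np.asarray(world_truth, dtype=float))) > threshold
```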
In one embodiment of the second aspect, the method further comprises: receiving a first message, wherein the first message is used for configuring the pose of a camera; based on the first message, pose parameters of the first camera and/or the second camera are determined.
In one embodiment of the second aspect, the method further comprises: receiving a second message, the second message instructing acquisition of the homography matrix of one or more second cameras; and determining the homography matrix of a second camera indicated by the second message when that second camera satisfies a preset calibration condition.
In a third aspect, the present disclosure provides a positioning device comprising at least one processor and a memory; the memory stores computer-executable instructions; the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the method according to any one of the embodiments of the first or second aspect.
In a fourth aspect, the present disclosure provides a positioning system comprising: a first electronic device for performing the method of any one of the embodiments of the first or second aspects; a camera for capturing images, the camera comprising a first camera and one or more second cameras.
In one embodiment of the fourth aspect, the positioning system further comprises: the second electronic equipment is used for receiving instruction information from a user and sending the instruction information to the first electronic equipment; the first electronic device is further configured to perform an action indicated by the instruction information; wherein the first electronic device is integrated with the second electronic device or separately provided.
In one embodiment of the fourth aspect, the positioning system is a vehicle-to-everything (V2X) system.
In one possible design, the electronic device (the first electronic device or the second electronic device) referred to in the third aspect to the fourth aspect may be (a processor in) the second camera, a terminal, a server (or a node therein), a vehicle processor, or the like.
In a fifth aspect, the present disclosure provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement a positioning method according to any one of the embodiments of the first or second aspects.
In a sixth aspect, the present disclosure provides a computer program which, when executed by a computer, performs the method of any one of the embodiments of the first or second aspects.
In one possible design, the program in the sixth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
In summary, in a scenario where traffic events are positioned using a plurality of cameras with identical intrinsic parameters, at least one first camera has a calibrated homography matrix. For a second camera whose homography matrix has not been calibrated, when a pair of reference points having the same relative positions with respect to their respective cameras also have the same pixel coordinates, the poses of the first camera and the second camera are the same. The pixel coordinates of a plurality of second calibration points in the second camera can then be obtained from pairs of first and second calibration points that have the same relative positions with respect to the first and second cameras, and the homography matrix of the second camera can in turn be obtained from the pixel coordinates and world coordinates of those second calibration points. Calibration of a plurality of uncalibrated second cameras can therefore be achieved from a single calibrated first camera; that is, homography matrices can be calibrated for cameras in batches, reducing camera calibration time and workload and thereby also the cost of camera-based positioning.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the description of the prior art, it being obvious that the drawings in the following description are some embodiments of the present disclosure, and that other drawings may be obtained from these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is a schematic diagram of a positioning scenario provided in an embodiment of the present disclosure;
FIG. 2 is a side view of the positioning scenario shown in FIG. 1;
FIG. 3 is a schematic diagram of a pixel coordinate system according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a world coordinate system provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a positioning system according to an embodiment of the disclosure;
fig. 6 is a flowchart of a positioning method according to an embodiment of the disclosure;
FIG. 7 is a schematic view of camera pose adjustment according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of an implementation principle of obtaining a pitch angle of a second camera according to an embodiment of the disclosure;
FIG. 9 is a schematic diagram of a calibration principle provided by an embodiment of the disclosure;
FIG. 10 is a schematic diagram of another calibration principle provided by an embodiment of the present disclosure;
FIG. 11 is a flowchart of another positioning method according to an embodiment of the disclosure;
FIG. 12 is a schematic diagram of another positioning system according to an embodiment of the present disclosure;
FIG. 13 is a functional block diagram of an electronic device according to an embodiment of the disclosure;
FIG. 14 is a functional block diagram of another electronic device according to an embodiment of the present disclosure;
fig. 15 is a schematic entity structure diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
The positioning scheme provided by the embodiment of the disclosure is suitable for a positioning system comprising a plurality of cameras, and each camera can acquire images and position a target object in the images.
The camera, which may also be referred to as an image capture device, may specifically be any device with an image capture function, such as a still camera or a video recorder; the present disclosure does not particularly limit its type or capture precision.
For example, reference may be made to fig. 1, a schematic diagram of a positioning scenario provided in an embodiment of the present disclosure. As shown in fig. 1, a plurality of cameras 110 are disposed along a road; fig. 1 exemplarily shows a camera 111, a camera 112, and a camera 113. Vehicles 120 travel on the road, and during travel the cameras 110 may acquire images of a vehicle 120 and locate it based on the acquired images. The positioning method based on the cameras 110 is described in detail later.
The effective detection distance of a single camera is about 100 meters, so a large number of cameras are required to provide traffic-event recognition along a road, and the positioning capability of the cameras directly determines the effectiveness and accuracy of that recognition. Thus, in the foregoing scenario, the plurality of cameras 110 are typically disposed consecutively on the same side of the road.
Referring now to fig. 2, which shows a side view of the positioning scene of fig. 1, the plurality of cameras 110 may be disposed in a linear array, with equal spacing between any two adjacent cameras 110. For example, in fig. 2, the distance between camera 111 and camera 112 is equal to the distance between camera 112 and camera 113.
As shown in fig. 1 and 2, each camera is typically deployed on the roadway through a base 130. The material and shape of the base 130 and the manner of connection between the base 130 and the camera 110 are not particularly limited in the embodiments of the present disclosure. For example, the base 130 may be a metal-rod base, or may be formed of a concrete column and an alloy fixture, although this list is not exhaustive.
In a positioning scene such as that of fig. 1 or fig. 2, positioning traffic events through the cameras requires the association between the pixel coordinate system of the images captured by a camera and the world coordinate system. Once a camera has captured an image, the world coordinates of an object in the image can then be determined from the object's pixel coordinates and this association.
The association between the pixel coordinate system and the world coordinate system in the camera can be characterized by a homography matrix. That is, the homography matrix of the camera is used to describe the mapping relationship between the camera's pixel coordinate system and the world coordinate system. The specific positioning method is described in detail later.
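Concretely, in the standard homogeneous-coordinate formulation (a common convention assumed here, not quoted from the patent text), the pixel-to-world mapping given by a homography matrix H reads:

```latex
s \begin{pmatrix} x_w \\ y_w \\ 1 \end{pmatrix}
  = H \begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
\qquad H \in \mathbb{R}^{3 \times 3},\ s \neq 0
```

where (u, v) are the pixel coordinates of a point, (xw, yw) are its ground-plane world coordinates, and the scale factor s is eliminated by dividing the first two components of the result by the third.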
The pixel coordinate system is used to describe the position of the object in the image. Specifically, the pixel coordinate system is a plane coordinate system, and in an actual application scene, the pixel coordinate system can have various definition manners.
By way of example, fig. 3 shows a schematic diagram of a pixel coordinate system provided by an embodiment of the disclosure. For the image shown in fig. 3 (the image content is not limited here), the upper-left corner of the image plane may be taken as the origin Op of the pixel coordinate system, with the two coordinate axes being the Op-u axis, pointing to the right, and the Op-v axis, pointing downward. In this pixel coordinate system, pixel coordinates (u, v) describe the pixel position of an object.
Other definitions of the pixel coordinate system are also possible. For example, the upper-right corner of the image may be taken as the origin, with the two coordinate axes pointing left and down; or the image center point may be taken as the origin, with the two coordinate axes pointing right and up. These examples are not exhaustive.
The world coordinate system is used to describe the position of an object in real three-dimensional space. Although the world coordinate system is in general three-dimensional, the embodiments of the present disclosure position traffic events in a two-dimensional plane, so the third coordinate axis beyond the horizontal and vertical axes is not discussed here; the world coordinate system can thus be treated as a two-dimensional rectangular coordinate system, and two-dimensional world coordinates (xw, yw) describe the actual position of an object in space, where the subscript w stands for "world". The world coordinate system may also be defined in different ways.
For example, the longitude and latitude may be used as the abscissa and the ordinate of the world coordinate system, respectively, to construct the world coordinate system. The world coordinate system can be regarded as an absolute world coordinate system, and the longitude and latitude of the object are world coordinates of the object.
By way of example, a relative world coordinate system tied to a camera may also be constructed. It will be appreciated that the world coordinates of an object at a fixed position differ between the relative world coordinate systems of different cameras.
For example, in one possible embodiment, the ground projection point of the camera may be taken as the origin of the world coordinate system, with due north (or south) as the vertical-axis direction and due east (or west) as the horizontal-axis direction, to construct a relative world coordinate system for that camera.
In another possible embodiment, the intersection of the camera's base with the ground is taken as the origin, and two mutually perpendicular straight lines (their directions not being particularly limited) serve as the horizontal and vertical axes, again yielding a relative world coordinate system for the camera.
For ease of understanding, fig. 4 illustrates a schematic diagram of a world coordinate system provided by an embodiment of the present disclosure, taking the relative world coordinate systems of two cameras as an example. Fig. 4 shows in particular the case of a first world coordinate system 41 of the camera 1 and a second world coordinate system 42 of the camera 2. The origin of coordinates (O1) of the first world coordinate system 41 is the ground projection of the camera 1, the origin of coordinates (O2) of the second world coordinate system 42 is the ground projection of the camera 2, the horizontal axis of the first world coordinate system 41 is parallel to the horizontal axis of the second world coordinate system 42, and the vertical axis of the first world coordinate system 41 is parallel to the vertical axis of the second world coordinate system 42. In this case, the world coordinates of the point a in the first world coordinate system 41 are different from the world coordinates of the point a in the second world coordinate system 42.
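For illustration, converting the world coordinates of point A in fig. 4 between the two relative systems only requires the offset between the camera ground projections, since the axes of the two systems are parallel; the positions below are assumed example values, not taken from the patent.

```python
import numpy as np

# Assumed positions of the two camera ground projections, expressed in some
# common frame (e.g., a site survey frame); illustrative values only.
O1 = np.array([0.0, 0.0])    # origin of the first world coordinate system 41
O2 = np.array([100.0, 0.0])  # origin of the second world coordinate system 42

a_in_sys1 = np.array([130.0, 5.0])  # point A in camera 1's relative system
a_in_sys2 = a_in_sys1 + (O1 - O2)   # the same point A in camera 2's relative system
print(a_in_sys2)                    # -> [30. 5.]
```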
In the multi-camera positioning scene described above, the homography matrices of the cameras generally differ, owing to differences in camera position, pose, and intrinsic parameters. For example, suppose camera A and camera B have the same intrinsic parameters but different poses; even if they were placed at the same position and each acquired an image of the same target object at a fixed location, the pixel positions of the target object in image A (from camera A) and image B (from camera B) would differ.
Then, in a multi-camera positioning scenario, the homography matrix of each camera needs to be calibrated. At present, homography matrix calibration is generally realized for a single camera, and the following two common methods are adopted:
first, homography matrices are solved by the camera's internal and external parameters. Wherein, the internal and external parameters can include but are not limited to: the position (world coordinates), pose, pixel size, focal length, etc. of the camera. In this calculation, it is necessary to finally acquire the homography matrix of the camera by means of the relationship among the pixel coordinate system, the image physical coordinate system, the camera coordinate system, and the world coordinate system. Not described in detail herein.
Second, obtaining the homography matrix by manual calibration. Within the camera's image acquisition range, calibration personnel select a number of fixed positions on the road, such as lane-line vertices, zebra-crossing corner points, or signboards on either side of the road, and manually measure the world coordinates of these positions; the homography matrix of the camera is then computed by combining these world coordinates with the pixel coordinates of the fixed positions in the image acquired by the camera. The intrinsic and extrinsic parameters of the camera can further be back-calculated from the result.
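A sketch of this second, manual procedure using OpenCV is given below; the point values are illustrative, and at least four non-collinear correspondences are needed. The homography is solved in the pixel-to-world direction used throughout this document.

```python
import numpy as np
import cv2

# Manually measured correspondences: pixel (u, v) -> world (x_w, y_w).
# Values are illustrative only (e.g., lane-line vertices, zebra-crossing corners).
pixel_pts = np.array([[312, 540], [860, 552], [455, 310], [720, 318]], dtype=np.float64)
world_pts = np.array([[10.0, -3.5], [10.0, 3.5], [60.0, -3.5], [60.0, 3.5]], dtype=np.float64)

H, _ = cv2.findHomography(pixel_pts, world_pts)  # 3x3 pixel-to-world homography
```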
Compared with the first approach, the second is better for positioning accuracy, but it is extremely labor-intensive and also requires closing the road. Calibrating the homography matrices of many cameras in this way would require prolonged road closures and an enormous calibration workload, wasting manpower and material resources and making camera-based positioning difficult.
In a multi-camera positioning scene, calibration by the first approach requires separately acquiring the intrinsic and extrinsic parameters of every camera and performing repeated calculations on the acquired data, making data acquisition cumbersome and the computation heavy; calibration by the second approach requires prolonged road closures and consumes large amounts of manpower and material resources, and can hardly meet the positioning needs of a multi-camera scene such as an expressway.
In view of this, embodiments of the present disclosure provide a positioning method and a positioning system, described below with reference to the accompanying drawings.
First, the positioning system provided by an embodiment of the present disclosure is described with reference to the system architecture diagram of fig. 5. As shown in fig. 5, the positioning system 500 includes an electronic device 510 and n cameras 520, where n is an integer greater than 1. The cameras 520 are used to capture images, which may specifically include, but are not limited to: video images and/or photographs.
As noted above, a camera's imaging behavior is jointly determined by its intrinsic and extrinsic parameters. The intrinsic parameters may include, but are not limited to: the focal length, the number of pixels per unit size, and the pixel coordinates of the image center point (with the pixel coordinate system defined as above). The extrinsic parameters may include, but are not limited to: the pitch angle, height, and position of the camera. Neither list is exhaustive; for example, the extrinsic parameters may also include the camera's attitude angles, namely the yaw, pitch, and roll angles.
When the intrinsic and extrinsic parameters of two cameras are all identical, the poses of the cameras are identical, and if the homography matrix is used to characterize the mapping between the pixel coordinate system and the relative world coordinate system, the homography matrices of two cameras with the same pose are theoretically identical. In an actual deployment of multiple cameras, however, differences in pitch angle, installation height, installation position, and the like produce pose differences, so the homography matrices of the cameras differ.
In the positioning system 500 provided by the embodiments of the present disclosure, the intrinsic parameters of the plurality of cameras 520 are identical. On this premise, only the homography matrix of one camera needs to be calibrated; the homography matrices of the remaining cameras can then be obtained by migration calculation from the calibrated camera. This makes the homography matrices of the other cameras convenient to obtain, reduces the manpower and material consumption of multi-camera calibration, and lowers the positioning cost, as described in detail later.
In the positioning system 500 shown in fig. 5, the cameras 520 may communicate with the electronic device 510 by wire or wirelessly; wireless communication schemes may include, but are not limited to, 2G/3G/4G/5G. The communication mode may also be designed differently depending on the form of the electronic device 510, as described in detail later.
Based on the communication relationship between the two, any one of the cameras 520 may transmit the captured image to the electronic device 510. Wherein camera 520 may actively transmit images to the electronic device, e.g., periodically transmit images, and, e.g., transmit images after a communication connection is established; alternatively, camera 520 may send an image to the electronic device in response to receiving a request from the electronic device. The embodiments of the present disclosure are not limited in this regard.
Correspondingly, the electronic device 510 may receive images from the cameras 520 and perform calibration and positioning functions on the received images. By way of example, fig. 5 shows two processing modules of the electronic device 510: a calibration module 511, configured to calibrate (or acquire) the homography matrix of each camera 520 from the received images, and a positioning module 512, configured to position target objects in the received images. The specific implementation of these two modules is described in detail later. It will be appreciated that the module division shown in fig. 5 is essentially functional; in an actual implementation, the calibration module 511 and the positioning module 512 may be separately disposed or integrated into the same processor (or processing module).
In the positioning system 500 shown in fig. 5, the electronic device 510 may be further referred to as a first electronic device, for performing the positioning method provided in the embodiments of the present disclosure, which will be described in detail later.
For example, the electronic device 510 may be embodied as a processor or processing chip in one of the cameras 520, in which case the electronic device 510 is effectively one of the cameras 520. That camera may then communicate with the other cameras 520 by wire, or wirelessly with the cameras 520 within its wireless coverage area. For instance, within the WiFi coverage area where the electronic device 510 is located, it may communicate over WiFi with all cameras 520 in that area; cameras 520 outside the WiFi coverage area may be reached indirectly by forwarding through intermediate cameras 520.
Alternatively, the electronic device 510 may be any electronic device in communication with the cameras 520, or a processor within such a device. Based on this communication connection, a camera 520 may send acquired images to the electronic device 510, which in turn may locate target objects in the received images.
In the embodiment of the disclosure, the electronic device 510 (an execution body of the positioning method) may be specifically one or more of a vehicle, an unmanned aerial vehicle, a network device (e.g. a server), and a terminal.
A terminal, also called user equipment (UE), is a device that provides voice and/or data connectivity to a user, such as a handheld or vehicle-mounted device with a wireless connection function. Common terminals include, for example: mobile phones, tablets, notebook computers, palmtop computers, mobile internet devices (MID), and wearable devices such as smart watches, smart bracelets, and pedometers.
The network device may be a network-side device, for example a Wireless-Fidelity (WiFi) access point (AP), or a base station of a next-generation communication system, such as a 5G NR base station (a 5G gNB, or a small cell, micro cell, or transmission reception point (TRP)); it may also be a relay station, access point, vehicle-mounted device, wearable device, or the like. Base stations differ between communication systems of different communication schemes: for the sake of distinction, the base station of a 4G communication system is referred to as an LTE eNB, the base station of a 5G communication system as an NR gNB, and a base station supporting both 4G and 5G as an eLTE eNB. These names are for convenience of distinction only and are not limiting.
Taking the scenario of fig. 1 as an example, the electronic device 510 of the positioning system 500 shown in fig. 5 may be embodied as the vehicle 120 in fig. 1, and the vehicle 120 may communicate with the plurality of cameras 110 to perform the positioning method provided by the embodiments of the present disclosure. For example, the vehicle 120 may locate its own position based on an image from a camera 110; as another example, the vehicle 120 may locate the scene of a traffic accident occurring ahead of it, based on an image from a camera 110.
In one possible embodiment of fig. 1, the scenario may further include one or more of: a terminal (e.g., a mobile phone) carried by a passenger in the vehicle 120, an unmanned aerial vehicle performing a flight mission, and other network devices. The vehicle 120 may then establish communication connections with one or more of the terminal, the unmanned aerial vehicle, and the network devices to form a vehicle-to-everything (V2X) system.
Next, a positioning method performed by the electronic device 510 side is described.
For convenience of explanation, hereinafter, a camera for which the homography matrix has been calibrated is referred to as a first camera, and a camera for which the homography matrix is unknown is referred to as a second camera. It will be appreciated that in a multi-camera scenario, the number of first cameras may be at least one, and the number of second cameras may be at least one, as embodiments of the present disclosure are not particularly limited. For example, in the multi-camera positioning scenario shown in fig. 1, the camera 111 may be a first camera, and the cameras 112 and 113 may be used as second cameras, so that the homography matrices of the cameras 112 and 113 may be calibrated according to the embodiments of the present disclosure, and further, when any one of the cameras 111 to 113 acquires an image, an object in the image may be positioned.
The embodiment of the present disclosure is not particularly limited as to the source mode of the homography matrix of the first camera.
In an exemplary embodiment, the homography matrix of the first camera may be entered in advance by a user (e.g., a maintenance person).
In another exemplary embodiment, the homography matrix of the first camera may be obtained by calibration in advance.
For example, the homography matrix of the first camera may be obtained by acquiring the world coordinates of preset calibration points and their pixel coordinates in an image, and then calculating the mapping relationship between the world coordinates and the pixel coordinates.
As another example, calibration points may be determined dynamically by means of a movable calibration device, with positioning of the calibration points provided by a positioning device mounted on it. The pixel coordinates of a calibration point are then obtained from the image collected by the first camera, and its world coordinates from the positioning device, from which the homography matrix of the first camera is calculated. The movable calibration device may be a vehicle, an unmanned aerial vehicle, or a ground robot; the positioning device it carries may include, but is not limited to, one or more of a real-time kinematic (RTK) positioning tag, an ultra-wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver, and there may be one or more positioning devices. Calibration points can thus be determined dynamically while the calibration device moves, the homography matrix can be calibrated automatically without closing the road, and the adverse effect of manual measurement on calibration and positioning accuracy is avoided, improving positioning accuracy.
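A sketch of how such dynamically determined calibration points might be consumed is given below; the structure of the collected samples is an assumption, and RANSAC is used because a moving tag may occasionally yield outlier detections.

```python
import numpy as np
import cv2

def calibrate_from_moving_tag(samples):
    """samples: list of ((u, v), (x_w, y_w)) pairs collected while the
    calibration vehicle/drone/robot moves through the camera's field of
    view, pairing the detected tag pixel with its RTK/UWB/GPS world fix."""
    if len(samples) < 4:
        raise ValueError("need at least 4 calibration points")
    pix = np.array([s[0] for s in samples], dtype=np.float64)
    wld = np.array([s[1] for s in samples], dtype=np.float64)
    H, _ = cv2.findHomography(pix, wld, method=cv2.RANSAC)  # robust to outliers
    return H
```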
On the premise that the homography matrix of the first camera is known, when the electronic device receives the image from the first camera, the homography matrix of the first camera can be utilized to locate the object in the image.
On the other hand, the electronic device may also receive images from the second camera, at which point the localization of the traffic event may be achieved based on the second camera in the manner shown in fig. 6. As shown in fig. 6, the positioning method may include the steps of:
s602, a first image from a first camera is received, wherein the first image comprises a first datum point and a plurality of first datum points.
S604, receiving a second image from a second camera, wherein the second image comprises a second datum point and a plurality of second datum points.
S606, determining the pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and the world coordinates of the first reference point and the world coordinates of the second calibration point.
And S608, determining a homography matrix of the second camera according to the pixel coordinates and the world coordinates of the second calibration points, wherein the homography matrix is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera.
S610, when a third image from the second camera is received, determining a first world coordinate of the target object in the third image by using the homography matrix of the second camera.
The first world coordinate may also be referred to as target world coordinate.
In the positioning method shown in fig. 6, S602 to S608 can be regarded as the calibration process for the homography matrix of the second camera, while S610 corresponds to the scenario in which, once the homography matrix of the second camera has been calibrated, objects in subsequently acquired images are positioned.
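Tying the steps together, the following sketch chains S606 (synthesizing pixel coordinates with the illustrative pixel_ordinate and pixel_abscissa helpers from earlier in this document), S608 (solving the homography), and S610 (positioning a target); it rests on the same assumed pinhole model and is not the patent's implementation.

```python
import numpy as np
import cv2

def calibrate_second_camera(cal_world_pts, cam2_height, cam2_pitch,
                            c_x, c_y, f, dy):
    """Synthesize the second camera's pixel coordinates for its calibration
    points (S606) and solve its pixel-to-world homography (S608).
    Relies on the illustrative pixel_ordinate / pixel_abscissa sketches above."""
    pix = np.array([[pixel_abscissa(x, y, cam2_height, cam2_pitch, c_x, f, dy),
                     pixel_ordinate(x, cam2_height, cam2_pitch, c_y, f, dy)]
                    for x, y in cal_world_pts], dtype=np.float64)
    wld = np.asarray(cal_world_pts, dtype=np.float64)
    H2, _ = cv2.findHomography(pix, wld)
    return H2

def locate(H2, u, v):
    """S610: map a target object's pixel coordinates to world coordinates."""
    w = H2 @ np.array([u, v, 1.0])
    return w[:2] / w[2]
```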
The reference points and calibration points referred to in fig. 6 will now be described in detail.
On one hand, the reference points indicate the pose of a camera. In the embodiment shown in fig. 6, the following reference point condition is satisfied: the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image, and the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera.
For example, suppose the world coordinates of the first reference point in the first world coordinate system are (a, b), and the world coordinates of the second reference point in the second world coordinate system are also (a, b), where the first world coordinate system is constructed with the ground projection position of the first camera as its origin, the second world coordinate system is constructed with the ground projection position of the second camera as its origin, and the coordinate axes of the two systems are parallel. In this case, the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera. In addition, the first reference point falls within the field of view of the first camera, that is, the first image acquired by the first camera includes the first reference point, whose pixel coordinates in the first image may be (c1, d1); similarly, the second reference point falls within the field of view of the second camera, and its pixel coordinates in the second image are (c2, d2). Since the pixel coordinate systems of the two images are defined in the same manner, if c1 = c2 and d1 = d2, the pixel coordinates of the first reference point in the first image are the same as those of the second reference point in the second image.
When the reference point condition above is satisfied, the first camera and the second camera have the same pose. This property is what allows the subsequent calibration based on the calibration points, as described in detail later.
In the embodiment of the disclosure, the number and positions of the first reference points and the second reference points may be preset in advance in a customized manner, which is not particularly limited in the embodiment of the disclosure.
For example, the first image may include one or more first reference points, and the second image one or more second reference points. In an actual scene, if there are several, the numbers of first and second reference points are the same and they correspond one to one, each corresponding pair satisfying the reference point condition above (the same pixel position in the respective images, and the same relative position with respect to the respective camera).
The reference point may be a preset ground marker, and the ground marker may be a self-contained marker in the scene or a marker set and marked by the user.
In a possible embodiment, the first reference point may be an intersection point of a base of another camera (for convenience of description, referred to as a third camera) in an image acquisition view of the first camera and the ground, and the second reference point may be an intersection point of a base of another camera (for convenience of description, referred to as a fourth camera) in an image acquisition view of the second camera and the ground, where a relative position of the third camera with respect to the first camera is the same as a relative position of the fourth camera with respect to the second camera.
In this embodiment, the third camera may itself be the second camera, or the fourth camera may itself be the first camera. For example, if cameras 1, 2, and 3 are arranged in a line at equal spacing, in that order, the intersection of the base of camera 2 with the ground lies in the image acquisition field of camera 1, and the intersection of the base of camera 3 with the ground lies in the image acquisition field of camera 2. Suppose camera 1 is the first camera and its homography matrix has been obtained; cameras 2 and 3 can each then serve as a second camera in the positioning method of this scheme. For camera 2, the intersection of the base of camera 2 with the ground serves as the first reference point of camera 1, and the intersection of the base of camera 3 with the ground serves as the second reference point of camera 2.
In addition to the intersection point of the camera base and the ground, other objects with identification function in the scene can be used as reference points. For example, the end point of a lane line in the image acquisition field of the camera may also be taken as a reference point; for another example, a sign within the image acquisition field of view of the camera may be used as the reference point.
In addition to the fixed-position markers in the aforementioned scene, in another possible embodiment, the first reference point and the second reference point may be positions where the movable markers are located. The movable marker may be a movable marker with a prominent color or display effect, for example, a dot-like light source, a marker with an arbitrary shape with a fluorescent display effect, a vehicle with an indication effect, or the like, which is not exhaustive. The user can control the movable markers to move freely, and select a proper position to execute the scheme.
It should be noted that the first reference point and the second reference point need to satisfy the reference point condition but may be indicated by different objects. For example, the first reference point may be the intersection of a lamp post with the ground within the image acquisition field of view of the first camera, while no natural marker satisfying the reference point condition exists within the image acquisition field of view of the second camera; in that case, the position of a dot-shaped marker with a prominent color may be preset as the second reference point.
In any of the foregoing embodiments, the relative positions of the first reference point and the second reference point with respect to their respective cameras are the same in actual three-dimensional space. If the pixel-coordinate part of the reference point condition is not yet satisfied, the posture of the first camera may be adjusted, and/or the posture of the second camera may be adjusted, so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In a specific implementation scenario, adjusting the pose of a camera (the first camera and/or the second camera) may include, but is not limited to, adjusting the attitude angles of the camera, that is, one or more of the yaw angle, pitch angle, and roll angle.
In a specific adjustment, one of the first camera and the second camera can be taken as a reference and the posture of the other adjusted. For example, the posture of the first camera is kept stationary and the posture of the second camera is adjusted until the aforementioned reference point condition is satisfied. Alternatively, the pixel coordinates of the reference point may be preset, and the postures of both the first camera and the second camera adjusted so that the pixel coordinates of the first reference point and the second reference point coincide with the preset value.
By way of example, fig. 7 shows a schematic view of camera pose adjustment. In fig. 7, point A marks the preset reference point: the pixel coordinates at which the intersection of the nearest camera base and the ground should appear within the camera's image acquisition field of view are preset. The posture of the camera can then be adjusted accordingly. As shown in fig. 7A, the actual image position of that intersection in the image received from the camera is marked as point B; since point A and point B do not coincide, the reference point condition is not satisfied in fig. 7A. The posture is adjusted until point A coincides with point B, as shown in fig. 7B, thereby satisfying the reference point condition.
In addition, the foregoing adjustment process may be automatically implemented, or may be implemented by outputting a prompt message to prompt the user to adjust.
For example, in one possible embodiment, before the positioning method shown in fig. 6 is performed, image data from the first camera and the second camera may be received respectively, and if the pixel coordinates of the first reference point and the second reference point in the respective images are different, prompt information may be output, where the prompt information is used to prompt the user to adjust the pose of the first camera and/or the second camera.
For another example, in another possible scenario, the camera is mounted on a motorized pan-tilt head and is fixed to its base through the pan-tilt head. In this case, the posture of the camera may be adjusted automatically by the motorized pan-tilt head so that the aforementioned reference point condition is satisfied after adjustment.
On the other hand, the calibration points are used for calibrating the homography matrix of the camera. The first image comprises a plurality of first calibration points, and the second image comprises a plurality of second calibration points. The first calibration points and the second calibration points are the same in number and in one-to-one correspondence; in a first calibration point and a second calibration point which have a corresponding relationship, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera.
Unlike the reference points, the calibration points must be plural. In a specific scenario, the number of first calibration points and the number of second calibration points may each be at least 4; this is related to the calibration requirement of the homography matrix and is described in detail later.
The calibration points are likewise preset in advance. As with the reference points, they can be indicated by fixed-position markers or movable markers within the image acquisition range; this is not repeated here.
The positional relationship among the plurality of first calibration points (and, similarly, among the plurality of second calibration points) may also be customized and preset; the embodiments of the present disclosure do not particularly limit this. For example, the plurality of first calibration points may be arranged in a rectangular array, in a circle, or irregularly.
As before, the reference points and the calibration points are all preset in advance, and a reference point may coincide with a calibration point. In an exemplary embodiment, the first image may include one first reference point and N first calibration points, where the first reference point has the same world coordinates (in the first world coordinate system of the first camera) as one of the first calibration points; correspondingly, the second image may include one second reference point and N second calibration points, where the second reference point has the same world coordinates (in the second world coordinate system of the second camera) as one of the second calibration points; and the first reference point and the second reference point have the same pixel coordinates.
As before, the homography matrix of the camera is used to characterize the mapping relationship between the pixel coordinate system and the world coordinate system, based on which the pixel coordinates and the world coordinates of a plurality of second calibration points need to be acquired before the homography matrix of the second camera is acquired.
In one aspect, world coordinates of the second calibration point are obtained.
As before, in the embodiments of the present disclosure, the world coordinates of the second calibration point are actually its world coordinates in the second world coordinate system, and the second world coordinate system may be an absolute world coordinate system or a relative world coordinate system.
Since the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera, the world coordinates of the second calibration point corresponding to a first calibration point can be determined from the world coordinates of that first calibration point in the first image.
Specifically, world coordinates of a second calibration point corresponding to the first calibration point may be determined based on a relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in the first world coordinate system, and world coordinates of the second camera in the second world coordinate system.
As before, the first world coordinate system is the same as or different from the second world coordinate system, and this includes the following two cases:
first, the first world coordinate system is identical to the second world coordinate system, i.e., both are the same world coordinate system.
In this case, in the first world coordinate system, the world coordinates of the first camera, the world coordinates of the second camera, and the world coordinates of the first calibration point are all known, and the relative position of the first calibration point with respect to the first camera is the same as the relative position of the second calibration point with respect to the second camera, so that the world coordinates of the second calibration point (one corresponding to the first calibration point) can be calculated based on the relative positional relationship equality.
For example, in the first world coordinate system, the world coordinates of the first camera are (x1, y1), the coordinates of the i-th first calibration point are (x1i, y1i), the world coordinates of the second camera are (x2, y2), and the world coordinates of the i-th second calibration point are (x2i, y2i). Then x2i satisfies x2i = x1i - x1 + x2, and y2i satisfies y2i = y1i - y1 + y2. That is, the world coordinates of the i-th second calibration point may be noted as (x1i - x1 + x2, y1i - y1 + y2), where i is an integer greater than 0.
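This translation is simple enough to state as code. The following is a minimal Python sketch of the shared-coordinate-system case; the function name and argument layout are illustrative, not part of the scheme.

```python
import numpy as np

# Sketch of the shared-world-coordinate case: the i-th second calibration
# point is the i-th first calibration point translated by the offset
# between the two camera positions.
def second_calibration_point(p1i, cam1, cam2):
    """All arguments are (x, y) ground coordinates in the common world frame."""
    p1i, cam1, cam2 = (np.asarray(p, dtype=float) for p in (p1i, cam1, cam2))
    return p1i - cam1 + cam2   # (x1i - x1 + x2, y1i - y1 + y2)
```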
Second, the first world coordinate system is different from the second world coordinate system.
In this scenario, the definition of the first world coordinate system and the second world coordinate system may be the same.
In an exemplary embodiment, the origin of the first world coordinate system is the ground projection of the first camera and the origin of the second world coordinate system is the ground projection of the second camera; the lateral axis of the first world coordinate system is parallel to the lateral axis of the second world coordinate system, and the longitudinal axis of the first world coordinate system is parallel to the longitudinal axis of the second world coordinate system.
In another exemplary embodiment, the origin of the first world coordinate system is the intersection of the base where the first camera is located and the ground, and the origin of the second world coordinate system is the intersection of the base where the second camera is located and the ground; the lateral axis of the first world coordinate system is parallel to the lateral axis of the second world coordinate system, and the longitudinal axis of the first world coordinate system is parallel to the longitudinal axis of the second world coordinate system.
When the first world coordinate system is different from the second world coordinate system but is defined identically, for a corresponding pair of the first and second calibration points, the world coordinates of the first calibration point in the first world coordinate system are identical to the world coordinates of the second calibration point in the second world coordinate system. In this case, the world coordinates of the first calibration point in the first world coordinate system may be directly acquired and used as the world coordinates of the corresponding second calibration point. The method can effectively reduce the data processing amount, can conveniently obtain the world coordinates of the second calibration point without complex world coordinate conversion, and is beneficial to improving the calibration efficiency and the positioning efficiency.
On the other hand, the pixel coordinates of the second calibration point are acquired.
When acquiring the pixel coordinates of the second calibration point, the pitch angle of the second camera may first be determined according to the attitude parameters of the first camera, the attitude parameters of the second camera, and the world coordinates of the first reference point; the pixel coordinates of the second calibration point in the second image are then determined according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera, and the world coordinates of the second calibration point (the manner of obtaining these quantities is described above and is not repeated here).
Specifically, fig. 8 is a schematic diagram of the principle of acquiring the pitch angle of the second camera. For simplicity of explanation, a world coordinate system is constructed for each camera: its origin is the projection point of the camera's optical center on the ground, its horizontal axis (i.e., the X-axis) is the projection line of the optical axis on the ground, and its vertical axis (i.e., the Y-axis) is perpendicular to the horizontal axis in the horizontal plane. Thus, the first world coordinate system of the first camera and the second world coordinate system of the second camera have different origins but parallel coordinate axes. Also for simplicity of explanation, the X-axes of the two are exemplarily depicted as collinear, as shown in fig. 8.
And, for simplicity of explanation, it is assumed that the first reference point B1 is located on the ground projection line of the optical axis of the first camera (or its extension), that the first reference point B1 and the second reference point B2 satisfy the aforementioned reference point condition, and that their coordinates in the respective world coordinate systems are identical, so that the second reference point B2 is located on the ground projection line of the optical axis of the second camera (or its extension). At this time, as shown in fig. 8, the first reference point B1 and the second reference point B2 both fall on the X-axis. Thus, the world coordinates of the first reference point in the first world coordinate system can be noted as (x10, 0), the world coordinates of the second reference point in the second world coordinate system as (x20, 0), and x10 = x20.
based on the imaging principle of the camera, the optical axis is the center line of the light beam passing through the center point (i.e. the optical center) of the lens of the camera, so that the optical axis is actually perpendicular to the image formed by the camera, and the intersection point of the optical axis and the image is the center point of the image. As shown in fig. 8, the optical axis of the first camera is a straight line where the points O1, O1', and M1 are located, where the point O1 is the optical center of the first camera, the point O1' is the center point of the first image acquired by the first camera, the point M1 is the intersection point of the optical axis and the X axis, and the point O1' can be regarded as the pixel position of the point M1 in the first image. Similarly, the optical axis of the second camera is a straight line where the points O2, O2', and M2 are located, where the point O2 is the optical center of the second camera, the point O2' is the center point of the second image acquired by the second camera, the point M2 is the intersection point of the optical axis and the X axis, and the point O2' can be regarded as the pixel position of the point M2 in the second image.
As before, the pixel coordinate systems of the first image and the second image take the same definition. Illustratively, in the scene shown in fig. 8, taking the first image as an example, the origin of the pixel coordinate system is the upper-left corner of the first image, the horizontal axis (denoted the U-axis) points to the right, and the vertical axis (denoted the V-axis) points downward. The definition for the second image is the same and is not repeated. Since the two cameras have the same internal parameters, the pixel coordinates of the center point O1' of the first image and of the center point O2' of the second image may both be expressed as (cx, cy).
As shown in fig. 8, the pixel position of the first reference point B1 in the first image is denoted B1'. Since the first reference point B1 and the point M1 fall on the X-axis, the pixel abscissas of point B1' and point O1' are the same; for convenience of explanation, the pixel coordinates of point B1' are noted (cx, v10). Similarly, the pixel position of the second reference point B2 in the second image is denoted B2'; since the second reference point B2 and the point M2 fall on the X-axis, the pixel abscissas of point B2' and point O2' are the same, and the pixel coordinates of point B2' are noted (cx, v20).
When the aforementioned reference point condition is satisfied, the pixel coordinates of the first reference point B1 (actually the pixel coordinates of point B1') are the same as the pixel coordinates of the second reference point B2 (actually the pixel coordinates of point B2'). That is, v20 = v10.
Based on the reference point condition, the included angle γ1 and the included angle γ2 in fig. 8 are equal. The included angle γ1 is the angle between the straight line through points B1, B1', and O1 and the optical axis of the first camera; the included angle γ2 is the angle between the straight line through points B2, B2', and O2 and the optical axis of the second camera.
The included angle γ1 is related to the pitch angle of the first camera (indicated as angle α1 in fig. 8) and to the included angle β1, and specifically satisfies γ1 = α1 - β1. The angle α1 is the pitch angle of the first camera and can be obtained from the attitude parameters of the first camera; the angle β1 is the angle between the X-axis and the straight line through points B1, B1', and O1.
As shown in fig. 8, the included angle β1 is actually one angle of the triangle formed by the point O1, the point O10, and the point B1, where the point O10 is the origin of the first world coordinate system and is also the ground projection point of the point O1. In other words, the side length between point O1 and point O10 is actually the height (or mounting height) of the first camera, hereinafter denoted H1. Then, in the triangle formed by the points O1, O10, and B1, the following relationship holds: tan β1 = H1/x10, so β1 = arctan(H1/x10). Further, γ1 = α1 - arctan(H1/x10).
Similarly, the included angle β2 is actually one angle of the triangle formed by the point O2, the point O20, and the point B2, where the point O20 is the origin of the second world coordinate system and is also the ground projection point of the point O2. In other words, the side length between point O2 and point O20 is actually the height of the second camera, hereinafter denoted H2. Then, in this triangle, tan β2 = H2/x20, so β2 = arctan(H2/x20). Further, γ2 = α2 - arctan(H2/x20), where α2 is the pitch angle of the second camera.
As before, γ1 = γ2, so the following relationship holds: α1 - arctan(H1/x10) = α2 - arctan(H2/x20). Further, the pitch angle α2 of the second camera can be expressed as: α2 = α1 - arctan(H1/x10) + arctan(H2/x20).
In this way, the pitch angle α2 of the second camera can be determined based on the attitude parameters of the first camera (α1 and H1), the attitude parameter of the second camera (H2), and the world coordinates of the first reference point (x10, with x10 = x20).
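As a compact restatement, the following Python sketch computes the second camera's pitch angle from these quantities; the names are illustrative, and angles are assumed to be in radians.

```python
import math

# Sketch of the pitch-angle relation derived from fig. 8:
# alpha2 = alpha1 - arctan(H1/x10) + arctan(H2/x20), with x10 == x20.
def second_camera_pitch(alpha1, h1, h2, x10):
    return alpha1 - math.atan(h1 / x10) + math.atan(h2 / x10)
```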
In addition, the side length between the optical center O2 and the second image center O2' actually corresponds to the focal length f of the second camera; converted into a length in the pixel coordinate system, it may be expressed as f/dy, where f is the focal length of the second camera and dy is the physical size of a single pixel in the vertical direction. In the embodiment shown in fig. 8, this side length may also be calculated through trigonometric functions.
Specifically, in the triangle formed by the optical center O2, the point O2', and the point B2', the following trigonometric relationship is satisfied: tan γ2 = (v20 - cy)/(f/dy). From this, the following relationship can further be obtained: f/dy = (v20 - cy)/tan γ2.
if the reference point condition is satisfied by the first camera and the second camera being identical, γ1=γ2, the relationship may be expressed as:
after determining the pitch angle of the second camera, the pixel coordinates of the second calibration point in the second image can be determined according to the attitude parameter of the first camera, the attitude parameter of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
At this time, reference may be made to the calibration principle schematic shown in fig. 9. The scene shown in fig. 9 is the same as that of fig. 8, and the definitions of the first world coordinate system, the second world coordinate system, the pixel coordinate system, and the respective same points are the same as those of fig. 8, and are not repeated here.
As before, a plurality of second calibration points may be included in the second image. For convenience of explanation, the point G2i denotes the i-th second calibration point, whose world coordinates in the second world coordinate system may be expressed as G2i(x2i, y2i). The pixel position of the i-th second calibration point in the second image is noted as point G2i', whose pixel coordinates can be noted as (u2i, v2i).
For the sake of brief description, fig. 9 shows the case where the i-th second calibration point is located on the ground projection line of the optical axis of the second camera (or its extension); that is, the i-th second calibration point G2i falls on the X-axis, so its Y-axis component y2i is 0 and its world coordinates in the second world coordinate system may be expressed as G2i(x2i, 0). As a result, the pixel abscissa of the second calibration point's pixel position in the second image (point G2i') is the same as the pixel abscissa of the center point of the second image (point O2'); that is, the pixel coordinates of point G2i' can be denoted (cx, v2i).
As shown in fig. 9, the point G2i', the point O2', and the optical center O2 form a right triangle. For ease of description, this right triangle is referred to simply as the first triangle, with the first side of the first triangle being perpendicular to the second side.
The first side is the edge between the center point O2' of the second image and the point G2i'; based on the pixel coordinates of these two points, the length of the first side in the pixel coordinate system is v2i - cy.
The second side is the edge between the optical center O2 of the second camera and the center point O2' of the second image. As shown in fig. 9, the second side lies along the optical axis of the second camera, and its length is related to the internal parameters of the camera; specifically, it corresponds to the focal length f of the second camera. To unify the side lengths, this length is converted into the pixel coordinate system as f/dy, where f is the focal length of the second camera and dy is the physical size of a single pixel in the vertical direction.
In addition, for ease of illustration, the straight line determined by the point G2i' and the optical center O2 is referred to simply as the first straight line; as shown in fig. 9, the first straight line also passes through the second calibration point G2i. The angle between the first straight line and the optical axis of the second camera is referred to as the first included angle, that is, γ2i in fig. 9.
In this way, the first triangle may be processed using a trigonometric function resulting in the pixel ordinate of the second calibration point in the second image.
Specifically, the first included angle of the first triangle may be determined according to the world coordinates of the second calibration point, the height and the pitch angle of the second camera.
As shown in fig. 9, the first included angle γ2i, the pitch angle α2 of the second camera, and the second included angle β2i satisfy: γ2i = α2 - β2i. The second included angle β2i can be obtained through a trigonometric function in the triangle formed by the second calibration point G2i, the optical center O2, and the origin O20 of the second world coordinate system: β2i = arctan(H2/x2i). The first included angle can therefore be expressed as: γ2i = α2 - arctan(H2/x2i). Further, substituting the expression for α2 obtained in fig. 8 above: γ2i = α1 - arctan(H1/x10) + arctan(H2/x20) - arctan(H2/x2i).
then, pixel coordinates of the second calibration point in the second image are determined based on trigonometric function relationships satisfied between the first edge, the second edge, and the first included angle of the first triangle.
In the embodiment shown in fig. 9, for the i-th second calibration point G2i, the first included angle γ2i, the first side, and the second side satisfy the following relationship: tan γ2i = (v2i - cy)/(f/dy).
thus, the ith second calibration point G is converted 2i Is of the pixel ordinate v of (2) 2i Satisfies any one of the following formulas:
or alternatively, the process may be performed,
or alternatively, the process may be performed,
in addition, the side length (f/dy) of the second side may have other representations. For example, in the embodiment shown in FIG. 8 described above, f/dy may also be expressed as:then, the ith second calibration point G 2i Is of the pixel ordinate v of (2) 2i The following formula is satisfied:
or alternatively, the process may be performed,
or alternatively, the process may be performed,
or alternatively, the process may be performed,
it is not exhaustive. To avoid ambiguity, the meaning of each of the identifiers in the above formula is repeated here. v 2i Representing the pixel ordinate of the ith second calibration point on the second image; f represents the focal length of the second camera; dy represents the number of pixels per unit size; alpha 2 Representing a pitch angle of the second camera; h 2 Representing the height of the second camera; x is x 2i Identifying world abscissas of the ith second calibration point in the second world coordinate system; v 10 The pixel ordinate, v, representing the first reference point 20 The ordinate of the pixel representing the second reference point, v 10 =v 20 ;c y A pixel ordinate representing a center point of the second image (or the first image); alpha 1 Representing a pitch angle of the first camera; h 1 Representing the height of the first camera; x is x 10 Representing the world abscissa, x, of a first reference point in a first world coordinate system 20 Representing the world abscissa, x, of the second reference point in the second world coordinate system 10 =x 20
Then, in the embodiment shown in fig. 9, for a second calibration point lying on the ground projection line of the optical axis of the second camera (or its extension, i.e., the X-axis), the pixel coordinates of the i-th second calibration point in the second image can be determined in the manner described above as (cx, v2i); the expression for v2i is as above and is not repeated.
In a real scene, however, a second calibration point may also fall outside the ground projection line of the optical axis of the second camera (or its extension, i.e., the X-axis). In this case, the pixel coordinates of the i-th second calibration point in the second image are (u2i, v2i).
For example, reference may be made to the scenario illustrated in fig. 10, where the second calibration point G2i has a Y-axis component in addition to its X-axis component.
In this case, the pixel ordinate of the second calibration point G2i can still be calculated in the manner shown in fig. 9. As shown in fig. 10, the calculation of v2i can be carried out by means of a first reference point and a second reference point.
The pixel ordinate of the first reference point is the same as that of the second calibration point's pixel position, and the pixel abscissa of the first reference point is the same as that of the center point of the second image. As shown in fig. 10, the first reference point is denoted P1, and its pixel coordinates in the pixel coordinate system may be noted (cx, v2i).
The second reference point is located in the second world coordinate system; its transverse axis component is identical to that of the second calibration point, and its longitudinal axis component is zero. As shown in fig. 10, the second reference point is denoted P2, and its coordinates in the second world coordinate system may be noted (x2i, 0).
Based on this, and as can be seen by comparing fig. 9 and fig. 10, the first reference point P1 in fig. 10 corresponds to the point G2i' in fig. 9, and the second reference point P2 in fig. 10 corresponds to the point G2i in fig. 9. In other words, fig. 9 is a special case of the scene shown in fig. 10: when the second calibration point G2i is located on the ground projection line of the optical axis of the second camera, the second calibration point G2i coincides with the second reference point P2, and its pixel position G2i' coincides with the first reference point P1.
In the embodiment shown in fig. 10, the pixel ordinate of the i-th second calibration point in the second image may be calculated based on the trigonometric relationships of the first triangle determined by the optical center O2 of the second camera, the center point O2' of the second image, and the first reference point P1, where the first side of the first triangle is the edge between the center point O2' and the first reference point P1. The first included angle γ2i is still the angle between the optical axis of the second camera and the first straight line, which is now the straight line determined by the first reference point P1 and the optical center O2.
In summary, for any second calibration point within the image acquisition field of view of the second camera, whether or not it falls on the ground projection line of the optical axis of the camera (or its extension), the pixel ordinate of the second calibration point satisfies the expression for v2i given above; this is not elaborated further.
As shown in fig. 10, when the second calibration point is located outside the ground projection line of the optical axis of the second camera, the pixel abscissa of the second calibration point is different from the pixel abscissa of the second image center point, in which case the pixel abscissa of the second calibration point may be determined based on the triangle similarity theorem.
As shown in fig. 10, the second triangle and the third triangle satisfy the triangle similarity theorem. The second triangle is formed by the optical center O2 of the second camera, the second calibration point G2i, and the second reference point P2; the third triangle is formed by the optical center O2 of the second camera, the pixel position G2i' of the second calibration point in the second image, and the first reference point P1.
Since the second triangle is similar to the third triangle, the first ratio is equal to the second ratio. The first ratio is the ratio between the third side, in the second triangle, and the fourth side, in the third triangle; the third side is the edge between the optical center O2 of the second camera and the second reference point P2 (denoted O2P2), and the fourth side is the edge between the optical center O2 and the first reference point P1 (denoted O2P1). The second ratio is the ratio between the fifth side, in the second triangle, and the sixth side, in the third triangle; the fifth side is the edge between the second calibration point G2i and the second reference point P2 (denoted G2iP2), and the sixth side is the edge between the pixel position G2i' and the first reference point P1 (denoted G2i'P1).
Then, the second calibration point shown in fig. 10 satisfies the following similarity relationship: O2P2 / O2P1 = G2iP2 / G2i'P1.
based on this, it can be seen from FIG. 10 that point G 2i World coordinates in the second world coordinate system are (x 2i ,y 2i ) The world coordinate of P2 in the second world coordinate system is (x 2i 0), thus, G 2i P2 is a distance y 2i . I.e. G 2i The distance of P2 is actually the longitudinal axis component of the second calibration point in the second world coordinate system. Based on the fact that the internal parameters of the first camera and the second camera are the same and the relative positions of the paired first calibration point and second calibration point relative to the respective cameras are the same, G 2i The distance of P2 may be calculated from the world coordinates of the i-th first calibration point in the first world coordinate system. For example, when the first world coordinate system and the second world coordinate system satisfy the settings shown in FIG. 8, G 2i The distance of P2 is the longitudinal axis component of the first calibration point in the first world coordinate system. For convenience of description, a distance between the first reference point and the second reference point is denoted by L1.
The pixel coordinates of point G2i' in the pixel coordinate system of the second image are (u2i, v2i), and the pixel coordinates of P1 in the same pixel coordinate system are (cx, v2i); thus, the length of G2i'P1 is actually u2i - cx.
The distance between the optical center O2 and the second reference point P2 (i.e., O2P2) can be calculated by the Pythagorean theorem in the fourth triangle, which is formed by the optical center O2, the second reference point P2, and the origin O20 of the second world coordinate system. As shown in fig. 10, the edge O2O20 is perpendicular to the X-axis, so the fourth triangle is a right triangle and satisfies the Pythagorean theorem: O2P2^2 = O2O20^2 + O20P2^2. The length of O2O20 is actually the height H2 of the second camera, and the length of O20P2 is the X-axis component of the second reference point P2, i.e., x2i. Then O2P2 = sqrt(H2^2 + x2i^2).
The distance between the optical center O2 and the first reference point P1 (i.e., O2P1) may be calculated by the Pythagorean theorem in the fifth triangle, which is formed by the optical center O2, the first reference point P1, and the center point O2' of the second image. At this time, as shown in fig. 10, the side O2O2' lies along the optical axis and is perpendicular to the second image, so the fifth triangle is a right triangle and satisfies the Pythagorean theorem: O2P1^2 = O2O2'^2 + O2'P1^2. The length of O2O2' is actually the focal length f of the second camera, and the length of O2'P1 may be expressed in various ways. In one possible embodiment, the length of O2'P1 is the difference between the V-axis component of the first reference point P1 and the V-axis component of the center point O2'. In another possible embodiment, the length of O2'P1 may be expressed through the tangent of the first included angle γ2i described above, as f·tan γ2i; the calculation of γ2i is as above and is not repeated here. Thus, it can be calculated that O2P1 = sqrt(f^2 + (f·tan γ2i)^2) = f / cos γ2i.
Then, after substituting the expressions for the respective side lengths into the similarity relationship and converting the image-side lengths into pixel units, the following relationship can be obtained: u2i - cx = L1·(f/dy) / (cos γ2i · sqrt(H2^2 + x2i^2)), that is, u2i = cx + L1·(f/dy) / (cos γ2i · sqrt(H2^2 + x2i^2)).
Here u2i represents the pixel abscissa of the i-th second calibration point in the second image; L1 represents the distance between the second calibration point and the second reference point; f represents the focal length of the second camera (or the first camera), and dy the physical size of a single pixel, so that f/dy is the focal length in pixel units; H2 represents the height of the second camera; x2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; cx represents the pixel abscissa of the center point of the second image (or the first image); and γ2i represents the first included angle between the optical axis of the second camera and the first straight line, γ2i = α1 - arctan(H1/x10) + arctan(H2/x20) - arctan(H2/x2i), where α1 represents the pitch angle of the first camera, H1 represents the height of the first camera, and x10 = x20, with x10 the world abscissa of the first reference point in the first world coordinate system and x20 the world abscissa of the second reference point in the second world coordinate system.
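A corresponding Python sketch for the abscissa, under the reconstruction above (f_dy is the focal length in pixel units; names are illustrative):

```python
import math

# Sketch: pixel abscissa u2i of the i-th second calibration point,
# u2i = cx + L1*(f/dy) / (cos(gamma2i) * sqrt(H2^2 + x2i^2)).
def pixel_abscissa(x2i, l1, cx, f_dy, alpha2, h2):
    gamma2i = alpha2 - math.atan(h2 / x2i)   # first included angle
    o2p1 = f_dy / math.cos(gamma2i)          # |O2P1| in pixel units
    o2p2 = math.sqrt(h2 ** 2 + x2i ** 2)     # |O2P2| in world units
    return cx + l1 * o2p1 / o2p2
```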
To sum up, for the i-th second calibration point within the image acquisition field of view of the second camera, its world coordinates are (x2i, y2i) and its pixel coordinates in the second image are (u2i, v2i).
The world coordinates (x2i, y2i) can be determined based on the world coordinates (x1i, y1i) of the corresponding i-th first calibration point.
As for the pixel coordinates (u2i, v2i) of the i-th second calibration point: on the one hand, whether or not the i-th second calibration point is located on the ground projection line of the optical axis of the second camera (or its extension), the pixel ordinate v2i is calculated through the expression for v2i given above. On the other hand, when the i-th second calibration point falls on the ground projection line of the optical axis of the second camera (or its extension), the pixel abscissa u2i is the same as the pixel abscissa of the center point of the second image, that is, u2i = cx; when the i-th second calibration point falls outside that line, the pixel abscissa u2i is calculated through the expression for u2i given above.
Then, the pixel coordinates and world coordinates of the plurality of second calibration points in the second image are obtained in the above manner, respectively, and the homography matrix of the second camera can be determined accordingly.
Specifically, for a given second camera, the pixel coordinates and world coordinates of any object (e.g., a second calibration point) satisfy the following formula: z_c·[u, v, 1]^T = H·[x_w, y_w, 1]^T, where H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, 1]]. Here (x_w, y_w) are the world coordinates of the object (e.g., the second calibration point), (u, v) are its pixel coordinates, H is the homography matrix of the second camera, hij are the matrix parameters of the homography matrix (the subscripts i and j distinguish the parameters and each take values from 1 to 3, with h33 normalized to 1), and z_c is a scale parameter arising from the projection of the three-dimensional coordinates.
Based on the above, the pixel coordinates and world coordinates of the plurality of second calibration points can be substituted into the formula, and each matrix parameter in the homography matrix can be obtained by solving the equation set, so that the homography matrix of the second camera is also obtained.
This formula involves 8 unknown parameters, so at least 4 (i.e., 4 or more) second calibration points are required to calibrate the homography matrix of the second camera by this scheme.
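For illustration, the following is a minimal direct-linear-transform sketch in Python of this solving step; it is not the patent's own code, and it assumes the correspondences are already expressed in the second world coordinate system.

```python
import numpy as np

# Sketch: recover the 8 unknown homography parameters (h33 fixed to 1)
# from at least 4 world<->pixel correspondences via a stacked linear system.
def solve_homography(world_pts, pixel_pts):
    """world_pts: [(xw, yw), ...]; pixel_pts: [(u, v), ...]; length >= 4."""
    rows, rhs = [], []
    for (xw, yw), (u, v) in zip(world_pts, pixel_pts):
        rows.append([xw, yw, 1, 0, 0, 0, -u * xw, -u * yw]); rhs.append(u)
        rows.append([0, 0, 0, xw, yw, 1, -v * xw, -v * yw]); rhs.append(v)
    h, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # homography matrix of the second camera
```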
On the basis of the obtained homography matrix of the second camera, when the second camera acquires an image (for convenience of distinction, denoted the third image), the first world coordinates of a target object in the second world coordinate system can be calculated simply by acquiring the target pixel coordinates of the target object in the third image and substituting them into the above formula. In this way, the positioning of the target object is achieved.
In this process, if the second world coordinate system is an absolute world coordinate system, for example, a longitude and latitude coordinate system, the absolute world coordinate (for example, longitude and latitude) of the target object in the third image may be directly located based on the homography matrix of the second camera.
Alternatively, if the second world coordinate system is a relative world coordinate system, for example, as shown in fig. 8, world coordinates of the target object in the second world coordinate system in the third image may be obtained based on the homography matrix of the second camera. In this case, the world coordinate is directly taken as the first world coordinate. Alternatively, the world coordinates of the target object in the second world coordinate system may be converted based on the conversion relation between the second world coordinate system and the absolute world coordinate system, so as to obtain the world coordinates of the target object in the absolute world coordinate system as the first world coordinates.
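As a sketch of this positioning step (illustrative only): since the formula above maps world coordinates to pixel coordinates up to the scale z_c, locating a target applies the inverse homography to the target pixel coordinates from the third image.

```python
import numpy as np

# Sketch: map a target pixel (u, v) in the third image to ground-plane
# world coordinates using the inverse of the calibrated homography.
def locate_target(h_matrix, u, v):
    xw, yw, w = np.linalg.inv(h_matrix) @ np.array([u, v, 1.0])
    return xw / w, yw / w   # first world coordinates of the target object
```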
In the embodiments of the present disclosure, when the target object in the third image is a target vehicle, if the target vehicle is communicatively connected with the electronic device executing this scheme, the electronic device may further send the first world coordinates to the target vehicle after locating its position. The target vehicle receives the first world coordinates and can thereby know its own position; furthermore, it can realize automatic driving and obstacle avoidance based on the received first world coordinates.
The embodiment of the disclosure further provides another positioning method, please refer to a flowchart shown in fig. 11, which includes the following steps:
s1102, receiving a third image acquired by the second camera, wherein the third image contains a target object.
S1104, acquiring first pixel coordinates of the target object in the third image.
S1106, the homography matrix of the second camera is utilized to process the first pixel coordinates, and the first world coordinates of the target object are obtained.
It should be noted that, in the embodiment shown in fig. 11, the execution body (electronic device) of the positioning method may directly call the homography matrix of the second camera to implement positioning of the target object in the image.
In this embodiment, the homography matrix of the second camera may be stored anywhere readable by the electronic device, which may include, but is not limited to: the memory of the electronic device, another electronic device (for example, a camera) communicatively connected to the electronic device, a memory (including a physical memory, a cloud memory, etc.) to which the electronic device has data access rights, and so on; this is not exhaustive.
Thus, when the positioning method is executed, the electronic equipment can read the homography matrix of the second camera, and then the homography matrix is utilized to process the coordinates of the target pixel, so that the first world coordinate of the target object can be obtained through calculation.
In the embodiment shown in fig. 11, the homography matrix of the second camera may be calibrated in the manner shown in S602-S608 in fig. 6, and reference is made to the foregoing description for brevity.
In one possible embodiment as in fig. 11, a homography matrix is used to describe the mapping between the camera's pixel coordinate system and the world coordinate system; the homography matrix is determined based on pixel coordinates and world coordinates of a plurality of second calibration points, the pixel coordinates of the second calibration points are determined based on the pose parameters of the first camera, the pose parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration points.
Wherein the internal parameters of the first camera and the second camera are the same; the first image is from a first camera, and the first image comprises a first datum point and a plurality of first datum points; the second image is from a second camera, and the second image comprises a second datum point and a plurality of second datum points.
The pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera.
The first calibration points are in one-to-one correspondence with the second calibration points; in a first calibration point and a second calibration point which have a corresponding relationship, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera.
As before, in the positioning method, the pose of the first camera may also be adjusted, and/or the pose of the second camera may be adjusted, so that the aforementioned reference point condition is satisfied. That is, when the relative position of the first reference point compared to the first camera is the same as the relative position of the second reference point compared to the second camera, the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In an embodiment of the present disclosure, when executing the positioning method of any of the foregoing embodiments, the homography matrix of the second camera may also be corrected based on the error between the first world coordinate and the second world coordinate of the target object. The second world coordinate is the world coordinate of the target object in the second world coordinate system obtained through other means, and the first world coordinate is the world coordinate of the target object in the second world coordinate system obtained through any of the foregoing embodiments.
Illustratively, the second world coordinate may be obtained through one or more of a global positioning system (Global Positioning System, GPS), a network communication query, and the like.
Thus, in one possible embodiment, the second world coordinate of the target object may be acquired by other means than the present solution, so that the homography matrix of the second camera is corrected when the error between the first world coordinate and the second world coordinate is greater than a preset threshold. The error between the first world coordinate and the second world coordinate can be represented by the distance between the first world coordinate and the second world coordinate, and at this time, the preset threshold is also a distance threshold, and the specific numerical value of the preset threshold can be preset in a self-defining manner.
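A minimal sketch of this correction trigger, assuming both coordinates are expressed in the same plane coordinate system:

```python
import math

# Sketch: flag the second camera's homography for correction when the
# distance between the homography-based position and an independently
# obtained position (e.g., GPS) exceeds the preset threshold.
def needs_correction(first_wc, second_wc, threshold):
    return math.dist(first_wc, second_wc) > threshold
```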
In any of the foregoing embodiments, the electronic device may further receive a first message, where the first message is used to implement pose configuration for the camera, so that the electronic device may determine pose parameters of the first camera and/or the second camera based on the first message.
In a specific implementation scenario, the first message may be from a user. In this case, the electronic device is provided with a man-machine interaction interface with the user, and thus can directly receive a message from the user.
Alternatively, the first message may come from a forwarding of other electronic devices. For ease of description, all of the electronic devices described above may be considered a first electronic device, and the first message received by the first electronic device may be from a second electronic device. Specifically, the second electronic device is provided with a man-machine interaction interface, and can receive instruction information (for example, a first message) from a user through the man-machine interaction interface such as touch control, sound control, sensor and the like, and send the instruction information to the first electronic device.
In any of the foregoing embodiments, the electronic device may further receive a second message, where the second message indicates that the homography matrix of the one or more second cameras is obtained, and thus, when the second camera indicated by the second message meets a preset calibration condition, the electronic device determines the homography matrix of the second camera. In this embodiment, the second message corresponds to a calibration instruction for instructing to calibrate the homography matrix of the second camera, and after the electronic device (or referred to as the first electronic device) receives the second message, if it is determined that the preset calibration condition is satisfied, the calibration process shown in S602 to S608 is executed.
The preset calibration condition may be that the second camera is working normally. Further, the preset calibration condition may also include the aforementioned reference point condition. At this time, if the calibration condition is not satisfied, the posture of the first camera and/or the second camera is adjusted until the preset calibration condition is satisfied, after which the calibration process is started.
In addition, the second message, like the first message, may be directly from the user or may be from the second electronic device.
By way of example, fig. 12 shows a schematic diagram of another positioning system provided by an embodiment of the present disclosure, as shown in fig. 12, the positioning system comprising:
a first electronic device 1210 configured to perform the positioning method of any of the foregoing embodiments;
a camera 1220 for capturing images, the camera comprising a first camera and one or more second cameras;
a second electronic device 1230 for receiving instruction information from a user and transmitting the instruction information to the first electronic device 1210;
at this time, the first electronic device 1210 is further configured to perform an action indicated by the instruction information.
In the positioning system shown in fig. 12, the first electronic device 1210 is provided integrally with or separately from the second electronic device 1230.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and that embodiments of the present application may also perform other operations or variations of the various operations. Furthermore, the various steps may be performed in a different order presented in the above embodiments, and it is possible that not all of the operations in the above embodiments are performed.
The embodiment of the disclosure also provides electronic equipment.
By way of example, fig. 13 shows a schematic diagram of an electronic device, as shown in fig. 13, the electronic device 1300 includes: transceiver module 1310, calibration module 1320, and positioning module 1330.
The transceiver module 1310 is configured to receive a first image from a first camera, where the first image includes a first reference point and a plurality of first calibration points;
the transceiver module 1310 is further configured to receive a second image from a second camera, where the second image includes a second reference point and a plurality of second calibration points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
a calibration module 1320, configured to determine a pixel coordinate of the second calibration point in the second image according to the pose parameter of the first camera, the pose parameter of the second camera, the pixel coordinate and the world coordinate of the first reference point, and the world coordinate of the second calibration point; the first calibration points are in one-to-one correspondence with the second calibration points; in a first calibration point and a second calibration point which have corresponding relations, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera;
The calibration module 1320 is further configured to determine a homography matrix of the second camera according to the pixel coordinates and the world coordinates of the plurality of second calibration points; the homography matrix is used for describing the mapping relation between the pixel coordinate system of the camera and the world coordinate system;
the positioning module 1330 is configured to determine, when receiving the third image from the second camera, a first world coordinate of the target object in the third image using the homography matrix of the second camera.
In one possible embodiment, the calibration module 1320 is specifically configured to: determining a pitch angle of the second camera according to the attitude parameter of the first camera, the attitude parameter of the second camera and the world coordinates of the first reference point; according to the world coordinates of the first calibration point in the first image, determining the world coordinates of the second calibration point corresponding to the first calibration point; and determining the pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
In another possible embodiment, the calibration module 1320 is specifically configured to: determining world coordinates of a second calibration point corresponding to the first calibration point based on the relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in a first world coordinate system, and world coordinates of the second camera in a second world coordinate system; the first world coordinate system is the same as or different from the second world coordinate system.
In another possible embodiment, the calibration module 1320 is specifically configured to: processing the first triangle by using a trigonometric function to obtain the pixel ordinate of the second calibration point in the second image; the first triangle is determined by the optical center of the second camera, the center point of the second image and the first reference point, the pixel ordinate of the first reference point is the same as the pixel ordinate of the second reference point, and the pixel abscissa of the first reference point is the same as the pixel abscissa of the center point of the second image.
In another possible embodiment, the calibration module 1320 is specifically configured to: determine a first included angle of a first triangle according to the world coordinates of a second calibration point and the height and pitch angle of the second camera, where the first included angle is the angle between the optical axis of the second camera and a first straight line, and the first straight line is determined by the first reference point and the optical center of the second camera; and determine the pixel coordinates of the second calibration point in the second image based on the trigonometric function relationships satisfied between the first side, the second side, and the first included angle of the first triangle. The first side is the side between the center point of the second image and the first reference point, the second side is the side between the center point of the second image and the optical center, the second side is related to the internal parameters of the second camera, and the first side is perpendicular to the second side.
In another possible embodiment, the pixel ordinate of the second calibration point in the second image satisfies one of two alternative formulas (given in the source as images that are not reproduced in this text), wherein v_2i represents the pixel ordinate of the i-th second calibration point on the second image; f represents the focal length of the second camera; dy represents the number of pixels per unit size; α_2 represents the pitch angle of the second camera; h_2 represents the height of the second camera; x_2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; v_10 represents the pixel ordinate of the first reference point; c_y represents the pixel ordinate of the center point of the first image; α_1 represents the pitch angle of the first camera; h_1 represents the height of the first camera; and x_10 = x_20, wherein x_10 represents the world abscissa of the first reference point in the first world coordinate system and x_20 represents the world abscissa of the second reference point in the second world coordinate system.
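A plausible reconstruction of these two formulas from the pinhole geometry described above (an inference, not the patent's own drawings): the line of sight to a ground point at distance $x_{2i}$ makes a depression angle $\arctan(h_2/x_{2i})$, so its pixel-ordinate offset from the image center is governed by the angle to the optical axis,

$$v_{2i} \;=\; c_y + \frac{f}{dy}\,\tan\!\left(\arctan\frac{h_2}{x_{2i}} - \alpha_2\right),$$

or alternatively, substituting $c_y = v_{10} - \frac{f}{dy}\tan\!\left(\arctan\frac{h_1}{x_{10}} - \alpha_1\right)$ (the same relation applied to the first reference point in the first image),

$$v_{2i} \;=\; v_{10} + \frac{f}{dy}\left[\tan\!\left(\arctan\frac{h_2}{x_{2i}} - \alpha_2\right) - \tan\!\left(\arctan\frac{h_1}{x_{10}} - \alpha_1\right)\right].$$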
In another possible embodiment, when the second calibration point is located on the ground projection line of the optical axis of the second camera, the pixel position of the second calibration point in the second image is the same as the pixel position of the first reference point.
In another possible embodiment, the calibration module 1320 is specifically configured to: process the second triangle and the third triangle by using the triangle similarity theorem to obtain the pixel abscissa of the second calibration point in the second image; the second triangle consists of the optical center of the second camera, the second calibration point and a second reference point; the third triangle consists of the optical center of the second camera, the pixel position of the second calibration point in the second image and the first reference point; the second reference point is located in the second world coordinate system, and in the second world coordinate system, the transverse axis component of the second reference point is identical to the transverse axis component of the second calibration point, and the longitudinal axis component of the second reference point is zero.
In another possible embodiment, the second triangle is similar to the third triangle, wherein the first ratio is equal to the second ratio; the first ratio is the ratio between the third side of the second triangle and the fourth side of the third triangle; the third side is the side between the optical center of the second camera and the second reference point, and the fourth side is the side between the optical center of the second camera and the first reference point; the second ratio is the ratio between the fifth side in the second triangle and the sixth side in the third triangle; the fifth side is the side between the optical center of the second camera and the second calibration point, and the sixth side is the side between the optical center of the second camera and the pixel position of the second calibration point in the second image.
In another possible embodiment, the pixel abscissa of the second calibration point in the second image satisfies a formula (given in the source as an image that is not reproduced in this text), wherein u_2i represents the pixel abscissa of the i-th second calibration point on the second image; L1 represents the distance between the first reference point and the second reference point; f represents the focal length of the second camera; h_2 represents the height of the second camera; x_2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; c_x represents the pixel abscissa of the center point of the second image; and γ_2i represents the first included angle, i.e. the angle between the optical axis of the second camera and the first straight line, given by a formula (also not reproduced) in which α_1 represents the pitch angle of the first camera, h_1 represents the height of the first camera, and x_10 = x_20, wherein x_10 represents the world abscissa of the first reference point in the first world coordinate system and x_20 represents the world abscissa of the second reference point in the second world coordinate system.
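Again as an inference from the similar-triangles construction rather than a formula reproduced from the drawings (and introducing $dx$, the horizontal counterpart of $dy$, a symbol the source's list omits): with $|OQ| = \sqrt{h_2^2 + x_{2i}^2}$ for the ground-side triangle and $|Oq| = f/\cos\gamma_{2i}$ for the image-side triangle, the equal ratios would give

$$u_{2i} \;=\; c_x + \frac{L_1\, f}{dx\,\cos\gamma_{2i}\,\sqrt{h_2^2 + x_{2i}^2}}, \qquad \gamma_{2i} = \arctan\frac{h_2}{x_{2i}} - \alpha_2,$$

with $\alpha_2$ itself obtained from $\alpha_1$, $h_1$ and $x_{10}$ as sketched earlier, which would explain why those symbols appear in the definition of $\gamma_{2i}$.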
In another possible embodiment, the transceiver module 1310 is further configured to send the first world coordinate to the target vehicle when the target object in the third image is the target vehicle.
The electronic device of the embodiment shown in fig. 13 may be used to implement the technical solution of the method embodiment shown in fig. 6; for the implementation principle and technical effects, reference may be made to the related description of that method embodiment.
Fig. 14 shows a schematic diagram of another electronic device. As shown in fig. 14, the electronic device 1400 includes a transceiver module 1410 and a positioning module 1420, wherein:
the transceiver module 1410 is configured to receive a third image acquired by the second camera, where the third image includes a target object;
a positioning module 1420, configured to acquire a first pixel coordinate of the target object in the third image;
the positioning module 1420 is further configured to process the first pixel coordinate by using a homography matrix of the second camera to obtain a first world coordinate of the target object;
The homography matrix is used for describing the mapping relation between the pixel coordinate system of the camera and the world coordinate system; the homography matrix is determined based on the pixel coordinates and world coordinates of a plurality of second calibration points, and the pixel coordinates of the second calibration points are determined based on the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration points;
wherein the internal parameters of the first camera and the second camera are the same; the first image is from the first camera and includes a first reference point and a plurality of first calibration points; the second image is from the second camera and includes a second reference point and a plurality of second calibration points;
the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
the first calibration points are in one-to-one correspondence with the second calibration points; in a first calibration point and a second calibration point which have a corresponding relationship, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera.
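To make the calibration step concrete, here is a minimal sketch (assumed names, NumPy-based, not code from this disclosure) of estimating the homography from the second calibration points' pixel/world correspondences by the standard direct linear transformation; at least four non-collinear points are needed:

```python
import numpy as np

def estimate_homography(pixel_pts, world_pts):
    """Estimate the 3x3 homography H with world ~ H @ pixel (homogeneous)
    from >= 4 point correspondences, via SVD on the stacked DLT system."""
    rows = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
    return H / H[2, 2]         # fix the arbitrary scale
```

OpenCV's cv2.findHomography would do the same job with outlier rejection; the point here is only that the second camera's matrix can be solved from the derived pixel coordinates and the known world coordinates of its calibration points.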
In another possible embodiment, the electronic device 1400 further includes an adjustment module (not shown in fig. 14), specifically configured to: adjust the posture of the first camera and/or the posture of the second camera, such that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
In another possible embodiment, the positioning module 1420 is further configured to: acquire a second world coordinate of the target object, and correct the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is greater than a preset threshold.
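A sketch of that correction trigger (the threshold value, the source of the reference coordinate, and the function names are illustrative assumptions, not the patent's own logic):

```python
import math

def needs_recalibration(H, u, v, reference_xy, threshold_m=0.5):
    """Return True if the homography-derived position deviates from an
    independently measured reference position (e.g. from GNSS) by more
    than the threshold, signalling that H should be re-derived."""
    x, y = pixel_to_world(H, u, v)   # pixel_to_world from the earlier sketch
    return math.hypot(x - reference_xy[0], y - reference_xy[1]) > threshold_m
```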
In another possible embodiment, the transceiver module 1410 is further configured to receive a first message, where the first message is used to configure the pose of a camera; in this case, the positioning module 1420 is further configured to determine the attitude parameters of the first camera and/or the second camera based on the first message.
In another possible embodiment, the transceiver module 1410 is further configured to receive a second message, where the second message instructs acquisition of the homography matrix of one or more second cameras; in this case, the positioning module 1420 is further configured to determine the homography matrix of the second camera when the second camera indicated by the second message meets a preset calibration condition.
The electronic device of the embodiment shown in fig. 14 may be used to implement the technical solution of the method embodiment shown in fig. 11; for the implementation principle and technical effects, reference may be made to the related description of that method embodiment.
It should be understood that the above division of the electronic devices shown in fig. 13 and fig. 14 into modules is merely a division of logical functions; in practice, the modules may be fully or partially integrated into one physical entity or physically separated. All of these modules may be implemented as software invoked by a processing element, or all in hardware, or some as software invoked by a processing element and some in hardware. For example, the calibration module in fig. 13 may be a separately arranged processing element, may be integrated in a chip of the electronic device (for example, a chip of a terminal), or may be stored in a memory of the electronic device in the form of a program whose functions are invoked and executed by a processing element of the electronic device. The implementation of the other modules is similar. In addition, all or part of these modules can be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each module above, may be implemented by an integrated logic circuit in hardware in a processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the methods above, such as one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more microprocessors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), or the like. For another example, when one of the modules above is implemented in the form of a processing element scheduling program code, the processing element may be a general purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 15 shows a physical structure diagram of an electronic device. As shown in fig. 15, the electronic device 1500 includes: at least one processor 152 and memory 154; the memory 154 stores computer-executable instructions; the at least one processor 152 executes computer-executable instructions stored in the memory 154 such that the at least one processor 152 performs the positioning method as provided in any one of the embodiments described above.
The processor 152 may also be referred to as a processing unit, and may implement a certain control function. The processor 152 may be a general purpose processor or a special purpose processor, etc.
In an alternative design, the processor 152 may also have instructions stored thereon that are executable by the processor 152 to cause the electronic device 1500 to perform the positioning method described in the method embodiments above.
In yet another possible design, electronic device 1500 may include circuitry that may implement the functions of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the electronic device 1500 may include one or more memories 154 with instructions or intermediate data stored thereon that are executable on the processor to cause the electronic device 1500 to perform the positioning method described in the method embodiments above. Optionally, other relevant data may also be stored in the memory 154. Optionally, instructions and/or data may also be stored in processor 152. The processor 152 and the memory 154 may be provided separately or may be integrated.
Optionally, the electronic device 1500 may also include a transceiver 156. The transceiver 156 may also be referred to as a transceiver unit, a transceiver circuit, a transceiver, etc. for implementing the transceiver function of the electronic device.
For example, if the electronic device 1500 is used to implement the operations corresponding to receiving the first image and the second image in the embodiment shown in fig. 6, the transceiver may receive the first image from the first camera and the second image from the second camera. The transceiver 156 may further perform other corresponding communication functions, and the processor 152 is configured to perform corresponding determination or control operations; optionally, corresponding instructions may be stored in the memory. For the specific manner of processing of the individual components, reference may be made to the related description of the previous embodiments.
The processor 152 and transceiver 156 described in the present application may be implemented on an integrated circuit (integrated circuit, IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed signal IC, an application specific integrated circuit (application specific integrated circuit, ASIC), a printed circuit board (printed circuit board, PCB), an electronic device, etc. The processor and transceiver may also be fabricated using various IC process technologies, such as complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (Bipolar Junction Transistor, BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
Alternatively, the electronic device 1500 may be a stand-alone device or may be part of a larger device. For example, the device may be:
(1) A stand-alone integrated circuit IC, or chip, or a system-on-a-chip or subsystem;
(2) A set of one or more ICs, optionally including storage means for storing data and/or instructions;
(3) An ASIC, such as a modem (MSM);
(4) Modules that may be embedded within other devices;
(5) Receivers, terminals, cellular telephones, wireless devices, handsets, mobile units, network devices, etc.;
(6) Others, and so on.
The embodiment of the disclosure also provides a positioning system. For a description of the positioning system, reference may be made to the foregoing descriptions of fig. 5 and fig. 12; details are not repeated here.
The embodiment of the present application also provides a computer-readable storage medium having a computer program stored therein, which when run on a computer causes the computer to perform the positioning method described in the above embodiment.
Furthermore, an embodiment of the present application provides a computer program product, which includes a computer program, which when run on a computer causes the computer to perform the positioning method described in the above embodiment.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Claims (18)

1. A positioning method, applied to a positioning system comprising a first camera and a second camera, wherein internal parameters of the first camera and the second camera are the same, the method comprising:
receiving a first image from the first camera, the first image including a first reference point and a plurality of first calibration points;
receiving a second image from the second camera, the second image including a second reference point and a plurality of second calibration points; the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
determining pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of the first reference point, and the world coordinates of the second calibration point; wherein the first calibration points are in one-to-one correspondence with the second calibration points, and in a first calibration point and a second calibration point which have a corresponding relationship, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera;
determining a homography matrix of the second camera according to the pixel coordinates and the world coordinates of the plurality of second calibration points; the homography matrix is used for describing the mapping relation between the pixel coordinate system of the camera and the world coordinate system; and
when a third image from the second camera is received, determining a first world coordinate of a target object in the third image using a homography matrix of the second camera.
2. The method of claim 1, wherein the determining the pixel coordinates of the second calibration point in the second image comprises:
determining a pitch angle of the second camera according to the attitude parameter of the first camera, the attitude parameter of the second camera and the world coordinates of the first reference point;
determining world coordinates of the second calibration point corresponding to the first calibration point according to the world coordinates of the first calibration point in the first image;
and determining pixel coordinates of the second calibration point in the second image according to the attitude parameters of the first camera, the attitude parameters of the second camera, the pitch angle of the second camera and the world coordinates of the second calibration point.
3. The method of claim 2, wherein determining the world coordinates of the second calibration point corresponding to the first calibration point according to the world coordinates of the first calibration point in the first image comprises:
determining world coordinates of the second calibration point corresponding to the first calibration point based on a relative position of the first calibration point with respect to the first camera, world coordinates of the first camera in a first world coordinate system, and world coordinates of the second camera in a second world coordinate system;
the first world coordinate system is the same as or different from the second world coordinate system.
4. A method according to claim 2 or 3, wherein said determining the pixel coordinates of the second calibration point in the second image comprises:
processing the first triangle by using a trigonometric function to obtain the pixel ordinate of the second calibration point in the second image;
the first triangle is determined by the optical center of the second camera, the center point of the second image and a first reference point, the pixel ordinate of the first reference point is the same as the pixel ordinate of the second calibration point, and the pixel abscissa of the first reference point is the same as the pixel abscissa of the center point of the second image.
5. The method of claim 4, wherein processing the first triangle with a trigonometric function results in the pixel ordinate of the second calibration point in the second image, comprising:
determining a first included angle of the first triangle according to the world coordinates of the second calibration point and the height and pitch angle of the second camera, wherein the first included angle is the angle between the optical axis of the second camera and a first straight line, and the first straight line is determined by the first reference point and the optical center of the second camera;
determining pixel coordinates of the second calibration point in the second image based on trigonometric function relationships satisfied between the first edge, the second edge and the first included angle of the first triangle; the first edge is an edge between the center point of the second image and the first reference point, the second edge is an edge between the center point of the second image and the optical center, the second edge is related to an internal reference of the second camera, and the first edge is perpendicular to the second edge.
6. The method of claim 4 or 5, wherein the pixel ordinate of the second calibration point in the second image satisfies one of two alternative formulas (given in the source as images that are not reproduced in this text), wherein v_2i represents the pixel ordinate of the i-th second calibration point on the second image; f represents the focal length of the second camera; dy represents the number of pixels per unit size; α_2 represents the pitch angle of the second camera; h_2 represents the height of the second camera; x_2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; v_10 represents the pixel ordinate of the first reference point; c_y represents the pixel ordinate of the center point of the first image; α_1 represents the pitch angle of the first camera; h_1 represents the height of the first camera; and x_10 = x_20, wherein x_10 represents the world abscissa of the first reference point in the first world coordinate system and x_20 represents the world abscissa of the second reference point in the second world coordinate system.
7. The method of any of claims 4-6, wherein a pixel location of the second calibration point in the second image is the same as a pixel location of the first reference point when the second calibration point is located on a ground projection line of an optical axis of the second camera.
8. The method of any of claims 4-6, wherein, when the second calibration point is not located on the ground projection line of the optical axis of the second camera, the determining of the pixel coordinates of the second calibration point in the second image further comprises:
Processing a second triangle and a third triangle by using a triangle similarity theorem to obtain a pixel abscissa of the second calibration point in the second image;
wherein the second triangle is composed of the optical center of the second camera, the second calibration point and a second reference point; the third triangle is composed of the optical center of the second camera, the pixel position of the second calibration point in the second image and the first reference point; the second reference point is located in the second world coordinate system, and in the second world coordinate system, the transverse axis component of the second reference point is identical to the transverse axis component of the second calibration point, and the longitudinal axis component of the second reference point is zero.
9. The method of claim 8, wherein the second triangle is similar to the third triangle, wherein the first ratio is equal to the second ratio;
the first ratio is a ratio between a third side in the second triangle and a fourth side in the third triangle; the third side is the side between the optical center of the second camera and the second reference point, and the fourth side is the side between the optical center of the second camera and the first reference point;
The second ratio is a ratio between a fifth side in the second triangle and a sixth side in the third triangle; the fifth side is the side between the optical center of the second camera and the second calibration point, and the sixth side is the side between the optical center of the second camera and the pixel position of the second calibration point in the second image.
10. The method according to claim 8 or 9, wherein the pixel abscissa of the second calibration point in the second image satisfies a formula (given in the source as an image that is not reproduced in this text), wherein u_2i represents the pixel abscissa of the i-th second calibration point on the second image; L1 represents the distance between the first reference point and the second calibration point; f represents the focal length of the second camera; h_2 represents the height of the second camera; x_2i represents the world abscissa of the i-th second calibration point in the second world coordinate system; c_x represents the pixel abscissa of the center point of the second image; and γ_2i represents the first included angle, i.e. the angle between the optical axis of the second camera and the first straight line, given by a formula (also not reproduced) in which α_1 represents the pitch angle of the first camera, h_1 represents the height of the first camera, and x_10 = x_20, wherein x_10 represents the world abscissa of the first reference point in the first world coordinate system and x_20 represents the world abscissa of the second reference point in the second world coordinate system.
11. A positioning method, comprising:
receiving a third image acquired by a second camera, wherein the third image contains a target object;
acquiring a first pixel coordinate of the target object in the third image;
processing the first pixel coordinates by utilizing a homography matrix of the second camera to obtain first world coordinates of the target object;
the homography matrix is used for describing the mapping relation between the pixel coordinate system of the camera and the world coordinate system; the homography matrix is determined based on the pixel coordinates and world coordinates of a plurality of second calibration points, wherein the pixel coordinates of the second calibration points are determined based on the attitude parameters of a first camera, the attitude parameters of the second camera, the pixel coordinates and world coordinates of a first reference point, and the world coordinates of the second calibration points;
wherein the internal parameters of the first camera and the second camera are the same; a first image is from the first camera and includes the first reference point and a plurality of first calibration points; a second image is from the second camera and includes a second reference point and a plurality of second calibration points;
the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image; the relative position of the first reference point with respect to the first camera is the same as the relative position of the second reference point with respect to the second camera;
the first calibration points are in one-to-one correspondence with the second calibration points; in the first calibration point and the second calibration point which have a corresponding relationship, the relative position of the first calibration point relative to the first camera is the same as the relative position of the second calibration point relative to the second camera.
12. The method of claim 11, wherein the method further comprises:
adjusting the pose of the first camera; and/or adjusting the pose of the second camera; so that the pixel coordinates of the first reference point in the first image are the same as the pixel coordinates of the second reference point in the second image.
13. The method according to claim 11 or 12, characterized in that the method further comprises:
acquiring a second world coordinate of the target object;
Correcting the homography matrix of the second camera when the error between the first world coordinate and the second world coordinate is greater than a preset threshold.
14. The method according to any one of claims 11-13, further comprising:
receiving a first message, wherein the first message is used for configuring the pose of a camera;
based on the first message, pose parameters of the first camera and/or the second camera are determined.
15. The method according to any one of claims 11-14, further comprising:
receiving a second message, wherein the second message indicates that a homography matrix of one or more second cameras is acquired;
and when the second camera indicated by the second message meets the preset calibration condition, determining a homography matrix of the second camera.
16. An electronic device, comprising at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method of any one of claims 1-15.
17. A positioning system, comprising:
a first electronic device for performing the method of any of claims 1-15;
a camera for capturing images, the camera comprising a first camera and one or more second cameras.
18. The positioning system of claim 17, wherein the positioning system further comprises:
the second electronic equipment is used for receiving instruction information from a user and sending the instruction information to the first electronic equipment;
the first electronic device is further configured to perform an action indicated by the instruction information;
wherein the first electronic device is integrated with the second electronic device or separately provided.
CN202010554485.4A 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system Active CN113808199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010554485.4A CN113808199B (en) 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system

Publications (2)

Publication Number Publication Date
CN113808199A CN113808199A (en) 2021-12-17
CN113808199B true CN113808199B (en) 2023-09-08

Family

ID=78943189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010554485.4A Active CN113808199B (en) 2020-06-17 2020-06-17 Positioning method, electronic equipment and positioning system

Country Status (1)

Country Link
CN (1) CN113808199B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114791282A (en) * 2022-03-04 2022-07-26 广州沃定新信息科技有限公司 Road facility coordinate calibration method and device based on vehicle high-precision positioning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008014940A (en) * 2006-06-08 2008-01-24 Fast:Kk Camera calibration method for camera measurement of planar subject and measuring device applying same
CN107038722A (en) * 2016-02-02 2017-08-11 深圳超多维光电子有限公司 A kind of equipment localization method and device
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN107481283A (en) * 2017-08-01 2017-12-15 深圳市神州云海智能科技有限公司 A kind of robot localization method, apparatus and robot based on CCTV camera
CN109015630A (en) * 2018-06-21 2018-12-18 深圳辰视智能科技有限公司 Hand and eye calibrating method, system and the computer storage medium extracted based on calibration point
CN110650427A (en) * 2019-04-29 2020-01-03 国网浙江省电力有限公司物资分公司 Indoor positioning method and system based on fusion of camera image and UWB
CN110599548A (en) * 2019-09-02 2019-12-20 Oppo广东移动通信有限公司 Camera calibration method and device, camera and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Camera Calibration Method Based on Homography Matrix and Its Application; Zhang Xuebo et al.; Control Engineering of China; Vol. 17, No. 2; pp. 248-255 *

Also Published As

Publication number Publication date
CN113808199A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
US20190147647A1 (en) System and method for determining geo-location(s) in images
US11321869B2 (en) Accurate positioning system using attributes
CN113196197A (en) Movable object performing real-time map building using payload components
US20220237738A1 (en) Information processing device, information processing method, information processing program, image processing device, and image processing system for associating position information with captured images
EP2992354B1 (en) Methods and apparatuses for characterizing and affecting mobile device location accuracy and/or uncertainty
US8542368B2 (en) Position measuring apparatus and method
CN111656144A (en) Sensor device, electronic device, sensor system, and control method
CN106027960A (en) Positioning system and method
CN113808199B (en) Positioning method, electronic equipment and positioning system
JP2019027799A (en) Positioning accuracy information calculation device and positioning accuracy information calculation method
KR20170049953A (en) Indoor Positioning Device Using a Single Image Sensor and Method Thereof
CN112068567A (en) Positioning method and positioning system based on ultra-wideband and visual image
CN113393520A (en) Positioning method and system, electronic device and computer readable storage medium
CN112816939B (en) Substation unmanned aerial vehicle positioning method based on Internet of things
US11579237B2 (en) Determining a plurality of installation positions of a plurality of radio devices
CN110969704B (en) Mark generation tracking method and device based on AR guide
US11210957B2 (en) Systems and methods for generating views of unmanned aerial vehicles
CN111899298B (en) Location sensing system based on live-action image machine learning
CN113237464A (en) Positioning system, positioning method, positioner, and storage medium
JP7061933B2 (en) Mutual position acquisition system
Zhang et al. Visual-inertial fusion based positioning systems
CN111210471B (en) Positioning method, device and system
WO2018079043A1 (en) Information processing device, image pickup device, information processing system, information processing method, and program
CN110264521A (en) A kind of localization method and system based on binocular camera
CN111612904B (en) Position sensing system based on three-dimensional model image machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220215

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant
GR01 Patent grant