CN113393520A - Positioning method and system, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN113393520A
CN113393520A (application CN202010171397.6A)
Authority
CN
China
Prior art keywords
positioning
image
calibration
point
world coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010171397.6A
Other languages
Chinese (zh)
Inventor
张竞 (Zhang Jing)
姜波 (Jiang Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010171397.6A
Publication of CN113393520A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30236: Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the disclosure provide a positioning method and system, an electronic device, and a computer-readable storage medium. In this scheme, a calibration point is determined by a movable calibration device, and the calibration point is located by a positioning device mounted on the calibration device. The pixel coordinates of the calibration point are obtained from a first image, and its world coordinates are obtained from the positioning device, so that a homography matrix of the camera device can be calculated; the homography matrix is then used to position a target object. In this process, the homography matrix is calibrated automatically without closing the road, and the adverse effect of manual measurement on calibration accuracy and positioning accuracy is avoided, which helps improve positioning accuracy.

Description

Positioning method and system, electronic device and computer readable storage medium
Technical Field
The disclosed embodiments relate to the field of communication technologies and the field of computer vision technologies, and in particular, to a positioning method and system, an electronic device, and a computer-readable storage medium.
Background
Traffic incidents directly affect the safety of the lives and property of vehicles and pedestrians on the road. Accurately positioning a traffic incident allows it to be avoided or handled in time, reducing the safety risk.
Traffic incidents can be positioned using cameras deployed along the road. This is achieved based on the homography matrix of the camera, which describes the mapping between the camera's pixel coordinate system and the world coordinate system. At present, the homography matrix is generally calibrated manually: within the camera's image acquisition range, a calibration operator selects several fixed positions on the road, such as the endpoints of lane lines, the corner points of a zebra crossing, or signboards on both sides of the road, and manually measures the world coordinates of these fixed positions. The homography matrix of the camera is then calculated from these world coordinates together with the pixel coordinates of the fixed positions in the image captured by the camera.
In the prior art, positioning depends on a homography matrix obtained by manual calibration. The camera must be able to capture images containing the chosen fixed positions, the road must be closed while the homography matrix is calibrated, and manual measurement yields poor calibration accuracy, which in turn degrades positioning accuracy.
Disclosure of Invention
In view of the above problems, the present disclosure provides a positioning method and system, an electronic device, and a computer-readable storage medium, which automatically calibrate the homography matrix without closing the road and thereby improve positioning accuracy.
In a first aspect, the present disclosure provides a positioning method. In this scheme, while the calibration device moves, a first image is captured by a camera device, and the pixel coordinates of a calibration point are obtained from the first image, where the calibration point is determined by the calibration device shown in the first image. A positioning device is also mounted on the calibration device, so the world coordinates of each calibration point can be obtained as well. Then, based on the pixel coordinates and world coordinates of a plurality of calibration points, the homography matrix of the camera device can be obtained, where the homography matrix describes the mapping between the camera device's pixel coordinate system and the world coordinate system. Further, when the camera device captures another image, namely a second image, the homography matrix can be used to obtain the target world coordinates of a target object in the second image. With the scheme provided by this embodiment, the homography matrix of the camera device is calibrated automatically without closing the road, the adverse effect of manual measurement on calibration accuracy and positioning accuracy is avoided, and positioning accuracy is improved.
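As a concrete illustration of the calibration step above (not part of the patent text; the function name and the direct linear transform formulation are assumptions for this sketch), a homography can be estimated from the calibration points' paired pixel and world coordinates with numpy, given at least four correspondences:

```python
import numpy as np

def estimate_homography(world_pts, pixel_pts):
    """Estimate the 3x3 homography H with s*[u, v, 1]^T = H @ [x_w, y_w, 1]^T
    from paired world/pixel points via the Direct Linear Transform.
    Requires at least 4 correspondences, no 3 of them collinear."""
    rows = []
    for (x, y), (u, v) in zip(world_pts, pixel_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The flattened H is the right singular vector of the stacked system
    # with the smallest singular value (its least-squares null vector).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that h_{3,3} = 1
```

In practice many more than four points would be collected as the calibration device moves through the camera's field of view, and the least-squares solution averages out per-point measurement noise.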
In an embodiment of the first aspect, a coordinate marking device is further mounted on the calibration device, and the coordinate marking device is configured to determine a virtual coordinate system, where the calibration point is located on an intersection line of a plane where the coordinate marking device is located and the ground.
In this embodiment, when obtaining the pixel coordinates of the calibration point in the first image, the virtual coordinates of the calibration point in the virtual coordinate system may be obtained first, together with a homography matrix of the first image, which describes the mapping between the virtual coordinate system and the pixel coordinate system in the first image. The homography matrix of the first image is then used to transform the virtual coordinates of the calibration point into its pixel coordinates. The homography matrix of the first image may be calculated when this step is executed, or it may be stored in advance and retrieved directly.
Specifically, the homography matrix of the first image may be obtained as follows: acquire the pixel coordinates of a plurality of reference points in the first image, where the reference points are located on the coordinate marking device; acquire the virtual coordinates of each reference point in the virtual coordinate system; and calculate the homography matrix of the first image from the virtual coordinates and pixel coordinates of the reference points. Because the reference points are located on the coordinate marking device, their virtual coordinates are known accurately, which improves the calibration accuracy of the first image's homography matrix.
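The virtual-to-pixel mapping described above can be sketched as follows (illustrative only; the patent does not prescribe an implementation, and the function name is an assumption):

```python
import numpy as np

def virtual_to_pixel(H_img, virtual_xy):
    """Map a calibration point's virtual coordinates into pixel coordinates
    of the first image using the first image's homography H_img
    (virtual coordinate system -> pixel coordinate system)."""
    p = H_img @ np.array([virtual_xy[0], virtual_xy[1], 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the projective scale
```

The same point-pair estimation used for the camera homography could supply `H_img`, with the reference points' virtual coordinates playing the role of the world coordinates.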
In an embodiment of the first aspect, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may be looked up in the positioning data of the positioning device according to that timestamp, and the world coordinates of the calibration point are then obtained from the world coordinates of the positioning device.
Wherein, the number of the positioning devices can be one or more.
In particular, in an embodiment of the first aspect, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device. In this case, the world coordinates of the calibration point are obtained simply by looking up the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
In an embodiment of the first aspect, when there are multiple positioning devices, the world coordinates of the calibration point may be calculated from the world coordinates of the multiple positioning devices and their positional relationship. Because the positioning devices are mounted on the calibration device, their positional relationship stays fixed while the calibration device moves, which makes this calculation possible. The positions of the positioning devices and the calibration point are not particularly limited in this embodiment; for example, the calibration point may be the ground projection of one of the positioning devices, or it may coincide with none of their ground projections.
In another embodiment of the first aspect, the calibration point, the optical center of the camera device, and the positioning point lie on the same straight line, where the positioning point is located on the positioning device (see fig. 10 or fig. 11). In this case, the pixel coordinates of the calibration point in the first image are the same as the pixel coordinates of the positioning point, so the pixel coordinates of the calibration point are obtained simply by acquiring the pixel coordinates of the positioning point in the first image.
In this embodiment, the world coordinates of the calibration point may be acquired from the positioning data of the positioning device together with the world coordinates of the camera device; that is, the world coordinates of the calibration point are derived from the world coordinates of the positioning point and the world coordinates of the camera device.
In another embodiment of the first aspect, the world coordinates of the calibration point may be calculated from the world coordinates of the positioning point and the world coordinates of the camera device based on the triangle similarity theorem.
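A minimal sketch of this triangle-similarity computation (assuming a flat ground plane at z = 0 and known camera and positioning-point world coordinates; function and variable names are illustrative, not from the patent):

```python
import numpy as np

def calibration_point_on_ground(camera_xyz, anchor_xyz):
    """World (x, y) of the point where the ray from the camera's optical
    center through the positioning point meets the ground plane z = 0.
    Similar triangles give the ray parameter t = z_cam / (z_cam - z_anchor)."""
    c = np.asarray(camera_xyz, dtype=float)
    p = np.asarray(anchor_xyz, dtype=float)
    t = c[2] / (c[2] - p[2])  # assumes the positioning point sits below the camera
    g = c + t * (p - c)       # g[2] == 0 by construction
    return g[0], g[1]
```

For example, a camera 10 m above the origin looking through a positioning point at (4, 0, 2) would place the calibration point at (5, 0) on the ground.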
To obtain the world coordinates of the positioning point, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may first be looked up in the positioning data according to that timestamp; the world coordinates of the positioning point are then obtained from the world coordinates of the positioning device.
In another embodiment of the first aspect, it is first determined whether the positioning data contains an entry whose timestamp matches the timestamp of the first image. If not, several world coordinates whose timestamps differ from the timestamp of the first image by no more than a preset value are acquired, and interpolation over these world coordinates yields the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
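The interpolation step might look like the following minimal sketch (linear interpolation is one plausible choice; the patent only says "interpolation processing", and the function name is assumed):

```python
import numpy as np

def device_position_at(ts_image, ts_samples, pos_samples):
    """Linearly interpolate the positioning device's world coordinates at the
    first image's timestamp from positioning samples taken around that time.
    ts_samples must be increasing; pos_samples is an (N, 2) or (N, 3) array."""
    ts = np.asarray(ts_samples, dtype=float)
    pos = np.asarray(pos_samples, dtype=float)
    # Interpolate each coordinate axis independently against the timestamps.
    return tuple(np.interp(ts_image, ts, pos[:, k]) for k in range(pos.shape[1]))
```

Samples whose timestamps fall outside the preset difference from the image timestamp would be filtered out before calling this.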
In another embodiment of the first aspect, the pixel coordinates of any calibration point and the world coordinates of that calibration point satisfy the following formula:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} $$

where $(x_w, y_w)$ are the world coordinates of the calibration point, $(u, v)$ are its pixel coordinates,

$$ H = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} \\ h_{2,1} & h_{2,2} & h_{2,3} \\ h_{3,1} & h_{3,2} & h_{3,3} \end{bmatrix} $$

is the homography matrix of the camera device, $h_{i,j}$ are its matrix parameters (the subscripts $i$ and $j$ distinguish the parameters and take values from 1 to 3), and $z_c$ is a three-dimensional (projective scale) coordinate parameter.
Therefore, in any image captured by the camera device, the same homography relation holds between the pixel coordinates and the world coordinates of any object. The world coordinates of the target object can thus be obtained by processing its pixel coordinates with the homography matrix of the camera device, which realizes the positioning of the target object.
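Positioning a target from its pixel coordinates, as described above, amounts to inverting the homography relation. A minimal sketch (assuming H maps world to pixel coordinates up to scale, as in the formula; names are illustrative):

```python
import numpy as np

def pixel_to_world(H_cam, pixel_uv):
    """Invert the camera homography: [x_w, y_w, 1]^T is proportional to
    H_cam^{-1} @ [u, v, 1]^T, so the world position is read off after
    dividing by the third (scale) component."""
    q = np.linalg.inv(H_cam) @ np.array([pixel_uv[0], pixel_uv[1], 1.0])
    return q[0] / q[2], q[1] / q[2]
```

This assumes the target lies on the ground plane used during calibration; points off that plane are not described by the ground homography.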
In another embodiment of the first aspect, when the target object in the second image is a target vehicle, the target world coordinates may also be transmitted to the target vehicle. This enables interaction between the camera device and the target vehicle, through which the target vehicle can locate itself.
In another embodiment of the first aspect, the calibration apparatus includes: a vehicle, a drone, or a ground robot.
In another embodiment of the first aspect, the positioning apparatus comprises: one or more of a Real Time Kinematic (RTK) positioning tag, an Ultra Wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver.
In another embodiment of the first aspect, the coordinate marking device may be a light pipe. In addition, other visual markers may be used.
In a second aspect, the present disclosure provides a positioning method, including: capturing a second image with a camera device, the second image containing a target object; obtaining the target pixel coordinates of the target object in the second image; and processing the target pixel coordinates with the homography matrix of the camera device to obtain the target world coordinates of the target object. The homography matrix of the camera device describes the mapping between the camera device's pixel coordinate system and the world coordinate system. With the scheme provided by this embodiment, positioning can be realized with a single camera device, which is convenient and efficient.
In an embodiment of the second aspect, the homography matrix of the image capturing apparatus may be calibrated and stored in advance, and when the scheme is executed, the homography matrix of the image capturing apparatus stored in advance may be directly acquired.
In another embodiment of the second aspect, the homography matrix of the camera device may be obtained as follows: while the calibration device moves, a first image is captured by the camera device, and the pixel coordinates of calibration points are obtained from the first image, where the calibration points are determined by the calibration device shown in the first image; a positioning device is mounted on the calibration device, so the world coordinates of each calibration point can be obtained as well; then, the homography matrix of the camera device is calculated from the pixel coordinates and world coordinates of the plurality of calibration points. In this embodiment, the homography matrix of the camera device is calibrated automatically without closing the road, the adverse effect of manual measurement on calibration accuracy and positioning accuracy is avoided, and positioning accuracy is improved.
In an embodiment of the second aspect, a coordinate marking device is further mounted on the calibration device, and the coordinate marking device is configured to determine a virtual coordinate system, where the calibration point is located on an intersection line of a plane where the coordinate marking device is located and the ground.
In this embodiment, when obtaining the pixel coordinates of the calibration point in the first image, the virtual coordinates of the calibration point in the virtual coordinate system may be obtained first, together with a homography matrix of the first image, which describes the mapping between the virtual coordinate system and the pixel coordinate system in the first image. The homography matrix of the first image is then used to transform the virtual coordinates of the calibration point into its pixel coordinates. The homography matrix of the first image may be calculated when this step is executed, or it may be stored in advance and retrieved directly.
Specifically, the homography matrix of the first image may be obtained as follows: acquire the pixel coordinates of a plurality of reference points in the first image, where the reference points are located on the coordinate marking device; acquire the virtual coordinates of each reference point in the virtual coordinate system; and calculate the homography matrix of the first image from the virtual coordinates and pixel coordinates of the reference points. Because the reference points are located on the coordinate marking device, their virtual coordinates are known accurately, which improves the calibration accuracy of the first image's homography matrix.
In an embodiment of the second aspect, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may be looked up in the positioning data of the positioning device according to that timestamp, and the world coordinates of the calibration point are then obtained from the world coordinates of the positioning device.
Wherein, the number of the positioning devices can be one or more.
In particular, in an embodiment of the second aspect, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device. In this case, the world coordinates of the calibration point are obtained simply by looking up the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
In an embodiment of the second aspect, when there are multiple positioning devices, the world coordinates of the calibration point may be calculated from the world coordinates of the multiple positioning devices and their positional relationship. Because the positioning devices are mounted on the calibration device, their positional relationship stays fixed while the calibration device moves, which makes this calculation possible. The positions of the positioning devices and the calibration point are not particularly limited in this embodiment; for example, the calibration point may be the ground projection of one of the positioning devices, or it may coincide with none of their ground projections.
In another embodiment of the second aspect, the calibration point, the optical center of the camera device, and the positioning point lie on the same straight line, where the positioning point is located on the positioning device (see fig. 10 or fig. 11). In this case, the pixel coordinates of the calibration point in the first image are the same as the pixel coordinates of the positioning point, so the pixel coordinates of the calibration point are obtained simply by acquiring the pixel coordinates of the positioning point in the first image.
In this embodiment, the world coordinates of the calibration point may be acquired from the positioning data of the positioning device together with the world coordinates of the camera device; that is, the world coordinates of the calibration point are derived from the world coordinates of the positioning point and the world coordinates of the camera device.
In another embodiment of the second aspect, the world coordinates of the calibration point may be calculated from the world coordinates of the positioning point and the world coordinates of the camera device based on the triangle similarity theorem.
To obtain the world coordinates of the positioning point, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may first be looked up in the positioning data according to that timestamp; the world coordinates of the positioning point are then obtained from the world coordinates of the positioning device.
In another embodiment of the second aspect, it is first determined whether the positioning data contains an entry whose timestamp matches the timestamp of the first image. If not, several world coordinates whose timestamps differ from the timestamp of the first image by no more than a preset value are acquired, and interpolation over these world coordinates yields the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
In another embodiment of the second aspect, the pixel coordinates of any calibration point and the world coordinates of that calibration point satisfy the following formula:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} $$

where $(x_w, y_w)$ are the world coordinates of the calibration point, $(u, v)$ are its pixel coordinates,

$$ H = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} \\ h_{2,1} & h_{2,2} & h_{2,3} \\ h_{3,1} & h_{3,2} & h_{3,3} \end{bmatrix} $$

is the homography matrix of the camera device, $h_{i,j}$ are its matrix parameters (the subscripts $i$ and $j$ distinguish the parameters and take values from 1 to 3), and $z_c$ is a three-dimensional (projective scale) coordinate parameter.
Therefore, in any image captured by the camera device, the same homography relation holds between the pixel coordinates and the world coordinates of any object. The world coordinates of the target object can thus be obtained by processing its pixel coordinates with the homography matrix of the camera device, which realizes the positioning of the target object.
In another embodiment of the second aspect, when the target object in the second image is a target vehicle, the target world coordinates may also be transmitted to the target vehicle. This enables interaction between the camera device and the target vehicle, through which the target vehicle can locate itself.
In another embodiment of the second aspect, the calibration apparatus includes: a vehicle, a drone, or a ground robot.
In another embodiment of the second aspect, the positioning apparatus comprises: one or more of a Real Time Kinematic (RTK) positioning tag, an Ultra Wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver.
In another embodiment of the second aspect, the coordinate marking device may be a light pipe. In addition, other visual markers may be used.
In a third aspect, the present disclosure provides an electronic device comprising a first acquisition module, a second acquisition module, a first calculation module, and a second calculation module. The first acquisition module obtains the pixel coordinates of the calibration point in the first image while the calibration device moves; the first image contains the calibration device, and the calibration point is determined by the calibration device. The second acquisition module obtains the world coordinates of the calibration point, which are determined by the positioning device mounted on the calibration device. The first calculation module obtains the homography matrix of the camera device from the pixel coordinates and world coordinates of a plurality of calibration points, where the homography matrix describes the mapping between the camera device's pixel coordinate system and the world coordinate system. The second calculation module, when a second image is captured, uses the homography matrix of the camera device to obtain the target world coordinates of a target object in the second image. With the scheme provided by this embodiment, the homography matrix of the camera device is calibrated automatically without closing the road, the adverse effect of manual measurement on calibration accuracy and positioning accuracy is avoided, and positioning accuracy is improved.
In an embodiment of the third aspect, the calibration device is equipped with a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground.
In this embodiment, the first acquisition module is specifically configured to: acquire the virtual coordinates of the calibration point in the virtual coordinate system; acquire a homography matrix of the first image, which describes the mapping between the virtual coordinate system and the pixel coordinate system in the first image; and process the virtual coordinates of the calibration point with the homography matrix of the first image to obtain the pixel coordinates of the calibration point.
In another embodiment of the third aspect, the first acquisition module is specifically configured to: acquire the pixel coordinates of a plurality of reference points in the first image, the reference points being located on the coordinate marking device; acquire the virtual coordinates of each reference point in the virtual coordinate system; and calculate the homography matrix of the first image from the virtual coordinates and pixel coordinates of the reference points.
In another embodiment of the third aspect, the second acquisition module is specifically configured to: look up, in the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image; and obtain the world coordinates of the calibration point from the world coordinates of the positioning device.
The number of positioning devices may be one or more.
In particular, in an embodiment of the third aspect, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device. In this case, the second acquisition module may be configured to: look up the world coordinates of the positioning device at the time indicated by the timestamp of the first image to obtain the world coordinates of the calibration point.
In another embodiment of the third aspect, when there are multiple positioning devices, the second acquisition module is specifically configured to: calculate the world coordinates of the calibration point from the world coordinates of the multiple positioning devices and their positional relationship. Because the positioning devices are mounted on the calibration device, their positional relationship stays fixed while the calibration device moves, which makes this calculation possible. The positions of the positioning devices and the calibration point are not particularly limited in this embodiment; for example, the calibration point may be the ground projection of one of the positioning devices, or it may coincide with none of their ground projections.
In another embodiment of the third aspect, the calibration point, the optical center of the camera device, and the positioning point lie on the same straight line, where the positioning point is located on the positioning device (see fig. 10 or fig. 11). In this case, the pixel coordinates of the calibration point in the first image are the same as the pixel coordinates of the positioning point, so the pixel coordinates of the calibration point are obtained simply by acquiring the pixel coordinates of the positioning point in the first image. In other words, the first acquisition module is specifically configured to: acquire the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
In this embodiment, the second acquisition module is specifically configured to: acquire the world coordinates of the positioning point based on the positioning data of the positioning device, and acquire the world coordinates of the camera device, whereby the world coordinates of the calibration point can be derived from the world coordinates of the positioning point and the world coordinates of the camera device.
In another embodiment of the third aspect, the second obtaining module may be specifically configured to: process the world coordinates of the positioning point and the world coordinates of the camera device based on the triangle similarity theorem to obtain the world coordinates of the calibration point.
The second obtaining module is specifically configured to: obtain, from the positioning data of the positioning device and according to the time stamp of the first image, the world coordinates of the positioning device at the time indicated by that time stamp; and obtain the world coordinates of the positioning point based on the world coordinates of the positioning device.
In another embodiment of the third aspect, the second obtaining module is specifically configured to: determine whether the positioning data contains an entry whose time stamp is identical to the time stamp of the first image; when no such entry exists, acquire a plurality of world coordinates whose time stamps differ from the time stamp of the first image by no more than a preset difference; and interpolate the plurality of world coordinates to obtain the world coordinates of the positioning device at the time indicated by the time stamp of the first image.
In another embodiment of the third aspect, the pixel coordinates of any calibration point and the world coordinates of that calibration point satisfy the following formula:

z_c · (u, v, 1)^T = H · (x_w, y_w, 1)^T

wherein (x_w, y_w) is the world coordinate of the calibration point, (u, v) is the pixel coordinate of the calibration point,

        | h_{1,1}  h_{1,2}  h_{1,3} |
    H = | h_{2,1}  h_{2,2}  h_{2,3} |
        | h_{3,1}  h_{3,2}  h_{3,3} |

is the homography matrix of the camera device, h_{i,j} are the homography matrix parameters of the camera device (the subscripts i and j distinguish the matrix parameters and each takes values from 1 to 3), and z_c is a three-dimensional coordinate parameter acting as a scale factor.
Therefore, in any image acquired by the camera device, the pixel coordinates and world coordinates of any object also satisfy the homography matrix of the camera device. The world coordinates of a target object can thus be obtained by processing its pixel coordinates with the homography matrix of the camera device, realizing the positioning function for the target object.
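As an illustrative sketch only (not the disclosed implementation), the mapping above can be inverted numerically to recover world coordinates from pixel coordinates; the matrix values below are hypothetical:

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Map pixel coordinates (u, v) to world coordinates (x_w, y_w) using a
    3x3 homography H, per z_c * (u, v, 1)^T = H * (x_w, y_w, 1)^T.
    Positioning uses the inverse mapping: (x_w, y_w, 1) ~ H^-1 * (u, v, 1)."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the scale factor

# Hypothetical calibrated homography for a roadside camera.
H = np.array([[1.2,   0.1,   30.0],
              [0.0,   1.5,   40.0],
              [0.001, 0.002,  1.0]])
xw, yw = pixel_to_world(H, 320.0, 240.0)
```

Because the formula maps world coordinates to pixel coordinates only up to the scale z_c, the inverse result must be dehomogenized by dividing by its third component, as done above.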
In another embodiment of the third aspect, the electronic device further comprises a transceiver module, configured to transmit the target world coordinates to the target vehicle when the target object in the second image is a target vehicle. In this way, interaction between the camera device and the target vehicle can be realized, and the target vehicle can realize its own positioning based on that interaction.
In another embodiment of the third aspect, the calibration apparatus includes: a vehicle, a drone, or a ground robot.
In another embodiment of the third aspect, the positioning apparatus comprises: one or more of a Real Time Kinematic (RTK) positioning tag, an Ultra Wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver.
In another embodiment of the third aspect, the coordinate marking device may be a light pipe. In addition, other visual markers may be used.
In a fourth aspect, the present disclosure provides an electronic device comprising: a capture module, an obtaining module, and a processing module. The capture module is configured to capture a second image with the camera device, the second image comprising a target object; the obtaining module is configured to obtain target pixel coordinates of the target object in the second image; the processing module is configured to process the target pixel coordinates using the homography matrix of the camera device to obtain target world coordinates of the target object, where the homography matrix of the camera device describes the mapping relation between the pixel coordinate system of the camera device and the world coordinate system. Through the scheme provided by this embodiment, positioning based on a single camera device can be realized conveniently and efficiently.
In an embodiment of the fourth aspect, the homography matrix of the image capturing apparatus may be calibrated and stored in advance, and when the scheme is executed, the homography matrix of the image capturing apparatus stored in advance may be directly acquired.
In another embodiment of the fourth aspect, the electronic device may further include a calibration module, and the calibration module is configured to acquire a homography matrix of the image capturing apparatus.
In another embodiment of the fourth aspect, the calibration module is specifically configured to: during the movement of the calibration device, capture a first image with the camera device and obtain, in the first image, the pixel coordinates of the calibration point determined by the calibration device; since positioning devices are mounted on the calibration device, the world coordinates of the calibration point can also be obtained; then, based on the pixel coordinates and world coordinates of a plurality of calibration points, the homography matrix of the camera device can be calculated. In this embodiment, automatic calibration of the homography matrix of the camera device can be realized without road-closing operations, and the adverse effect of manual measurement on calibration precision and positioning precision is avoided, which is beneficial to improving positioning precision.
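The calibration calculation described above — solving for the homography from the pixel and world coordinates of a plurality of calibration points — can be sketched with a standard direct linear transform (DLT). This is a generic least-squares illustration assuming at least four non-degenerate correspondences, not the patent's specific implementation:

```python
import numpy as np

def fit_homography(world_pts, pixel_pts):
    """Estimate the 3x3 homography H mapping world (x_w, y_w, 1) to pixel
    (u, v, 1) up to scale, from >= 4 point correspondences, via the direct
    linear transform: each correspondence yields two rows of a linear
    system A h = 0, solved with the SVD."""
    A = []
    for (xw, yw), (u, v) in zip(world_pts, pixel_pts):
        A.append([xw, yw, 1, 0, 0, 0, -u * xw, -u * yw, -u])
        A.append([0, 0, 0, xw, yw, 1, -v * xw, -v * yw, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector = flattened homography
    return H / H[2, 2]         # normalize so h_{3,3} = 1
```

In this setting, `world_pts` would come from the positioning devices and `pixel_pts` from detecting the calibration point in the first images collected as the calibration device moves.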
In an embodiment of the fourth aspect, the calibration device is equipped with a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground.
In this embodiment, the calibration module is specifically configured to: acquiring a virtual coordinate of the calibration point in a virtual coordinate system; acquiring a homography matrix of a first image, wherein the homography matrix of the first image is used for describing a mapping relation between a virtual coordinate system and a pixel coordinate system in the first image; and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
In another embodiment of the fourth aspect, the calibration module is specifically configured to: acquiring pixel coordinates of a plurality of reference points in a first image; a plurality of reference points are located on the coordinate marking device; acquiring a virtual coordinate of each reference point in a virtual coordinate system; based on the virtual coordinates and pixel coordinates of the plurality of reference points, a homography matrix of the first image is calculated.
In another embodiment of the fourth aspect, the calibration module is specifically configured to: in the positioning data of the positioning equipment, acquiring world coordinates of the positioning equipment at the moment indicated by a timestamp according to the timestamp of the first image; and acquiring the world coordinates of the calibration point based on the world coordinates of the positioning equipment.
Wherein, the number of the positioning devices can be one or more.
Specifically, in one embodiment of the fourth aspect, when the positioning device is one and the index point is a ground projection of the positioning device, the world coordinates of the index point are the world coordinates of the positioning device. In this case, the calibration module may be configured to: and acquiring the world coordinate of the positioning equipment at the time shown by the time stamp of the first image to obtain the world coordinate of the calibration point.
In another embodiment of the fourth aspect, when there are a plurality of positioning devices, the calibration module is specifically configured to: process the world coordinates of the positioning devices based on the positional relationship among the positioning devices to obtain the world coordinates of the calibration point. In this case, the positioning devices are mounted on the calibration device, and their positional relationship remains fixed during the movement of the calibration device, so the world coordinates of the calibration point can be calculated. In this embodiment, the positions of the positioning devices and the calibration point are not particularly limited; for example, the calibration point may be the ground projection of one of the positioning devices, or it may coincide with the ground projection of none of them.
In another embodiment of the fourth aspect, the calibration point, the optical center of the camera device, and the positioning point lie on the same straight line, and the positioning point is located on the positioning device; reference may be made to fig. 10 or fig. 11. In this case, the pixel coordinates of the calibration point B in the first image are the same as the pixel coordinates of the positioning point A in the first image. Therefore, the pixel coordinates of the calibration point can be obtained simply by acquiring the pixel coordinates of the positioning point in the first image. In other words, the calibration module is specifically configured to: acquire the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
In this embodiment, the calibration module is specifically configured to: acquire the world coordinates of the positioning point based on the positioning data of the positioning device, and acquire the world coordinates of the camera device; the world coordinates of the calibration point can then be obtained based on the world coordinates of the positioning point and the world coordinates of the camera device.
In another embodiment of the fourth aspect, the calibration module may be specifically configured to: process the world coordinates of the positioning point and the world coordinates of the camera device based on the triangle similarity theorem to obtain the world coordinates of the calibration point.
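A minimal sketch of the triangle-similarity step, under the assumption that the camera optical center and the positioning point have known heights above the ground plane and the calibration point is where the ray through them meets the ground (the function name and parameters are illustrative, not from the disclosure):

```python
def ground_point_on_ray(cam_xy, cam_h, pt_xy, pt_h):
    """World coordinates of the calibration point: the intersection with the
    ground (height 0) of the ray from the camera optical center through the
    positioning point, by similar triangles. Requires cam_h > pt_h > 0."""
    # Parametrize P(t) = C + t*(A - C); height h(t) = cam_h + t*(pt_h - cam_h).
    # Solving h(t) = 0 gives t = cam_h / (cam_h - pt_h).
    t = cam_h / (cam_h - pt_h)
    return (cam_xy[0] + t * (pt_xy[0] - cam_xy[0]),
            cam_xy[1] + t * (pt_xy[1] - cam_xy[1]))
```

For example, a camera 10 m high at the horizontal origin and a positioning point 5 m high at horizontal offset (4, 0) give a calibration point at (8, 0): the lower triangle is half the size of the full one, so the ground intersection is twice as far out.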
The calibration module is specifically configured to: obtain, from the positioning data of the positioning device and according to the time stamp of the first image, the world coordinates of the positioning device at the time indicated by that time stamp; and obtain the world coordinates of the positioning point based on the world coordinates of the positioning device.
In another embodiment of the fourth aspect, the calibration module is specifically configured to: determine whether the positioning data contains an entry whose time stamp is identical to the time stamp of the first image; when no such entry exists, acquire a plurality of world coordinates whose time stamps differ from the time stamp of the first image by no more than a preset difference; and interpolate the plurality of world coordinates to obtain the world coordinates of the positioning device at the time indicated by the time stamp of the first image.
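The timestamp lookup and interpolation described above can be sketched as follows; linear interpolation between the two neighboring samples is assumed here, and the data layout is hypothetical:

```python
from bisect import bisect_left

def world_coord_at(samples, t):
    """Return the positioning device's world coordinates at time t from
    timestamped samples [(timestamp, (x_w, y_w)), ...] sorted by timestamp.
    Returns the stored coordinates on an exact timestamp match; otherwise
    linearly interpolates between the two neighboring samples.
    Assumes t lies within the sampled time range."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i < len(times) and times[i] == t:
        return samples[i][1]                      # exact timestamp match
    (t0, (x0, y0)), (t1, (x1, y1)) = samples[i - 1], samples[i]
    a = (t - t0) / (t1 - t0)                      # interpolation weight
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
```

A higher-order scheme could be substituted when the calibration device's motion is strongly non-linear between samples; the patent text does not fix the interpolation method.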
In another embodiment of the fourth aspect, the pixel coordinates of any calibration point and the world coordinates of that calibration point satisfy the following formula:

z_c · (u, v, 1)^T = H · (x_w, y_w, 1)^T

wherein (x_w, y_w) is the world coordinate of the calibration point, (u, v) is the pixel coordinate of the calibration point,

        | h_{1,1}  h_{1,2}  h_{1,3} |
    H = | h_{2,1}  h_{2,2}  h_{2,3} |
        | h_{3,1}  h_{3,2}  h_{3,3} |

is the homography matrix of the camera device, h_{i,j} are the homography matrix parameters of the camera device (the subscripts i and j distinguish the matrix parameters and each takes values from 1 to 3), and z_c is a three-dimensional coordinate parameter acting as a scale factor.
Therefore, in any image acquired by the camera device, the pixel coordinates and world coordinates of any object also satisfy the homography matrix of the camera device. The world coordinates of a target object can thus be obtained by processing its pixel coordinates with the homography matrix of the camera device, realizing the positioning function for the target object.
In another embodiment of the fourth aspect, the electronic device further comprises a transceiver module, configured to transmit the target world coordinates to the target vehicle when the target object in the second image is a target vehicle. In this way, interaction between the camera device and the target vehicle can be realized, and the target vehicle can realize its own positioning based on that interaction.
In another embodiment of the fourth aspect, the calibration apparatus includes: a vehicle, a drone, or a ground robot.
In another embodiment of the fourth aspect, the positioning apparatus comprises: one or more of a Real Time Kinematic (RTK) positioning tag, an Ultra Wideband (UWB) positioning tag, or a Global Positioning System (GPS) receiver.
In another embodiment of the fourth aspect, the coordinate marking device may be a light pipe. In addition, other visual markers may be used.
In a fifth aspect, the present disclosure provides an electronic device comprising: at least one processor and memory; the memory stores computer-executable instructions; the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method according to any one of the embodiments of the first aspect.
In a sixth aspect, the present disclosure provides an electronic device comprising: at least one processor and memory; the memory stores computer-executable instructions; the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method according to any one of the embodiments of the second aspect.
In a seventh aspect, the present disclosure provides a positioning system, comprising: the system comprises calibration equipment, a camera device and electronic equipment; the calibration equipment is loaded with one or more positioning devices; the camera device is used for collecting images; an electronic device configured to perform the positioning method according to any one of the embodiments of the first aspect or the second aspect.
In one embodiment of the seventh aspect, the positioning system is a vehicle-to-everything (V2X) system; the calibration device is a vehicle that communicates with the camera device and the electronic device.
In another embodiment of the seventh aspect, the positioning system is a vehicle-to-everything (V2X) system; the system further comprises a target vehicle that communicates with the camera device and the electronic device.
In an eighth aspect, the present disclosure provides a positioning system comprising: an image pickup device and an electronic apparatus; the camera device is used for collecting images; an electronic device configured to perform the positioning method according to any of the embodiments of the second aspect.
In one embodiment of the eighth aspect, the positioning system is a vehicle-to-everything (V2X) system.
In one possible design, the electronic device referred to in the third aspect to the eighth aspect may be a camera (a processor therein), a terminal, a server (or a node therein), a vehicle processor, or the like.
In a ninth aspect, the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method as defined in any one of the embodiments of the first or second aspects.
In a tenth aspect, the present application provides a computer program for performing the method of any one of the embodiments of the first or second aspect when the computer program is executed by a computer.
In a possible design, the program in the tenth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
In summary, the present disclosure provides a positioning method and system, an electronic device, and a computer-readable storage medium. A calibration point is determined by a movable calibration device, and the positioning of the calibration point is realized by a positioning device mounted on the calibration device: the pixel coordinates of the calibration point are obtained from a first image, and its world coordinates are obtained from the positioning device, so that the homography matrix of the camera device can be calculated and then used to position a target object. In this process, automatic calibration of the homography matrix can be realized without road-closing operations, and the adverse effect of manual measurement on calibration precision and positioning precision is avoided, which is beneficial to improving positioning precision.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic diagram of a positioning scenario provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a communication relationship of a positioning system according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of a pixel coordinate system according to an embodiment of the disclosure;
fig. 4 is a schematic flowchart of a positioning method according to an embodiment of the disclosure;
fig. 5 is a scene schematic diagram illustrating calibration of a homography matrix of the image capturing apparatus according to the embodiment of the present disclosure;
fig. 6 is a schematic diagram of a calibration apparatus provided in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a virtual coordinate system provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another virtual coordinate system provided by embodiments of the present disclosure;
fig. 9 is a schematic view of a carrying manner of a positioning device according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of another positioning scenario provided by an embodiment of the present disclosure;
fig. 11 is a schematic diagram of another positioning scenario provided by an embodiment of the present disclosure;
fig. 12 is a schematic flow chart of another positioning method provided in the embodiments of the present disclosure;
fig. 13 is a functional block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 14 is a functional block diagram of another electronic device according to an embodiment of the disclosure;
fig. 15 is a schematic physical structure diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 16 is a schematic view of another positioning system provided by embodiments of the present disclosure;
fig. 17 is a schematic diagram of another positioning system provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
First, specific scenarios related to the embodiments of the present disclosure will be described. The positioning scheme provided by the embodiment of the disclosure is suitable for a scene in which a single camera device is used for collecting images and positioning a target object in the images.
For example, referring to fig. 1, fig. 1 is a schematic diagram of a positioning scenario provided in an embodiment of the present disclosure. As shown in fig. 1, one or more camera devices 120 may be provided on the road 110; fig. 1 shows a single camera device 120 for convenience of explanation. The camera device 120 may be a camera, a video recorder, or other equipment with an image capture function, and the present disclosure places no particular limitation on its type or capture precision. It should be understood that the camera device 120 may be erected above the road, as shown in fig. 1, or may be disposed directly at the side of the road; the embodiments of the present disclosure are not particularly limited in this respect.
The dotted rectangle in fig. 1 shows an image capturing range of the camera 120, that is, an image within the image capturing range can be captured by the camera 120. Therefore, in the embodiment of the disclosure, any one object in the image acquired by the camera device can be used as the target object, and the target object can be positioned.
For example, if there are two vehicles 130 in the image capturing range shown in fig. 1, the camera 120 can capture an image containing the two vehicles 130, and then the positioning scheme provided by the present disclosure can achieve positioning of one or both of the two vehicles 130.
Next, an execution body of the positioning scheme provided by the embodiment of the present disclosure is explained. Specifically, the execution body may be an electronic device, which is described in detail below.
In one embodiment of the present disclosure, the electronic device may be an image pickup apparatus, or further specifically, a processing chip or a processor in the image pickup apparatus. In this embodiment, the camera 120 shown in fig. 1 may capture an image and locate the vehicle 130 in the image.
In another embodiment of the present disclosure, the electronic device may be any electronic device that is in communication with the image capturing apparatus, or a processor in the electronic device. In this way, based on the communication connection between the two devices, the camera device can send the acquired image to the electronic device, and then the electronic device can locate the target object in the image based on the received image.
It should be understood that the camera may actively transmit images to the electronic device, for example, periodically transmit images, and for example, transmit images after establishing a communication connection; alternatively, the image pickup apparatus may transmit an image to the electronic device in response to a request received from the electronic device. The disclosed embodiments are not so limited.
The disclosed embodiment is not particularly limited with respect to the type of the electronic device. For example, referring to fig. 2, fig. 2 is a schematic diagram of a communication relationship of a positioning system according to an embodiment of the present disclosure. As shown in fig. 2, in the positioning system, the camera 120 may communicate with the vehicle 130, the terminal 140, the drone 150, and the network device 160. The communication method may specifically include, but is not limited to, wireless communication schemes such as 2G/3G/4G/5G. In this scenario, the electronic device (execution subject of the positioning method) provided by the present disclosure may specifically be: one or more of a vehicle 130, a terminal 140, a drone 150, a network device 160. In this way, the electronic device may acquire the image captured by the camera 120 through communication with the camera 120, and accordingly, may locate the target object in the image.
The terminal 140, also referred to as user equipment (UE), is a device that provides voice and/or data connectivity to a user, for example, a handheld device or vehicle-mounted device with a wireless connection function. Common terminals include: mobile phones, tablet computers, notebook computers, palmtop computers, mobile internet devices (MIDs), and wearable devices such as smart watches, smart bracelets, and pedometers.
The network device 160 may be a network-side device, for example, a Wireless Fidelity (WiFi) access point (AP), a next-generation base station such as a 5G NR base station (for example, a 5G gNB), a small cell, a micro cell, a transmission reception point (TRP), a relay station, an access point, a vehicle-mounted device, or a wearable device. Base stations differ between communication systems of different standards: for the sake of distinction, a base station of the 4G communication system is referred to as an LTE eNB, a base station of the 5G communication system is referred to as an NR gNB, and a base station supporting both the 4G and 5G communication systems is referred to as an eLTE eNB. These names are for convenience of distinction only and are not intended to be limiting.
In fig. 2, the electronic devices, i.e., one or more of the vehicle 130, the terminal 140, the drone 150, and the network device 160, may also communicate directly.
Illustratively, the vehicle 130, the drone 150, and the terminal 140 may be connected and communicate via bluetooth.
For example, the terminal 140 and the network device 160 may communicate via a wireless communication scheme such as 2G/3G/4G/5G, which is not exhaustive.
For example, in one possible embodiment shown in fig. 2, the positioning system may be embodied as a vehicle-to-everything (V2X) system, in which the vehicle 130 may communicate with the camera device 120. In addition, the vehicle 130 may establish a communication connection with one or more of the terminal 140, the drone 150, and the network device 160. For example, referring to fig. 1 and fig. 2, when the vehicle 130 travels on the road 110, the camera device 120 may capture an image containing the vehicle 130 and send that image to the vehicle 130; after receiving the image, the vehicle 130 can obtain its own position according to the present scheme.
The principle of achieving positioning based on a single camera device will now be briefly described. When positioning is implemented based on a single image capturing device, the positioning is generally implemented by using a homography matrix of the image capturing device, and the homography matrix of the image capturing device is used for describing a mapping relation between a pixel coordinate system and a world coordinate system of the image capturing device. The specific positioning mode is detailed later.
Wherein the world coordinate system is used to describe the actual position of an object in space. The world coordinate system is generally a three-dimensional coordinate system; in the positioning scenario of the embodiments of the present disclosure, the actual position of an object in space can be described by the two-dimensional world coordinates (x_w, y_w), where the subscript w stands for world.
And the pixel coordinate system is a planar coordinate system for describing the position of the object in the image. In practical application scenarios, the pixel coordinate system can be defined in various ways.
For example, fig. 3 shows a schematic diagram of a pixel coordinate system provided by the embodiment of the present disclosure. For the image shown in fig. 3 (the image content is not limited here), the upper left corner of the image plane can be used as the origin O_p of the pixel coordinate system, and the two coordinate axes of the pixel coordinate system are denoted the O_p-u axis and the O_p-v axis, where the O_p-u axis points to the right and the O_p-v axis points downward. Thus, in the pixel coordinate system, the pixel position of an object can be described by the pixel coordinates (u, v).
Besides, the pixel coordinate system can also be defined in other ways. For example, the upper right corner of the image may be taken as the origin, with the two coordinate axes pointing to the left and downward; or the image center point may be taken as the origin, with the two coordinate axes pointing to the right and upward. These alternatives are not exhaustive.
The homography matrix of the image pickup apparatus can be acquired in the following two ways:
first, a homography matrix is solved by internal and external parameters of an imaging device. Wherein, the internal and external parameters involved may include but are not limited to: position, attitude, pixel size, focal length, etc. of the imaging device.
In such a calculation method, it is necessary to finally acquire the homography matrix of the imaging apparatus by using the relationship among the pixel coordinate system, the image physical coordinate system, the camera coordinate system, and the world coordinate system. And will not be described in detail herein.
However, when the homography matrix is obtained using the internal and external parameters, the influence of distortion factors and of the attitude of the camera device must be considered. On one hand, the distortion factors generally need to be calculated in advance by shooting a checkerboard, after which the image undergoes distortion-removal processing. However, this processing cannot completely eliminate the influence of distortion: the processed image still exhibits distortion to varying degrees, which affects the calibration precision of the homography matrix and the positioning precision. On the other hand, the attitude of the camera device directly affects the accuracy of the homography matrix, yet it is difficult to measure and carries a large error, so the accuracy of the homography matrix, and hence the positioning accuracy, is poor.
Second, the homography matrix can be obtained by manual calibration. Within the image capture range of the camera device, calibration personnel select a number of fixed positions on the road, such as the endpoints of lane lines, the corner points of zebra crossings, or signs on both sides of the road, and then manually measure the world coordinates of these fixed positions; the homography matrix of the camera device is then calculated by combining these with the pixel coordinates of the fixed positions in the image captured by the camera device. Furthermore, the internal and external parameters of the camera device can be obtained by back-calculation.
Compared with the first method, the second method can improve the positioning accuracy, but it consumes a great deal of labor and requires road-closing operations. Moreover, it is greatly affected by manual calibration errors, so the calibration accuracy of the homography matrix is poor, which in turn results in poor positioning accuracy.
In short, both calibration approaches for the homography matrix suffer from poor precision, and the positioning precision achieved with them is correspondingly poor.
Technical solutions of embodiments of the present application will be described in detail below with specific embodiments, and positioning accuracy can be improved by these technical solutions. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiment of the disclosure provides a positioning method. Referring to fig. 4, fig. 4 is a schematic flow chart illustrating a positioning method according to an embodiment of the disclosure, where the method includes the following steps:
s402, in the process of moving the calibration equipment, acquiring the pixel coordinates of the calibration point in the first image; the first image comprises a calibration device, and the calibration point is determined by the calibration device.
In this step, the first image is from the imaging device. In particular, the camera device may capture video or pictures. When the video from the camera device is acquired, the electronic device may perform subsequent processing on each frame of picture in the video as the first image, or may perform frame extraction processing on the video to obtain an image of a partial frame in the video, and perform subsequent processing on the image as the first image.
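The frame-extraction step described above can be sketched as follows; this is a minimal illustration in which the decoding of the video (e.g., via `cv2.VideoCapture`) is abstracted away, and the function name and sampling interval are assumptions for illustration, not part of the disclosure:

```python
def select_calibration_frames(frames, step=10):
    """Keep every `step`-th decoded frame as a candidate first image.

    `frames` is any iterable of decoded video frames; `step` controls
    the frame-extraction density.
    """
    return [frame for index, frame in enumerate(frames) if index % step == 0]
```

With `step=1`, every frame of the video is treated as a first image, which corresponds to processing the video frame by frame as described above.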
In the embodiment of the present disclosure, the electronic device may acquire the image data collected by the camera device in real time, or may acquire the image data at intervals (for example, periodically or at scheduled times).
In addition, the electronic device may actively send an image acquisition request to the camera device to request image data for a period of time, and receive the image data fed back by the camera device based on the request. Alternatively, the camera device may directly send the image data to the electronic device, which receives it directly.
Embodiments of the present disclosure utilize a movable calibration device to indicate the calibration point. Therefore, a plurality of first images can be acquired during the movement of the calibration device. Each first image contains the calibration device, so based on the plurality of first images, the pixel coordinates of a plurality of calibration points, each in the first image in which it appears, can be obtained.
The calibration device may include, but is not limited to: one or more of a ground mobile device or a drone. Among others, terrestrial mobile devices may include, but are not limited to: vehicles, ground robots, etc.
In addition, in order for the calibration device to be contained in the first image, the calibration device is made to move within the image acquisition area of the camera device. For example, in the scene shown in fig. 1, if the vehicle 130 on the left side is the calibration device, the vehicle 130 moves within the area indicated by the dashed-line frame, so that the image captured by the camera device 120 contains the vehicle 130.
The position of the calibration point may be different depending on the indication mode of the calibration device, which will be described later with reference to specific embodiments.
S404, acquiring world coordinates of the calibration point, wherein the world coordinates are determined by positioning equipment, and the positioning equipment is carried on the calibration equipment.
The positioning device is mounted on the calibration device, and moves along with the movement of the calibration device. During the movement of the calibration device, the positioning device may continuously acquire positioning data of the position of the positioning device (i.e., the world coordinates of the positioning device). Then, based on the positional relationship between the positioning device and the calibration point, the world coordinates of the calibration point may be acquired.
In the embodiment of the present disclosure, the positioning device may include, but is not limited to, one or more of the following: a Global Positioning System (GPS) receiver or a positioning tag. The positioning tag includes one or more of a Real-Time Kinematic (RTK) positioning tag and an Ultra-Wide-Band (UWB) positioning tag.
The executing entity of the scheme (the electronic device) and the positioning device can communicate in a wired or wireless manner. Specifically, the two can communicate wirelessly through one or more of 2G/3G/4G/5G, Bluetooth, and Near Field Communication (NFC). It should be understood that Bluetooth and NFC apply when the communication conditions between the positioning device and the electronic device (e.g., the distance between the two) meet the requirements of the respective communication mode.
In the embodiment of the present disclosure, the electronic device may acquire the positioning data collected by the positioning device in real time, or may acquire the positioning data for a period of time at intervals (for example, periodically or at scheduled times).
In addition, the electronic device may actively send a data acquisition request to the positioning device to request positioning data for a period of time, and receive positioning data fed back by the positioning device based on the data acquisition request. Alternatively, in another embodiment, the positioning device may directly send data to the electronic device, and the electronic device may directly receive the positioning data.
Therefore, the positioning device mounted on the calibration device can automatically acquire the world coordinates of the calibration point, without a road-closure operation or manual measurement by calibration personnel, which reduces the adverse effect of manual operation on calibration precision and positioning precision.
And S406, acquiring a homography matrix of the image pickup device based on the pixel coordinates and the world coordinates of the plurality of calibration points, wherein the homography matrix of the image pickup device is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the image pickup device.
And S408, acquiring target world coordinates of the target object in the second image by using the homography matrix of the camera device when the second image is acquired.
In the embodiment of the disclosure, the first image and the second image are images acquired by the same camera device. The first image is used for calibrating the homography matrix of the camera device; and the second image is used for realizing the positioning of the target object. In other words, in the embodiment shown in fig. 4, the homography matrix of the camera device can be calibrated based on the image (first image) captured by the camera device, so that when the camera device captures the image (second image) again, the target object in the image (second image) can be located.
It should be noted that the target object may be any object in the image, including but not limited to an object in a road. In an exemplary embodiment, the target object may be a vehicle that normally travels in the image, such as the scene shown in fig. 1. In another exemplary embodiment, the target object may be a faulty vehicle in the image, for example, a vehicle stopped on a highway due to a fault, a vehicle in which a traffic accident occurs, or the like. In another exemplary embodiment, the target object may be any pedestrian, road sign, building, etc. in the image.
When the scheme is specifically implemented, the target object may be preset or designated in advance (for example, the target object is designated by using a vehicle identifier, and for example, the target object is designated by using a vehicle color), or the image may be processed in a preset processing manner to determine whether the target object exists in the image by screening, and the target object determined by screening is located. The determining mode of the target object is not particularly limited in the embodiment of the disclosure, and the target object can be designed in a self-defined manner in an actual scene.
In summary, in the embodiment shown in fig. 4, the pixel coordinates and the world coordinates of the plurality of calibration points can be obtained through the movable calibration device, the positioning device mounted on it, and the camera device. Compared with the positioning methods in the prior art, no road-closure operation or manual calibration is needed, which is beneficial to improving the calibration accuracy of the homography matrix of the camera device and thus the positioning accuracy.
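The calibration in S406 and the positioning in S408 can be sketched as follows, assuming at least four non-degenerate calibration-point correspondences are available. The direct linear transform (DLT) shown here is one standard way to solve for a homography; all function names and sample coordinates are illustrative, not prescribed by the disclosure:

```python
import numpy as np

def fit_homography(pixel_pts, world_pts):
    """Estimate the 3x3 homography H with world ~ H @ [u, v, 1] (DLT, >= 4 pairs)."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_world(H, u, v):
    """Map a pixel coordinate to world coordinates via the homography (S408)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

Given the calibrated homography H of the camera device, applying `pixel_to_world` to the pixel coordinates of the target object in the second image yields its target world coordinates.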
On the basis of the embodiment shown in fig. 4, the embodiment of the present disclosure may also provide the following two positioning manners based on different manners of indicating the calibration point by the calibration device.
The first positioning mode is as follows: and carrying coordinate marking equipment in the calibration equipment, and constructing a virtual coordinate system through the coordinate marking equipment, so that the mapping between the pixel coordinate system and the world coordinate system is realized through the virtual coordinate system.
In the first positioning manner, reference may be made to fig. 5, a scene schematic diagram for calibrating the homography matrix of the camera device 120 provided in the embodiment of the present disclosure. As shown in fig. 5, the calibration device 170 in this scenario is specifically a vehicle, on which a positioning device 180 and a coordinate marking device 190 are mounted. The calibration device 170 moves within the image capturing range (dashed-line frame) of the camera device 120, so that the camera device 120 captures a plurality of first images containing the calibration device, while the positioning device 180 mounted on the calibration device 170 collects the corresponding positioning data; the electronic device 200 can thus obtain the pixel coordinates and world coordinates of a plurality of calibration points through communication with the camera device 120 and the positioning device 180, respectively.
It should be understood that the electronic device 200 in fig. 5 is represented as a terminal, and is only one possible implementation manner of the embodiment of the disclosure, and is not described in detail. The scene diagrams of fig. 1 and 5 are merely schematic diagrams, and the shape, size, model, and the like of the actual electronic device are not particularly limited.
And, in the scenario shown in fig. 5, the positioning device 180 and the coordinate marking device 190 are disposed at the rear of the calibration device 170, but in an actual scenario, the present invention is not limited thereto. For example, the positioning device 180 and the coordinate marking device 190 may also be disposed on the side of the vehicle, so that the side of the vehicle where the coordinate marking device 190 is disposed is only required to face the camera 120, and it is ensured that the first image captured by the camera 120 includes the coordinate marking device 190.
In the embodiment of the present disclosure, the coordinate marking device 190 has various structures and materials. For example, the plane for bearing the coordinate marking function in the coordinate marking device 190 may have any structure, such as a rectangle, a circle, a polygon, and the like, and the embodiment of the present disclosure is not particularly limited thereto. The coordinate marking device 190 may be made of a lamp tube, and in other embodiments, the coordinate marking device 190 may also be made of a metal material, for example, a single metal element material, or an alloy material; alternatively, the coordinate marking apparatus 190 may also be of a plastic material.
For example, please refer to fig. 6 and fig. 7. Fig. 6 is a schematic diagram of a calibration apparatus provided in an embodiment of the present disclosure; fig. 7 is a schematic diagram of a virtual coordinate system according to an embodiment of the disclosure. Specifically, fig. 6 specifically shows a schematic diagram of one calibration apparatus in the calibration scenario shown in fig. 5, and fig. 7 shows a schematic diagram of a virtual coordinate system constructed by the coordinate marking apparatus 190 in fig. 6.
In the embodiment shown in fig. 6, the coordinate marking device 190 has a rectangular structure, the intersection line of the plane where the coordinate marking device 190 is located and the ground is a line 1, and the calibration point a is located on the line 1.
As shown in fig. 7, the upper left corner of the coordinate marking device 190 may be taken as the origin O of the virtual coordinate system, whose two coordinate axes are denoted the OX axis and the OY axis, with the OX axis pointing to the right and the OY axis pointing downward. In this way, the virtual position of an object in the virtual coordinate system can be described by its virtual coordinates (X, Y).
For example, if the coordinate marking device 190 is a rectangular frame of a × b in fig. 7, then, in the virtual coordinate system, the four vertices of the coordinate marking device 190 can be represented as: upper left corner (0, 0), upper right corner (b, 0), lower left corner (0, a), lower right corner (b, a).
In the embodiment shown in fig. 7, the calibration point A is located on the central axis of the coordinate marking device 190, so its component on the OX axis is b/2; since the distance between the calibration point A and the lower frame of the coordinate marking device 190 is h, its component on the OY axis is a + h. The virtual coordinates of the calibration point A in the virtual coordinate system are therefore (b/2, a + h).
It should be noted that fig. 7 is only an embodiment of the present disclosure, and in an actual implementation scenario of the present solution, the calibration point a may also be disposed at any position on the line 1. For example, the calibration point may be a position on the line 1 where the OX axis component is 0, and at this time, the virtual coordinate of the calibration point a is (0, a + h). For another example, the calibration point may be a position on the line 1 where the OX axis component is 2a, and in this case, the virtual coordinate of the calibration point a is (2a, a + h). For another example, the calibration point may be a position on the line 1 where the OX axis component is 0.8b, and in this case, the virtual coordinate of the calibration point a is (0.8b, a + h).
Furthermore, the calibration device may indicate one or more calibration points, and thus one or more calibration points may be present in one first image. For example, in the scenario shown in fig. 7, (0, a + h), (b/2, a + h), (b, a + h) may also be selected as the calibration points on the line 1.
In addition, in the embodiment shown in fig. 7, the vertices in the coordinate marking device 190 are used as reference points, so the number of vertices in the coordinate marking device 190 may be increased to improve the calibration accuracy of the homography matrix of the first image.
Exemplarily, referring to fig. 8, fig. 8 shows a schematic diagram of another virtual coordinate system provided by the embodiment of the present disclosure. As shown in fig. 8, the coordinate marking device is composed of a plurality of lamps (or iron pipes), with n lamps in the horizontal direction and m lamps in the vertical direction, yielding n × m vertices. The top left corner of the coordinate marking device is again used as the origin O of the virtual coordinate system, with the OX axis pointing to the right and the OY axis pointing downward. In this way, the virtual coordinates of each vertex and of the calibration point A can be obtained. Compared with fig. 7, fig. 8 includes more vertices, and using all of them as reference points to calibrate the homography matrix of the first image is beneficial to improving the calibration accuracy.
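The virtual coordinates of the n × m lamp vertices follow directly from the grid geometry. A small sketch, where the function name and the assumption of uniform spacings dx and dy are illustrative:

```python
def lamp_grid_virtual_coords(n, m, dx, dy):
    """Virtual coordinates of the n x m vertices of the lamp grid.

    Origin O is the top-left vertex, the OX axis points right and the OY
    axis points down; dx and dy are the horizontal and vertical lamp spacings.
    """
    return [(i * dx, j * dy) for j in range(m) for i in range(n)]
```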
Based on the embodiments shown in fig. 6 to 8, since the calibration point A is not actually a point on the coordinate marking device 190, the electronic device cannot directly read the pixel coordinates of the calibration point A in the first image after acquiring it.
Thus, the pixel coordinates of the calibration point A can be acquired as follows:
First, the virtual coordinates of the calibration point in the virtual coordinate system are obtained, together with the homography matrix of the first image, which describes the mapping relationship between the virtual coordinate system and the pixel coordinate system of the first image. The virtual coordinates of the calibration point are then processed with the homography matrix of the first image to obtain the pixel coordinates of the calibration point A.
Since the first images are collected during the movement of the calibration device 170, the position of the calibration device 170 differs between first images, and the position of the calibration point A therefore differs as well. Accordingly, when the scheme is implemented, a homography matrix of the first image to which each calibration point belongs needs to be acquired.
Specifically, the homography matrix of any first image can be obtained through a plurality of reference points selected on the coordinate marking device; in other words, the reference points are all located on the coordinate marking device. In a specific implementation, the pixel coordinates of the plurality of reference points are obtained in the first image, the virtual coordinates of each reference point in the virtual coordinate system are obtained, and the homography matrix of the first image is then calculated based on the virtual coordinates and the pixel coordinates of the plurality of reference points. The calculation method is the same as that for the homography matrix of the camera device and is described in detail later.
In the disclosed embodiment, the reference point may be any point on the coordinate marking device 190. In an exemplary embodiment, the reference point may be a vertex of the coordinate marking device 190, and in this embodiment, the pixel coordinates of the reference point can be directly obtained in the first image without additional marking. Alternatively, in another exemplary embodiment, the position of the reference point may also be indicated by a mark having a display indication function, such as a color, a light, and the like, and thus, the pixel coordinates of the reference point may also be directly acquired in the first image.
The virtual coordinate of the reference point may be obtained based on the virtual coordinate system constructed by the coordinate marking device 190 and the position of the reference point in the coordinate marking device 190, and is not described again.
In the embodiments shown in fig. 5 to fig. 8, during the movement of the calibration device 170, the positioning device 180 may continuously acquire the positioning data, so that the electronic device needs to acquire the positioning data of the corresponding time of each first image from the positioning data of the positioning device. This may be done based on a time stamp of the positioning data and a time stamp of the image data, both time stamps being the time instants when the respective acquisition devices acquired the data.
In other words, when the electronic device executes step S404 in fig. 4, it may, based on the timestamp of the first image, look up in the positioning data the world coordinates of the positioning device at the time indicated by that timestamp, and then acquire the world coordinates of the calibration point based on the world coordinates of the positioning device.
In this case, the positioning data that matches the time stamp of the first image may be acquired in the positioning data, and thus, the world coordinates of the positioning device acquired at the same time as the first image may be obtained.
However, the frequencies at which the camera device and the positioning device collect data may differ, or accumulated errors may exist, so the positioning data may contain no world coordinates whose timestamp matches that of the first image. Considering this, the electronic device may first determine whether such matching positioning data exists. If not, it acquires a plurality of world coordinates whose timestamps differ from the timestamp of the first image by no more than a preset difference, and then performs interpolation on these world coordinates to obtain the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
For example, if the timestamp of the first image is 15:40:00, it may be determined whether world coordinates with a timestamp of 15:40:00 exist in the positioning data from the positioning device. If they exist, they are acquired directly as the world coordinates of the positioning device at that time. If they do not exist, world coordinates with nearby timestamps may be acquired, for example at 15:39:40 and at 15:40:30, and the two may be interpolated to obtain the world coordinates at 15:40:00.
In this embodiment, the acquired plurality of world coordinates used for interpolation processing may be positioning data on the same side of the timestamp of the first image (before the timestamp of the first image, or after the timestamp of the first image), or may be positioning data on both sides of the timestamp of the first image (both before the timestamp of the first image and after the timestamp of the first image). And are not exhaustive or repeated.
In addition, in the plurality of acquired world coordinates used for interpolation processing, the time difference between the time stamp of each world coordinate and the time stamp of the first image is within a preset difference value, so that the world coordinates obtained by interpolation are closer to the actual position of the positioning device, and the calibration precision is improved.
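The timestamp-matching and interpolation logic above can be sketched as follows; linear interpolation between the two bracketing samples is one simple choice, and the function and variable names are illustrative:

```python
from bisect import bisect_left

def position_at(track, t):
    """World coordinates of the positioning device at image timestamp t.

    `track` is a list of (timestamp, x, y) samples sorted by timestamp;
    t must lie within the track's time span. An exact timestamp match is
    returned directly; otherwise the two bracketing samples are linearly
    interpolated.
    """
    times = [sample[0] for sample in track]
    i = bisect_left(times, t)
    if i < len(times) and times[i] == t:
        return track[i][1], track[i][2]
    (t0, x0, y0), (t1, x1, y1) = track[i - 1], track[i]
    w = (t - t0) / (t1 - t0)
    return x0 + w * (x1 - x0), y0 + w * (y1 - y0)
```

In practice, the preset maximum time difference described above would also be enforced before trusting an interpolated value.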
After the world coordinates of the positioning device are obtained, the relationship between the positioning device and the calibration point needs to be considered when the world coordinates of the calibration point are obtained.
As before, in the embodiments of the present disclosure, the number of the positioning devices may be one or more.
In the scenarios shown in fig. 5 to 7, one positioning device is shown. In this scene, accurate positioning of the calibration point is achieved with a single positioning device, and the world coordinates of the calibration point can be kept consistent with those of the positioning device.
In one specific embodiment, the positioning device 180 lies in the same plane as the coordinate marking device 190, this plane is perpendicular to the ground, and the ground projection of the positioning device is determined as the calibration point, so the world coordinates of the calibration point are the world coordinates of the positioning device. In this case, the positioning data collected by the positioning device 180 is effectively the positioning data of the calibration point. Since the world coordinates of the positioning device 180 coincide with those of the calibration point A, the electronic device only needs to acquire the world coordinates of the positioning device 180 to obtain the world coordinates of the calibration point. This implementation is simple and feasible and requires no additional computation.
In the embodiment of the present disclosure, the number of positioning devices 180 may also be plural. In this case, the positional relationship between the positioning devices 180 and the coordinate marking device 190 is not particularly limited: all of the positioning devices 180 may be coplanar with the coordinate marking device 190, some of them may be coplanar with it, or none of them may be coplanar with it.
In this case, the positioning data collected by each positioning device is the position (world coordinates) of that positioning device; the world coordinates of the calibration point A can then be calculated from the world coordinates of the plurality of positioning devices 180 and the positional relationship between them. Acquiring positioning data through a plurality of positioning devices 180 is beneficial to improving the precision of the positioning data, and hence the calibration precision and the positioning precision.
Exemplarily, please refer to fig. 9, which shows a schematic diagram of a mounting manner of positioning devices according to an embodiment of the present disclosure. As shown in fig. 9, a plurality of positioning devices 180 and a coordinate marking device 190 are included, and the positioning devices 180 are not located in the plane of the coordinate marking device 190.
In the scenario shown in fig. 9, an OZ axis is constructed perpendicular to the plane of the coordinate marking device 190, yielding the three-dimensional virtual coordinate system shown in fig. 9. In this three-dimensional virtual coordinate system, since the position of the coordinate marking device 190 is fixed, the position of the calibration point A relative to the virtual coordinate system, and relative to the calibration device 170, is also fixed. In this manner, the world coordinates of the calibration point A may be calculated based on the positional relationship of the plurality of positioning devices 180 within the plane defined by the OX axis and the OZ axis. For example, in fig. 9, k positioning devices 180 with virtual coordinates (X1, Z1), (X2, Z2), …, (Xk, Zk) in the plane defined by the OX and OZ axes, together with the world coordinates of each positioning device 180, are used to calculate the world coordinates of the calibration point A, where k is greater than or equal to 2; fig. 9 schematically illustrates 3 positioning devices 180.
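One way this computation could be realized is sketched below, under the assumptions that k ≥ 3 non-collinear devices are available and that the map from plane coordinates (X, Z) to horizontal world coordinates can be modeled as a 2-D affine transform fitted by least squares. Both the modeling choice and all names are illustrative, not prescribed by the disclosure:

```python
import numpy as np

def fit_plane_to_world(local_pts, world_pts):
    """Least-squares 2-D affine map M with world ~ M @ [X, Z, 1].

    local_pts: (X, Z) coordinates of the k positioning devices in the plane;
    world_pts: the corresponding measured world coordinates.
    """
    local = np.asarray(local_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    A = np.hstack([local, np.ones((len(local), 1))])
    M, *_ = np.linalg.lstsq(A, world, rcond=None)
    return M.T  # shape (2, 3)

def local_to_world(M, x, z):
    """Apply the fitted map to the calibration point's plane coordinates."""
    return M @ np.array([x, z, 1.0])
```

The fitted map is then applied to the known plane coordinates of the calibration point A to obtain its world coordinates.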
As mentioned above, when the world coordinates of the calibration point A are obtained from the world coordinates of a plurality of positioning devices 180, the plane of the coordinate marking device 190 may or may not be perpendicular to the ground. When it is not perpendicular, the world coordinates of the calibration point A can be calculated from the driving direction of the calibration device 170, the angle between the plane of the coordinate marking device 190 and the ground, the height of the positioning devices 180 above the ground, the position of the calibration point A, and the world coordinates of the positioning devices 180. In this case, the number of positioning devices 180 may be one or more.
Based on the foregoing processing, the pixel coordinates and world coordinates of the plurality of calibration points may be acquired during the movement of the calibration device, so that the homography matrix of the imaging apparatus may be obtained through calculation, and the calculation manner is described in detail later.
The second positioning mode is as follows: during the movement of the mobile device, the calibration point may be determined based on the camera 120 and the positioning device 180.
For example, fig. 10 is a schematic diagram illustrating another positioning scenario provided by the embodiment of the present disclosure. As shown in fig. 10, in this positioning scene, A denotes the calibration point; B denotes the positioning point, located on the positioning device 180 (in fig. 10, specifically the position of the positioning device 180 on the calibration device 170), and B' is the ground projection of B, whose world coordinates are those of the positioning device; C denotes the position of the optical center of the camera device 120, and C' is the ground projection of C, whose world coordinates are those of the camera device 120.
At this time, as shown in fig. 10, the calibration point A lies on the same straight line as the optical center C of the camera device 120 and the positioning point B. More specifically, the calibration point A is located at the intersection of the straight line through the optical center C and the positioning point B with the ground.
Thus, in this embodiment, in the image data collected by the camera device 120, the position of the positioning point B coincides with that of the calibration point A, and the calibration point A is completely occluded by the positioning point B; in the first image, the pixel coordinates of the positioning point B are therefore identical to those of the calibration point A. Accordingly, in the embodiment shown in fig. 10, when S402 is executed, the pixel coordinates of the positioning point B in the first image may be obtained directly as the pixel coordinates of the calibration point A.
On this basis, the positioning device 180 can be marked with indicators such as color or lights, so that it is more conspicuous in the first image, which facilitates acquiring more accurate pixel coordinates of the positioning device.
In the embodiment shown in fig. 10, the world coordinates of the calibration point A also need to be acquired. In a specific implementation, the world coordinates of the positioning point, determined by the positioning data of the positioning device, and the world coordinates of the camera device are acquired, and the world coordinates of the calibration point are then derived from these two.
The world coordinates of the positioning point are derived from the positioning data of the positioning device 180 and are not described in detail here. In addition, the positioning device 180 is fixedly arranged on the calibration device 170, and the vertical distance between the positioning device 180 and the ground (i.e., the distance between B and B') is also fixed.
The camera 120 itself is fixed at the roadside, so the world coordinates of the camera 120 (i.e., the world coordinates of the point C') and the vertical distance between the optical center of the camera 120 and the ground (the distance between C and C') can be obtained. In this scheme, these data may come from manual measurement or manual input by calibration personnel; alternatively, the distance between C and C' may be measured automatically by a length measuring device, and the world coordinates of C' may be measured automatically by a positioning device.
On this basis, when the world coordinates of the calibration point are specifically acquired, the triangle similarity theorem can be used. As shown in fig. 10, since the triangle formed by A, B and B' is similar to the triangle formed by A, C and C', the world coordinates of the camera 120 and the world coordinates of the positioning device 180 can be processed based on the triangle similarity theorem to obtain the world coordinates of the calibration point A.
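As a minimal sketch of this similar-triangles step (the function name and the flat-ground assumption are illustrative, not part of the disclosure): with C' and B' the ground projections of the optical center and the positioning point, the triangles A-C-C' and A-B-B' give A = C' + (B' - C') · h_C / (h_C - h_B), where h_C and h_B are the heights of C and B above the ground.

```python
import numpy as np

def ground_calibration_point(cam_xy, cam_height, pos_xy, pos_height):
    """Intersect the ray from the optical center C through the positioning
    point B with the ground plane. By similarity of triangles A-B-B' and
    A-C-C', the calibration point A lies at C' + t*(B' - C') with
    t = h_C / (h_C - h_B)."""
    cam_xy = np.asarray(cam_xy, dtype=float)   # C': ground projection of optical center
    pos_xy = np.asarray(pos_xy, dtype=float)   # B': ground projection of positioning point
    t = cam_height / (cam_height - pos_height)  # similarity scale factor
    return cam_xy + t * (pos_xy - cam_xy)

# Example: camera optical center 10 m high at (0, 0), positioning point
# 5 m high above (4, 0) -> calibration point lands at (8, 0).
a = ground_calibration_point((0.0, 0.0), 10.0, (4.0, 0.0), 5.0)
```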
In the second positioning method provided in the embodiment of the present disclosure, in addition to the scheme shown in fig. 10, in which a single positioning device 180 is used to obtain the world coordinates of the calibration point, a plurality of positioning devices may be mounted on the calibration device 170.
For example, fig. 11 is a schematic diagram of another positioning scenario provided by the embodiment of the present disclosure. As shown in fig. 11, the calibration device 170 used in fig. 11 is a drone, and the position of the calibration device 170 is determined as the positioning point. Moreover, a plurality of positioning devices 180 (4 are shown in fig. 11) are disposed on the calibration device 170; these positioning devices 180 acquire positioning data during the flight of the drone, so that the positioning data of the drone can be determined.
In the embodiment shown in fig. 11, the flight height of the drone may be obtained through communication with the calibration device 170, so that the distance between B and B' is known. Based on the positioning data of the plurality of positioning devices, the world coordinates of the positioning point B can also be obtained.
The manner of obtaining the world coordinates of the positioning point B will now be described with respect to the embodiments shown in fig. 10 and 11. Specifically, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may be acquired according to that timestamp, and the world coordinates of the positioning point may then be acquired based on the world coordinates of the positioning device.
Here, as in the first positioning method, the timestamp of the image data may not correspond to any timestamp of the positioning data; in this case, the world coordinates of the positioning device at the time indicated by the timestamp of the first image may be obtained by interpolation.
The world coordinates of the positioning device are obtained by interpolation in the same way as in the first positioning method. That is, it is determined whether the positioning data contains an entry with the same timestamp as the first image; when it does not, a plurality of world coordinates are acquired whose timestamps differ from the timestamp of the first image by no more than a preset difference, and interpolation is then performed on these world coordinates to obtain the world coordinates of the positioning device at the time indicated by the timestamp of the first image. This will not be described in detail again.
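The interpolation step above can be sketched as follows. This is an illustrative linear interpolation only (the function name, the `max_gap` threshold, and the two-sample bracketing are assumptions; the disclosure does not fix the interpolation scheme):

```python
def interpolate_position(samples, t_image, max_gap=0.5):
    """samples: list of (timestamp, (x, y)) world coordinates, sorted by time.
    Returns the world coordinate at t_image: an exact-timestamp match if one
    exists, otherwise a linear interpolation between the two samples
    bracketing the image timestamp, provided both lie within max_gap
    seconds (the 'preset difference') of t_image."""
    for ts, xy in samples:
        if ts == t_image:          # positioning data with the same timestamp exists
            return xy
    before = max((s for s in samples if s[0] < t_image), key=lambda s: s[0])
    after = min((s for s in samples if s[0] > t_image), key=lambda s: s[0])
    if t_image - before[0] > max_gap or after[0] - t_image > max_gap:
        raise ValueError("no positioning sample close enough to the image timestamp")
    w = (t_image - before[0]) / (after[0] - before[0])  # interpolation weight
    x = before[1][0] + w * (after[1][0] - before[1][0])
    y = before[1][1] + w * (after[1][1] - before[1][1])
    return (x, y)

# Example: image timestamp falls midway between two GPS fixes.
xy = interpolate_position([(0.0, (0.0, 0.0)), (1.0, (2.0, 4.0))], 0.5)
```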
When a plurality of positioning devices exist, the world coordinates of each positioning device can be obtained according to the above manner, which is not described in detail.
After the world coordinates of each positioning device are obtained, the world coordinates of the positioning point are determined according to the positional relationship among the positioning devices, which is not described in detail.
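For the multi-device case of fig. 11, one simple instance of "determining the positioning point from the positional relationship among the devices" is the symmetric-mounting case, where the positioning point is the mean of the receivers' coordinates. This is a hypothetical sketch; the actual positional relationship depends on how the devices are mounted on the calibration device:

```python
import numpy as np

def locate_from_receivers(receiver_coords):
    """World coordinates of the positioning point when several positioning
    devices are mounted symmetrically around it: the point is the centroid
    of the receivers' world coordinates (assumes symmetric mounting)."""
    return np.mean(np.asarray(receiver_coords, dtype=float), axis=0)

# Example: four receivers at the corners of the drone, positioning point
# at their center.
p = locate_from_receivers([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)])
```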
Based on either of the above positioning modes, the pixel coordinates and world coordinates of a plurality of calibration points can be obtained through the movable calibration equipment. On this basis, the homography matrix of the image pickup apparatus can be calculated.
For a specific imaging device, the pixel coordinates and world coordinates of any object, such as a calibration point, may satisfy the following formula:
$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} \\ h_{2,1} & h_{2,2} & h_{2,3} \\ h_{3,1} & h_{3,2} & h_{3,3} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} $$
wherein (x_w, y_w) are the world coordinates of an object (e.g., a calibration point), (u, v) are the pixel coordinates of the object, the 3×3 matrix [h_{i,j}] is the homography matrix of the camera device, h_{i,j} are its matrix parameters (the subscripts i and j distinguish the parameters and each take values from 1 to 3), and z_c is a scale parameter of the three-dimensional coordinates.
Based on this, the pixel coordinates and world coordinates of a plurality of calibration points can be substituted into the formula, and each matrix parameter in the homography matrix can be obtained by solving the resulting system of equations, thus obtaining the homography matrix of the camera device.
Since this formula involves 8 unknown parameters (the homography is defined only up to scale), at least 4 calibration points are required to calibrate the homography matrix of the image pickup apparatus with this scheme.
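Solving this equation system from point correspondences is commonly done with the Direct Linear Transform (DLT); a minimal sketch follows (the function name is illustrative, and the SVD null-space solution is one standard way to solve the system, not necessarily the one the disclosure intends). Each calibration point contributes two linear equations in the nine entries of H, and with the scale fixed by h_{3,3} = 1 the 8 unknowns need at least 4 points:

```python
import numpy as np

def estimate_homography(world_pts, pixel_pts):
    """Direct Linear Transform: each (world, pixel) correspondence yields
    two rows of a 2N x 9 system A h = 0; the null-space vector (last right
    singular vector) reshaped to 3x3 is the homography, normalized so that
    h_{3,3} = 1. Requires N >= 4 points."""
    A = []
    for (xw, yw), (u, v) in zip(world_pts, pixel_pts):
        A.append([xw, yw, 1, 0, 0, 0, -u * xw, -u * yw, -u])
        A.append([0, 0, 0, xw, yw, 1, -v * xw, -v * yw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale

# Example: 4 world/pixel correspondences generated by a known homography.
H_true = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 2.0], [0.0, 0.0, 1.0]])
world = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
pixel = [(1.0, 2.0), (3.0, 2.0), (1.0, 5.0), (3.0, 5.0)]
H = estimate_homography(world, pixel)
```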
It should be further noted that, in the foregoing first positioning manner, when the homography matrix of the first image is obtained, the world coordinates in the above formula may be replaced with the virtual coordinates of the reference points; substituting the virtual coordinates and pixel coordinates of each reference point into the formula and solving the equation system yields the homography matrix of the first image. Thus, at least 4 reference points are likewise required to calibrate the homography matrix of the first image. This will not be described in detail again.
In addition, when an image (referred to as a second image in this disclosure) is acquired by the imaging device, only the target pixel coordinates of the target object in the second image need to be acquired; substituting the target pixel coordinates into the above formula, the target world coordinates of the target object in the world coordinate system can be calculated. In this way, the target object is positioned.
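Since the formula maps world coordinates to pixel coordinates, positioning a pixel amounts to applying the inverse homography and dehomogenizing. A minimal sketch (function name assumed for illustration):

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Recover the ground-plane world coordinates of a pixel by applying
    the inverse of the world-to-pixel homography H and dividing out the
    homogeneous scale."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example: with H mapping world (1, 1) to pixel (3, 5), the inverse
# mapping returns world (1, 1) for that pixel.
H = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 2.0], [0.0, 0.0, 1.0]])
xw, yw = pixel_to_world(H, 3.0, 5.0)
```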
In the embodiment of the present disclosure, when the target object in the second image is the target vehicle, if the target vehicle is in communication connection with the electronic device of the scheme execution subject, the electronic device may further send the target world coordinate to the target vehicle after locating the position of the target vehicle. The target vehicle can receive the target world coordinates and can know the position of the target vehicle according to the target world coordinates. Furthermore, the target vehicle can further realize automatic driving and obstacle avoidance based on the received target world coordinates.
The embodiment of the present disclosure further provides another positioning method, please refer to the flowchart shown in fig. 12, and the method includes the following steps:
and S1202, acquiring a second image by using the camera device, wherein the second image comprises the target object.
As mentioned above, the target object may be any object in the second image, which is not described in detail.
And S1204, acquiring target pixel coordinates of the target object in the second image.
After the target object is determined, its coordinates need only be read in the pixel coordinate system of the second image.
S1206, processing the target pixel coordinate by using the homography matrix of the camera device to obtain a target world coordinate of the target object; the homography matrix of the camera is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera.
In the embodiment shown in fig. 12, the execution subject (electronic device) of the positioning method may directly call the homography matrix of the image capturing apparatus to position the target object.
In this embodiment, the homography matrix of the camera may be stored in any location readable by the electronic device, and may include, but is not limited to: the memory of the electronic device, another electronic device (for example, an image capturing apparatus) in communication connection with the electronic device, a storage (including a physical storage and a cloud storage) of the electronic device with data access authority, and the like are not exhaustive.
In this way, when the present positioning method is executed, the electronic apparatus can read the homography matrix of the imaging device, and then process the target pixel coordinates using the homography matrix, thereby obtaining the target world coordinates of the target object.
In the embodiment shown in fig. 12, the homography matrix of the imaging device can be calibrated in the manner shown in S402 to S406 in fig. 4, and the following brief description is given, and reference may be made to the foregoing description for the inexhaustible points.
In other words, in one possible embodiment of fig. 12, the homography matrix of the image capturing apparatus may be acquired as follows: in the process of moving the calibration equipment, the pixel coordinates of the calibration point in the first image are acquired, where the first image includes the calibration device and the calibration point is determined by the calibration device; the world coordinates of the calibration point are acquired, where the world coordinates are determined by a positioning device mounted on the calibration device; and the homography matrix of the camera device is then acquired based on the pixel coordinates and world coordinates of a plurality of calibration points.
In a possible embodiment, the calibration device is provided with a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground.
In this embodiment, when acquiring the pixel coordinates of the calibration point in the first image, the virtual coordinates of the calibration point in the virtual coordinate system may be acquired, and a homography matrix of the first image may be acquired, where the homography matrix of the first image is used to describe a mapping relationship between the virtual coordinate system and the pixel coordinate system in the first image, so that the pixel coordinates of the calibration point are obtained by processing the virtual coordinates of the calibration point with the homography matrix of the first image.
In this embodiment, when the world coordinates of the calibration point are acquired, the world coordinates of the positioning device at the time indicated by the time stamp may be acquired from the time stamp of the first image in the positioning data of the positioning device, and then the world coordinates of the calibration point may be acquired based on the world coordinates of the positioning device.
In another possible embodiment, the calibration point and the optical center and the positioning point of the camera device are in the same straight line; the positioning point is located on the positioning device.
In this embodiment, the pixel coordinates of the calibration point are consistent with the pixel coordinates of the positioning point in the first image, and therefore, the pixel coordinates of the positioning point in the first image can be obtained, and the pixel coordinates of the calibration point can be obtained.
In this embodiment, when the world coordinates of the calibration point are acquired, the world coordinates of the positioning point, which are determined by the positioning data of the positioning apparatus, may be acquired, and the world coordinates of the image pickup device may be acquired, so that the world coordinates of the calibration point are acquired based on the world coordinates of the positioning point and the world coordinates of the image pickup device.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and other operations or variations of various operations may be performed by the embodiments of the present application. Further, the various steps may be performed in a different order than that presented in the above-described embodiments, and it is possible that not all of the operations in the above-described embodiments are performed.
The embodiment of the disclosure also provides an electronic device.
Illustratively, fig. 13 shows a schematic diagram of an electronic device, as shown in fig. 13, the electronic device 1300 includes: a first acquisition module 132, a second acquisition module 134, a first calculation module 136, and a second calculation module 138.
The first obtaining module 132 is configured to obtain a pixel coordinate of the calibration point in the first image in the process of moving the calibration device; the calibration device is contained in the first image, and the calibration point is determined by the calibration device;
a second obtaining module 134, configured to obtain world coordinates of the calibration point, where the world coordinates are determined by a positioning device, and the positioning device is mounted on the calibration device;
a first calculating module 136, configured to obtain a homography matrix of the image capturing apparatus based on the pixel coordinates and the world coordinates of the plurality of calibration points, where the homography matrix of the image capturing apparatus is used to describe a mapping relationship between a pixel coordinate system of the image capturing apparatus and a world coordinate system;
and the second calculating module 138 is configured to, when a second image is acquired, acquire target world coordinates of a target object in the second image by using the homography matrix of the camera.
In a possible embodiment, the calibration device is equipped with a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground.
At this time, the first obtaining module 132 is specifically configured to:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
Further, the first obtaining module 132 is specifically configured to:
acquiring pixel coordinates of a plurality of reference points on a coordinate marking device in the first image;
acquiring a virtual coordinate of each reference point in the virtual coordinate system;
calculating a homography matrix of the first image based on the virtual coordinates and the pixel coordinates of the plurality of reference points.
In this embodiment, the second obtaining module 134 is specifically configured to:
in the positioning data of the positioning equipment, according to the timestamp of the first image, the world coordinate of the positioning equipment at the moment shown by the timestamp is acquired;
and acquiring the world coordinates of the calibration point based on the world coordinates of the positioning equipment.
In a possible mode, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device.
In another possible manner, when there are a plurality of positioning devices, the second obtaining module 134 is specifically configured to:
and calculating the world coordinates of the positioning devices based on the position relations of the positioning devices to obtain the world coordinates of the calibration point.
In another possible embodiment, the calibration point and the optical center and the positioning point of the camera device are in the same straight line; the positioning point is located on the positioning device.
At this time, the first obtaining module 132 is specifically configured to:
and acquiring the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
In this embodiment, the second obtaining module 134 is specifically configured to:
acquiring the world coordinates of the positioning points, wherein the world coordinates of the positioning points are determined by the positioning data of the positioning equipment;
acquiring world coordinates of the camera device;
and acquiring the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
Further, the second obtaining module 134 is specifically configured to:
in the positioning data of the positioning equipment, acquiring the world coordinate of the positioning equipment at the moment shown by the timestamp of the first image according to the timestamp of the first image;
and acquiring the world coordinates of the positioning point based on the world coordinates of the positioning equipment.
Further, the second obtaining module 134 is specifically configured to:
judging whether positioning data with the same timestamp as the first image exists in the positioning data;
when the positioning data which is the same as the timestamp of the first image does not exist in the positioning data, acquiring a plurality of world coordinates, wherein the time difference between the timestamp of each world coordinate and the timestamp of the first image is within a preset difference value;
and performing interpolation processing on the plurality of world coordinates to obtain the world coordinates of the positioning equipment at the time indicated by the timestamp of the first image.
In the embodiment of the present disclosure, the pixel coordinate of any one of the calibration points and the world coordinate of the calibration point satisfy the following formula:
$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} \\ h_{2,1} & h_{2,2} & h_{2,3} \\ h_{3,1} & h_{3,2} & h_{3,3} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} $$
wherein (x_w, y_w) are the world coordinates of the calibration point, (u, v) are the pixel coordinates of the calibration point, the 3×3 matrix [h_{i,j}] is the homography matrix of the camera device, h_{i,j} are its matrix parameters (the subscripts i and j distinguish the parameters and each take values from 1 to 3), and z_c is a scale parameter of the three-dimensional coordinates.
In another possible embodiment, the electronic device 1300 further includes: a transceiver module (not shown in fig. 13); the transceiver module is configured to send the target world coordinate to the target vehicle when the target object in the second image is the target vehicle.
The electronic device of the embodiment shown in fig. 13 may be used to implement the technical solution of the method embodiment shown in fig. 4, and the implementation principle and technical effect may further refer to the related description in the method embodiment.
Fig. 14 shows a schematic diagram of another electronic device. As shown in fig. 14, the electronic device 1400 includes: an acquisition module 142, an obtaining module 144, and a processing module 146.
the acquisition module 142 is configured to acquire a second image by using the camera device, where the second image includes a target object;
an obtaining module 144, configured to obtain target pixel coordinates of the target object in the second image;
a processing module 146, configured to process the target pixel coordinate by using a homography matrix of the camera device, so as to obtain a target world coordinate of the target object; the homography matrix of the camera device is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera device.
In a possible embodiment of the present disclosure, the electronic device 1400 may further include: a calibration module (not shown in FIG. 14) for:
in the process of moving the calibration equipment, acquiring the pixel coordinates of the calibration point in the first image; the calibration device is contained in the first image, and the calibration point is determined by the calibration device;
acquiring world coordinates of the calibration point, wherein the world coordinates are determined by positioning equipment, and the positioning equipment is carried on the calibration equipment;
and acquiring a homography matrix of the camera device based on the pixel coordinates and world coordinates of the plurality of calibration points.
In one possible design, the calibration device is equipped with a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground.
In this embodiment, on the one hand, the calibration module is specifically configured to:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
In another aspect, the calibration module is specifically configured to:
in the positioning data of the positioning equipment, according to the timestamp of the first image, the world coordinate of the positioning equipment at the moment shown by the timestamp is acquired;
and acquiring the world coordinates of the calibration point based on the world coordinates of the positioning equipment.
In another possible design, the calibration point and the optical center and the positioning point of the camera device are in the same straight line; the positioning point is located on the positioning device.
In this embodiment, on the one hand, the calibration module is specifically configured to: acquire the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
In another aspect, the calibration module is specifically configured to:
acquiring the world coordinates of the positioning points, wherein the world coordinates of the positioning points are determined by the positioning data of the positioning equipment;
acquiring world coordinates of the camera device;
and acquiring the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
The electronic device of the embodiment shown in fig. 14 may be used to implement the technical solution of the method embodiment shown in fig. 12, and the implementation principle and the technical effect may be further referred to in the related description of the method embodiment.
It should be understood that the division of the modules of the electronic devices shown in fig. 13 and fig. 14 is only a logical division; in actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented as software invoked by a processing element; or all implemented in hardware; or some implemented as software invoked by a processing element and some implemented in hardware. For example, the first calculation module in fig. 13 may be a separately established processing element, or may be integrated in a chip of an electronic device such as a terminal, or may be stored in the memory of the electronic device in the form of a program whose functions are invoked and executed by a processing element of the electronic device. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). As another example, when one of the above modules is implemented in the form of a program scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of invoking programs. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 15 shows a physical structure diagram of an electronic device. As shown in fig. 15, the electronic device 1500 includes: at least one processor 152 and memory 154; the memory 154 stores computer-executable instructions; the at least one processor 152 executes computer-executable instructions stored by the memory 154 to cause the at least one processor 152 to perform a positioning method as provided by any of the preceding embodiments.
The processor 152 may also be referred to as a processing unit, and may implement a certain control function. The processor 152 may be a general purpose processor, a special purpose processor, or the like.
In an alternative design, the processor 152 may also store instructions, which can be executed by the processor 152, so that the electronic device 1500 executes the positioning method described in the above method embodiment.
In yet another possible design, electronic device 1500 may include circuitry that may implement the functionality of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the electronic device 1500 may include one or more memories 154, on which instructions or intermediate data are stored, and the instructions may be executed on the processor, so that the electronic device 1500 performs the positioning method described in the above method embodiment. Optionally, other relevant data may also be stored in the memory 154. Optionally, instructions and/or data may also be stored in the processor 152. The processor 152 and the memory 154 may be provided separately or may be integrated together.
Optionally, the electronic device 1500 may also include a transceiver 156. The transceiver 156 may also be referred to as a transceiver unit, a transceiver, a transceiving circuit, a transceiver, or the like, for implementing transceiving functions of the electronic device.
For example, if the electronic device 1500 is used to implement the operation of acquiring the positioning data and the image data corresponding to the embodiment shown in fig. 5, for example, the transceiver may receive the image data from the camera, and the transceiver may also receive the positioning data from the positioning device. The transceiver 156 may further perform other corresponding communication functions. And the processor 156 is configured to perform the corresponding determination or control operations, and optionally, may store corresponding instructions in the memory. The specific processing manner of each component can be referred to the related description of the previous embodiment.
The processor 152 and transceiver 156 described herein may be implemented on an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, or the like. The processor and transceiver may also be fabricated using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), and the like.
Alternatively, electronic device 1500 may be a stand-alone device or may be part of a larger device. For example, the device may be:
(1) a stand-alone integrated circuit IC, or chip, or system-on-chip or subsystem;
(2) a set of one or more ICs, which optionally may also include storage components for storing data and/or instructions;
(3) an ASIC, such as a modem (MSM);
(4) a module that may be embedded within other devices;
(5) receivers, terminals, cellular telephones, wireless devices, handsets, mobile units, network devices, and the like;
(6) others, and so forth.
The embodiment of the disclosure also provides a positioning system. Referring to fig. 16, the positioning system 1600 includes:
a calibration device 170 carrying one or more positioning devices 180;
a camera 120 for capturing images;
the electronic device 200 is configured to perform the positioning method provided in any of the foregoing embodiments.
The functional architecture and the hardware architecture of the electronic device 200 may refer to fig. 13 to 15, which are not described in detail.
In a possible embodiment, the positioning system 1600 may further include: coordinate marking device 180, as shown in FIG. 5.
In one possible embodiment, the positioning system 1600 may be a V2X system; in this case, the calibration apparatus 170 is a vehicle, and the calibration apparatus 170 communicates with the imaging device 120 and the electronic apparatus 200.
In one possible embodiment, the positioning system 1600 may be a V2X system; at this time, the system may further include a target vehicle, such as the vehicle 130 in the positioning scene shown in fig. 1, which is in communication with the camera 120 and the electronic device 200.
The embodiment of the disclosure also provides a positioning system. Referring to fig. 17, the positioning system 1700 includes:
a camera 120 for capturing images;
an electronic device 200 for performing the positioning method according to any one of the embodiments covered by fig. 12.
At this time, the functional architecture of the electronic device 200 may refer to fig. 14, and the hardware architecture may refer to fig. 15, which is not described in detail.
In one possible embodiment, the positioning system 1700 may be a V2X system.
In yet another embodiment, when the positioning system 1700 is a V2X system, the positioning system 1700 may further include a target vehicle, such as the vehicle 130 in the positioning scene shown in fig. 1, which communicates with the camera 120 and the electronic device 200.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program runs on a computer, the computer is caused to execute the positioning method described in the foregoing embodiments.
In addition, the present application further provides a computer program product including a computer program; when the computer program runs on a computer, the computer executes the positioning method described in the foregoing embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take, in whole or in part, the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state drive), among others.
The foregoing description presents only preferred embodiments of the disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by interchanging the above features with (but not limited to) features disclosed in this disclosure that have similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Claims (36)

1. A method of positioning, comprising:
in a process of moving a calibration device, acquiring pixel coordinates of a calibration point in a first image, wherein the first image contains the calibration device and the calibration point is determined by the calibration device;
acquiring world coordinates of the calibration point, wherein the world coordinates are determined by a positioning device carried on the calibration device;
acquiring a homography matrix of a camera device based on the pixel coordinates and the world coordinates of a plurality of calibration points, wherein the homography matrix of the camera device describes the mapping relation between a pixel coordinate system of the camera device and a world coordinate system; and
when a second image is acquired, acquiring target world coordinates of a target object in the second image by using the homography matrix of the camera device.
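For illustration only (not part of the claimed subject matter): the calibration step of claim 1 amounts to estimating a planar homography from pixel/world point correspondences collected while the calibration device moves. A minimal pure-Python sketch of the direct linear transform (DLT) estimate, fixing h_3,3 = 1 and solving the normal equations in a least-squares sense; all function names are illustrative:

```python
def _solve(A, b):
    """Gaussian elimination with partial pivoting on the square system A h = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return h


def estimate_homography(world_pts, pixel_pts):
    """Estimate the 3x3 homography H with z_c*[u, v, 1]^T = H*[x_w, y_w, 1]^T
    from >= 4 world/pixel correspondences (DLT, h_3,3 fixed to 1)."""
    assert len(world_pts) == len(pixel_pts) >= 4
    A, b = [], []
    for (x, y), (u, v) in zip(world_pts, pixel_pts):
        # Each correspondence contributes two linear equations in the 8 unknowns.
        A.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y]); b.append(u)
        A.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y]); b.append(v)
    # Normal equations A^T A h = A^T b (least squares when > 4 points).
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(8)]
           for i in range(8)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(8)]
    h = _solve(AtA, Atb)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]
```

In practice a library routine such as OpenCV's `findHomography` (with outlier rejection) would typically replace this hand-rolled solver.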
2. The method according to claim 1, wherein the calibration device carries a coordinate marking device, and the coordinate marking device is used for determining a virtual coordinate system; the calibration point is located on the intersection line of the plane where the coordinate marking device is located and the ground;
the obtaining of the pixel coordinates of the index point in the first image includes:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
3. The method of claim 2, wherein said obtaining a homography matrix for the first image comprises:
acquiring pixel coordinates of a plurality of reference points in the first image; the plurality of reference points are located on the coordinate marking device;
acquiring a virtual coordinate of each reference point in the virtual coordinate system;
calculating a homography matrix of the first image based on the virtual coordinates and the pixel coordinates of the plurality of reference points.
4. The method according to any one of claims 1-3, wherein the obtaining of the world coordinates of the calibration point comprises:
acquiring, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image; and
acquiring the world coordinates of the calibration point based on the world coordinates of the positioning device.
5. The method of claim 4, wherein, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device.
6. The method according to claim 4, wherein, when there are a plurality of positioning devices, the obtaining of the world coordinates of the calibration point comprises:
performing calculation on the world coordinates of the plurality of positioning devices based on their positional relation to obtain the world coordinates of the calibration point.
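For illustration only: with several positioning devices mounted on the calibration rig at known relative positions, the calibration point's world coordinates can be computed from the devices' coordinates; in the simplest symmetric mounting the calculation of claim 6 reduces to a (weighted) centroid of the devices' ground-projected coordinates. A hypothetical sketch, with all names illustrative:

```python
def calibration_point(device_coords, weights=None):
    """World coordinates of the calibration point derived from several
    positioning devices on the calibration rig.  With a symmetric mounting
    this is the centroid of the devices' (x, y) world coordinates; asymmetric
    mountings can be handled with per-device weights."""
    n = len(device_coords)
    if weights is None:
        weights = [1.0 / n] * n  # plain centroid by default
    x = sum(w * c[0] for w, c in zip(weights, device_coords))
    y = sum(w * c[1] for w, c in zip(weights, device_coords))
    return (x, y)
```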
7. The method according to claim 1, wherein the calibration point, the optical center of the camera device, and a positioning point are located on the same straight line, the positioning point being located on the positioning device;
wherein the obtaining of the pixel coordinates of the calibration point in the first image comprises:
acquiring the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
8. The method of claim 7, wherein the obtaining of the world coordinates of the calibration point comprises:
acquiring the world coordinates of the positioning point, wherein the world coordinates of the positioning point are determined by the positioning data of the positioning device;
acquiring the world coordinates of the camera device; and
acquiring the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
9. The method of claim 8, wherein the obtaining world coordinates of the positioning point comprises:
in the positioning data of the positioning equipment, acquiring the world coordinate of the positioning equipment at the moment shown by the timestamp of the first image according to the timestamp of the first image;
and acquiring the world coordinates of the positioning point based on the world coordinates of the positioning equipment.
10. The method according to claim 4 or 9, wherein the acquiring, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image comprises:
determining whether the positioning data contains positioning data whose timestamp is the same as that of the first image;
when no positioning data with the same timestamp as the first image exists, acquiring a plurality of world coordinates, wherein the time difference between the timestamp of each world coordinate and the timestamp of the first image is within a preset difference value; and
performing interpolation on the plurality of world coordinates to obtain the world coordinates of the positioning device at the time indicated by the timestamp of the first image.
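For illustration only, a sketch of the timestamp-matching and interpolation logic of claim 10: return the fix whose timestamp equals the image timestamp if one exists; otherwise linearly interpolate between the two bracketing fixes, provided each lies within a preset time difference. All names and the 0.5 s default are illustrative assumptions:

```python
def position_at(ts, fixes, max_gap=0.5):
    """World coordinate of the positioning device at image timestamp `ts`,
    from time-stamped fixes [(t, x, y), ...].  Returns the exact fix when one
    matches `ts`; otherwise linearly interpolates between the bracketing
    fixes, provided each lies within `max_gap` seconds of `ts`."""
    for t, x, y in fixes:
        if t == ts:
            return (x, y)  # a fix with the same timestamp exists
    before = [f for f in fixes if f[0] < ts]
    after = [f for f in fixes if f[0] > ts]
    if not before or not after:
        return None  # no bracketing pair of fixes
    t0, x0, y0 = max(before)   # latest fix before ts
    t1, x1, y1 = min(after)    # earliest fix after ts
    if ts - t0 > max_gap or t1 - ts > max_gap:
        return None  # gap too large to interpolate reliably
    a = (ts - t0) / (t1 - t0)
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
```

For long fix lists, `bisect` on a pre-sorted timestamp array would find the bracketing pair in logarithmic time.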
11. The method according to any one of claims 1-10, wherein the pixel coordinates of any one calibration point and the world coordinates of that calibration point satisfy the following formula:

z_c · [u, v, 1]^T = H · [x_w, y_w, 1]^T

wherein (x_w, y_w) is the world coordinate of the calibration point, (u, v) is the pixel coordinate of the calibration point,

    H = | h_1,1  h_1,2  h_1,3 |
        | h_2,1  h_2,2  h_2,3 |
        | h_3,1  h_3,2  h_3,3 |

is the homography matrix of the camera device, h_i,j are the matrix parameters of the homography matrix, wherein the indices i and j distinguish the matrix parameters and each takes a value from 1 to 3, and z_c is a three-dimensional coordinate parameter.
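For illustration only: once the homography H of claim 11 is known, the target world coordinates follow by applying H⁻¹ to the target pixel and dividing out the homogeneous scale. A hypothetical pure-Python sketch (function names are illustrative):

```python
def invert3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]


def pixel_to_world(H, u, v):
    """Solve z_c * [u, v, 1]^T = H * [x_w, y_w, 1]^T for (x_w, y_w):
    apply H^-1 to the homogeneous pixel and normalize by the scale."""
    Hi = invert3(H)
    x = Hi[0][0] * u + Hi[0][1] * v + Hi[0][2]
    y = Hi[1][0] * u + Hi[1][1] * v + Hi[1][2]
    w = Hi[2][0] * u + Hi[2][1] * v + Hi[2][2]
    return (x / w, y / w)
```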
12. The method according to any one of claims 1-11, wherein when the target object in the second image is a target vehicle, the method further comprises:
and sending the target world coordinates to the target vehicle.
13. A method of positioning, comprising:
acquiring a second image by using a camera device, wherein the second image comprises a target object;
acquiring target pixel coordinates of the target object in the second image;
processing the target pixel coordinate by utilizing a homography matrix of a camera device to obtain a target world coordinate of the target object;
the homography matrix of the camera device is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera device.
14. The method of claim 13, further comprising:
in the process of moving the calibration equipment, acquiring the pixel coordinates of the calibration point in the first image; the calibration device is contained in the first image, and the calibration point is determined by the calibration device;
acquiring world coordinates of the calibration point, wherein the world coordinates are determined by positioning equipment, and the positioning equipment is carried on the calibration equipment;
and acquiring a homography matrix of the camera device based on the pixel coordinates and world coordinates of the plurality of calibration points.
15. The method according to claim 14, characterized in that the calibration equipment is loaded with coordinate marking equipment for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground;
the obtaining of the pixel coordinates of the index point in the first image includes:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
16. The method of claim 14 or 15, wherein the obtaining of the world coordinates of the calibration point comprises:
acquiring, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image; and
acquiring the world coordinates of the calibration point based on the world coordinates of the positioning device.
17. The method according to claim 14, wherein the calibration point, the optical center of the camera device, and a positioning point are located on the same straight line, the positioning point being located on the positioning device;
wherein the obtaining of the pixel coordinates of the calibration point in the first image comprises:
acquiring the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
18. The method of claim 17, wherein the obtaining of the world coordinates of the calibration point comprises:
acquiring the world coordinates of the positioning point, wherein the world coordinates of the positioning point are determined by the positioning data of the positioning device;
acquiring the world coordinates of the camera device; and
acquiring the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
19. An electronic device, comprising:
the first acquisition module is used for acquiring the pixel coordinates of the calibration point in the first image in the process of moving the calibration equipment; the calibration device is contained in the first image, and the calibration point is determined by the calibration device;
the second acquisition module is used for acquiring world coordinates of the calibration point, the world coordinates are determined by positioning equipment, and the positioning equipment is carried on the calibration equipment;
the first calculation module is used for acquiring a homography matrix of the camera device based on the pixel coordinates and the world coordinates of the plurality of calibration points, wherein the homography matrix of the camera device is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera device;
and the second calculation module is used for acquiring the target world coordinates of the target object in the second image by using the homography matrix of the camera device when the second image is acquired.
20. The electronic device according to claim 19, wherein the calibration device is equipped with a coordinate marking device for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground;
the first obtaining module is specifically configured to:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
21. The electronic device of claim 20, wherein the first obtaining module is specifically configured to:
acquiring pixel coordinates of a plurality of reference points in the first image; the plurality of reference points are located on the coordinate marking device;
acquiring a virtual coordinate of each reference point in the virtual coordinate system;
calculating a homography matrix of the first image based on the virtual coordinates and the pixel coordinates of the plurality of reference points.
22. The electronic device according to any of claims 19-21, wherein the second obtaining module is specifically configured to:
acquire, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image; and
acquire the world coordinates of the calibration point based on the world coordinates of the positioning device.
23. The electronic device of claim 22, wherein, when there is one positioning device and the calibration point is the ground projection of the positioning device, the world coordinates of the calibration point are the world coordinates of the positioning device.
24. The electronic device according to claim 22, wherein, when there are a plurality of positioning devices, the second obtaining module is specifically configured to:
perform calculation on the world coordinates of the plurality of positioning devices based on their positional relation to obtain the world coordinates of the calibration point.
25. The electronic device according to claim 20, wherein the calibration point, the optical center of the camera device, and a positioning point are located on the same straight line, the positioning point being located on the positioning device;
the first obtaining module is specifically configured to:
acquire the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
26. The electronic device of claim 25, wherein the second obtaining module is specifically configured to:
acquire the world coordinates of the positioning point, wherein the world coordinates of the positioning point are determined by the positioning data of the positioning device;
acquire the world coordinates of the camera device; and
acquire the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
27. The electronic device of claim 26, wherein the second obtaining module is specifically configured to:
in the positioning data of the positioning equipment, acquiring the world coordinate of the positioning equipment at the moment shown by the timestamp of the first image according to the timestamp of the first image;
and acquiring the world coordinates of the positioning point based on the world coordinates of the positioning equipment.
28. The electronic device according to claim 22 or 27, wherein the second obtaining module is specifically configured to:
judging whether positioning data with the same timestamp as the first image exists in the positioning data;
when the positioning data which is the same as the timestamp of the first image does not exist in the positioning data, acquiring a plurality of world coordinates, wherein the time difference between the timestamp of each world coordinate and the timestamp of the first image is within a preset difference value;
and performing interpolation processing on the plurality of world coordinates to obtain the world coordinates of the positioning equipment at the time indicated by the timestamp of the first image.
29. The electronic device according to any one of claims 20-28, wherein the pixel coordinates of any one calibration point and the world coordinates of that calibration point satisfy the following formula:

z_c · [u, v, 1]^T = H · [x_w, y_w, 1]^T

wherein (x_w, y_w) is the world coordinate of the calibration point, (u, v) is the pixel coordinate of the calibration point,

    H = | h_1,1  h_1,2  h_1,3 |
        | h_2,1  h_2,2  h_2,3 |
        | h_3,1  h_3,2  h_3,3 |

is the homography matrix of the camera device, h_i,j are the matrix parameters of the homography matrix, wherein the indices i and j distinguish the matrix parameters and each takes a value from 1 to 3, and z_c is a three-dimensional coordinate parameter.
30. The electronic device of any one of claims 19-29, further comprising: a transceiver module;
the transceiver module is configured to send the target world coordinate to the target vehicle when the target object in the second image is the target vehicle.
31. An electronic device, comprising:
the acquisition module is used for acquiring a second image by utilizing the camera device, wherein the second image comprises a target object;
the acquisition module is used for acquiring the target pixel coordinates of the target object in the second image;
the processing module is used for processing the target pixel coordinate by utilizing a homography matrix of the camera device to obtain a target world coordinate of the target object;
the homography matrix of the camera device is used for describing the mapping relation between the pixel coordinate system and the world coordinate system of the camera device.
32. The electronic device of claim 31, further comprising: a calibration module to:
in the process of moving the calibration equipment, acquiring the pixel coordinates of the calibration point in the first image; the calibration device is contained in the first image, and the calibration point is determined by the calibration device;
acquiring world coordinates of the calibration point, wherein the world coordinates are determined by positioning equipment, and the positioning equipment is carried on the calibration equipment;
and acquiring a homography matrix of the camera device based on the pixel coordinates and world coordinates of the plurality of calibration points.
33. The electronic device according to claim 32, wherein the calibration device is equipped with a coordinate marking device for determining a virtual coordinate system; the calibration point is positioned on the intersection line of the plane where the coordinate marking equipment is positioned and the ground;
the calibration module is specifically configured to:
acquiring a virtual coordinate of the calibration point in the virtual coordinate system;
acquiring a homography matrix of the first image, wherein the homography matrix of the first image is used for describing a mapping relation between the virtual coordinate system and the pixel coordinate system in the first image;
and processing the virtual coordinate of the calibration point by using the homography matrix of the first image to obtain the pixel coordinate of the calibration point.
34. The electronic device according to claim 32 or 33, wherein the calibration module is specifically configured to:
acquire, from the positioning data of the positioning device, the world coordinates of the positioning device at the time indicated by the timestamp of the first image; and
acquire the world coordinates of the calibration point based on the world coordinates of the positioning device.
35. The electronic device of claim 32, wherein the calibration point, the optical center of the camera device, and a positioning point are located on the same straight line, the positioning point being located on the positioning device;
the calibration module is specifically configured to:
acquire the pixel coordinates of the positioning point in the first image to obtain the pixel coordinates of the calibration point.
36. The electronic device of claim 35, wherein the calibration module is specifically configured to:
acquire the world coordinates of the positioning point, wherein the world coordinates of the positioning point are determined by the positioning data of the positioning device;
acquire the world coordinates of the camera device; and
acquire the world coordinates of the calibration point based on the world coordinates of the positioning point and the world coordinates of the camera device.
CN202010171397.6A 2020-03-12 2020-03-12 Positioning method and system, electronic device and computer readable storage medium Withdrawn CN113393520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010171397.6A CN113393520A (en) 2020-03-12 2020-03-12 Positioning method and system, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010171397.6A CN113393520A (en) 2020-03-12 2020-03-12 Positioning method and system, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113393520A true CN113393520A (en) 2021-09-14

Family

ID=77615628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010171397.6A Withdrawn CN113393520A (en) 2020-03-12 2020-03-12 Positioning method and system, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113393520A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008014940A (en) * 2006-06-08 2008-01-24 Fast:Kk Camera calibration method for camera measurement of planar subject and measuring device applying same
CN104821956A (en) * 2015-03-31 2015-08-05 百度在线网络技术(北京)有限公司 Positioning method and device based on electronic equipment
CN107464264A (en) * 2016-06-02 2017-12-12 南京理工大学 A kind of camera parameter scaling method based on GPS
CN107481283A (en) * 2017-08-01 2017-12-15 深圳市神州云海智能科技有限公司 A kind of robot localization method, apparatus and robot based on CCTV camera
CN110213488A (en) * 2019-06-06 2019-09-06 腾讯科技(深圳)有限公司 A kind of localization method and relevant device


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030202A (en) * 2023-03-29 2023-04-28 四川弘和通讯集团有限公司 Three-dimensional image reconstruction method and device, electronic equipment and storage medium
CN116704046A (en) * 2023-08-01 2023-09-05 北京积加科技有限公司 Cross-mirror image matching method and device
CN116704046B (en) * 2023-08-01 2023-11-10 北京积加科技有限公司 Cross-mirror image matching method and device

Similar Documents

Publication Publication Date Title
US10921803B2 (en) Method and device for controlling flight of unmanned aerial vehicle and remote controller
CN111127563A (en) Combined calibration method and device, electronic equipment and storage medium
CN109817022B (en) Method, terminal, automobile and system for acquiring position of target object
WO2020146039A1 (en) Robust association of traffic signs with a map
CN104469677A (en) Moving track recording system and method based on intelligent terminal
EP3341685A1 (en) Cradle rotation insensitive inertial navigation
CN113393520A (en) Positioning method and system, electronic device and computer readable storage medium
US10993078B2 (en) Tracking system for tracking and rendering virtual object corresponding to physical object and the operating method for the same
US20170227361A1 (en) Mobile mapping system
CN112422927A (en) Real-time combination method and system for unmanned aerial vehicle shooting video and map
Dabove et al. Positioning techniques with smartphone technology: Performances and methodologies in outdoor and indoor scenarios
JP2019149652A (en) Information transmission device, communication system and communication terminal
CN109238224B (en) Unmanned aerial vehicle flying height difference eliminating method, device and system and intelligent terminal
CN111354037A (en) Positioning method and system
WO2021103729A1 (en) Inter-terminal positioning method and apparatus
CN103557834A (en) Dual-camera-based solid positioning method
US20230334850A1 (en) Map data co-registration and localization system and method
CN113808199B (en) Positioning method, electronic equipment and positioning system
Sokolov et al. Development of software and hardware of entry-level vision systems for navigation tasks and measuring
CN111028516A (en) Traffic police duty information transmission method, system, medium and device
CN111210471B (en) Positioning method, device and system
CN114777772A (en) Indoor positioning system based on infrared camera and high accuracy IMU
CN113237464A (en) Positioning system, positioning method, positioner, and storage medium
WO2018079043A1 (en) Information processing device, image pickup device, information processing system, information processing method, and program
CN113574346A (en) Positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220211

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

TA01 Transfer of patent application right
WW01 Invention patent application withdrawn after publication

Application publication date: 20210914

WW01 Invention patent application withdrawn after publication