CN114926316A - Distance measuring method, distance measuring device, electronic device, and storage medium

Publication number: CN114926316A
Authority: CN (China)
Prior art keywords: distance, coordinate, pixel, image, plane
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Application number: CN202210597963.9A
Other languages: Chinese (zh)
Inventors: 宗泽亮, 吴佳飞
Current Assignee: Shanghai Sensetime Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210597963.9A

Classifications

    • G06T1/0007 Image acquisition (general purpose image data processing)
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods (image analysis)
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods (image analysis)
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10004 Still image; photographic image (image acquisition modality)


Abstract

Embodiments of the disclosure provide a distance measuring method, a distance measuring device, an electronic device, and a storage medium. The method includes: acquiring a target image captured by a camera device, the target image containing a first object and a second object whose distance is to be measured; acquiring a first pixel point and a second pixel point corresponding respectively to the first object and the second object in the target image; acquiring a first coordinate and a second coordinate of the first pixel point and the second pixel point on a physical plane, based on a predetermined mapping relationship between the pixel plane of the camera device and the physical plane; and obtaining the distance between the first object and the second object from the first coordinate and the second coordinate. The method can improve the accuracy and stability of distance measurement.

Description

Distance measuring method, distance measuring device, electronic apparatus, and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a distance measuring method and apparatus, an electronic device, and a storage medium.
Background
In daily life, it is often necessary to measure the distance to a target object. Distance measurement is generally performed in either a monocular or a binocular vision mode. In the binocular mode, images captured by a left camera and a right camera are matched to obtain the three-dimensional coordinates of the measured object in a world coordinate system, from which the actual physical distance between two pixel points is calculated. This mode has high ranging accuracy, but the stability of its results is low and it is easily disturbed by the environment. The monocular mode, by contrast, yields relatively stable ranging results but relatively low accuracy.
Disclosure of Invention
The embodiments of the disclosure provide a distance measuring method and device, an electronic device, and a storage medium. The coordinates, on a physical plane, of two objects in an image captured by a camera device are obtained through a mapping relationship between the pixel plane of the camera device and the physical plane, and the distance between the two objects is derived from these coordinates. This realizes monocular visual distance measurement and can improve the accuracy and stability of the measurement.
The embodiment of the disclosure provides a distance measuring method, which includes:
acquiring a target image captured by a camera device, wherein the target image comprises a first object and a second object whose distance is to be measured;
acquiring a first pixel point and a second pixel point corresponding respectively to the first object and the second object in the target image;
respectively acquiring a first coordinate and a second coordinate of a first pixel point and a second pixel point in a physical plane based on a predetermined mapping relation between the pixel plane and the physical plane of the camera device;
and obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate.
In a possible implementation manner, before the step of respectively obtaining the first coordinate and the second coordinate of the first pixel point and the second pixel point in the physical plane based on the predetermined mapping relationship between the pixel plane and the physical plane of the image capturing device, the method further includes:
and calibrating the camera device according to the imaging factor of the camera device, and determining the mapping relation between the pixel plane and the physical plane of the camera device.
In one possible implementation, the step of determining a mapping relationship between a pixel plane and a physical plane of the image pickup device includes:
acquiring a homography matrix based on an imaging factor according to the imaging factor of the camera device;
and normalizing elements in the homography matrix, and determining the mapping relation between the pixel plane and the physical plane of the camera.
In a possible implementation manner, the imaging factor includes an attitude angle and internal parameters of the camera device, and the step of obtaining the homography matrix based on the imaging factor according to the imaging factor of the camera device includes:
obtaining a rotation matrix according to the attitude angle;
obtaining an internal reference matrix according to the internal reference of the camera device;
and obtaining a homography matrix according to the rotation matrix and the internal reference matrix.
In a possible implementation manner, the imaging factor further includes a scale factor and an image distortion factor, and after the step of obtaining a homography matrix based on the imaging factor according to the imaging factor of the image capturing apparatus, the method further includes:
acquiring a reference image acquired by a camera device, wherein the reference image comprises a first reference object and a second reference object, and the pixel distance and the actual physical distance of the first reference object and the second reference object are known constants;
and obtaining the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object.
In one possible implementation manner, the method further includes:
and carrying out distortion removal processing on the reference image.
In a possible implementation manner, the step of obtaining an optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object includes:
obtaining, according to the homography matrix, the theoretical distance on the physical plane corresponding to the pixel distance between the first reference object and the second reference object;
and according to the scale factor, the image distortion factor and the homography matrix, minimizing the difference between the theoretical distance and the actual physical distance to obtain the optimal parameter value of the imaging factor.
In a possible implementation manner, the step of obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate includes:
acquiring the Euclidean distance between the first coordinate and the second coordinate;
and multiplying the Euclidean distance by the scale factor of the camera device to obtain the distance between the first object and the second object.
In some embodiments, the disclosed embodiments provide a distance measuring device comprising:
the image acquisition unit is used for acquiring a target image captured by the camera device, the target image comprising a first object and a second object whose distance is to be measured;
the pixel point acquisition unit is used for acquiring a first pixel point and a second pixel point corresponding respectively to the first object and the second object in the target image;
the coordinate acquisition unit is used for respectively acquiring a first coordinate and a second coordinate of a first pixel point and a second pixel point on a physical plane based on a predetermined mapping relation between the pixel plane and the physical plane of the camera device;
and the distance acquisition unit is used for acquiring the distance between the first object and the second object according to the first coordinate and the second coordinate.
In some embodiments, the disclosed embodiments provide an electronic device comprising a processor and a memory. Wherein the memory is for storing computer readable instructions and the processor is for calling the instructions stored in the memory to execute the instructions of the steps in the method of the first aspect.
In some embodiments, the disclosed embodiments provide a chip device comprising a processor and a memory. Wherein the memory is for storing computer readable instructions and the processor is for calling instructions stored in the memory to perform the instructions of the steps in the method of the first aspect.
In some embodiments, the disclosed embodiments provide a computer readable storage medium for storing a computer program which, when executed by a processor, implements the method as in the first aspect.
In some embodiments, the disclosed embodiments provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform a method as in the first aspect.
The implementation of the embodiment of the disclosure has the following beneficial effects:
It can be seen that after the target image captured by the camera device is acquired, the first pixel point and the second pixel point corresponding to the first object and the second object whose distance is to be measured are acquired in the target image. Based on the predetermined mapping relationship between the pixel plane of the camera device and the physical plane, the first coordinate and the second coordinate of the first pixel point and the second pixel point on the physical plane are acquired respectively. That is, the coordinates on the physical plane of two objects in an image captured by the camera device are obtained through the mapping relationship between the pixel plane of the camera device and the physical plane, and the distance between the two objects is derived from them. This realizes monocular visual distance measurement and can improve the accuracy and stability of the measurement.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present disclosure, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram illustrating a camera imaging system according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a distance measurement system according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a distance measuring method according to an embodiment of the disclosure;
fig. 4 is a schematic view of a scene of a homography matrix provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a distance measuring device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The terms "first" and "second," and the like in the description, claims, and drawings of the present application are used solely to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. Such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements recited, but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
"at least one" means one or more, "a plurality" means two or more, "at least two" means two or three and three or more, "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b," a and c, "" b and c, "or" a and b and c.
Some concepts related to embodiments of the present disclosure are explained first. The process of object imaging is essentially the conversion of several coordinate systems, first converting one point in space from the world coordinate system to the camera coordinate system, then converting (projecting) the camera coordinate system to the image physical coordinate system, and finally converting the image physical coordinate system to the image pixel coordinate system.
Referring to fig. 1, fig. 1 is a schematic diagram of a camera imaging system according to an embodiment of the disclosure. As shown in fig. 1, the world coordinate system gives the coordinates of an object in the real world, varies with the size and position of the object, and its unit is a unit of length. A point in the world coordinate system is P(X_W, Y_W, Z_W), where the X_W, Y_W and Z_W coordinate axes are mutually perpendicular. The camera coordinate system takes the optical center O_c as its origin and the optical-axis direction as its z-axis, with the x-axis and y-axis parallel to the x and y directions of the image; its axes may be referred to as X_c, Y_c and Z_c, and its unit is a unit of length. The image physical coordinate system takes the intersection of the main optical axis and the image plane as its coordinate origin O_i; its x-axis and y-axis directions are as shown in fig. 1, and its unit is a unit of length. The image pixel coordinate system takes a vertex of the image as its coordinate origin, with the U direction and V direction parallel to the x and y directions respectively; its axes may be called the U-axis and V-axis, and its unit is the pixel.
In the embodiments of the present disclosure, the plane corresponding to the image pixel coordinate system is referred to as a pixel plane, such as a plane corresponding to the U axis and the V axis in fig. 1. The plane corresponding to the world coordinate system is called a physical plane. The parameters in the camera that affect the imaging of the object may be referred to as the imaging factor of the camera. The imaging factors may include an internal parameter, attitude angle, image distortion factor, scale factor, and the like.
The internal parameters describe parameters inside the camera device and may include the focal lengths (f_x, f_y) of the camera device, which are generally equal. The internal parameters may also include the principal point coordinates (u_0, v_0) of the camera device, which represent the number of horizontal and vertical pixels by which the center pixel coordinate of the image differs from the origin pixel coordinate of the image. Here f_x = f * S_x, where f is the focal length and S_x is the scale factor in the horizontal direction, which may be equal to 1/d_x, with d_x indicating how many length units one pixel occupies in the x direction. The scale factor s describes the magnitude of the actual physical value represented by one pixel and may be equal to the ratio between pixels and millimeters.
The attitude angle describes the position and orientation of the camera in some three-dimensional space, and may include the rotation angles of the camera with respect to the X, Y, and Z axes of the coordinate system, which are pitch, yaw, and roll, respectively, and may be described as (α, β, γ).
The image distortion factor may also be referred to as distortion parameters. Among the distortion parameters (k_1, k_2, k_3, p_1, p_2), (k_1, k_2, k_3) are radial distortion coefficients and (p_1, p_2) are tangential distortion coefficients. Radial distortion arises during the conversion from camera coordinates to image physical coordinates, because light rays are bent more strongly far from the center of the lens than near it. Tangential distortion arises during camera fabrication, because the lens is not perfectly parallel to the image plane. An undistorted coordinate (U, V) in the image pixel coordinate system falls, after radial and tangential distortion, on the coordinate (U_d, V_d). That is, the relationship between the real image imgR and the distorted image imgD is: imgR(U, V) = imgD(U_d, V_d).
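For illustration only (this sketch is not part of the original disclosure), the commonly used Brown-Conrady model is one concrete form of the radial and tangential distortion described above; the function name and the use of normalized coordinates are assumptions:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Map an undistorted normalized image coordinate (x, y), i.e. a
    coordinate relative to the principal point and divided by the focal
    length, to its distorted position (x_d, y_d)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial term
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```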
The world coordinate system can be converted into the camera coordinate system through an external parameter matrix (or called a rotation matrix) corresponding to the attitude angle. And the camera coordinate system is converted into an image pixel coordinate system and an image physical coordinate system through an internal reference matrix corresponding to the internal reference. The image physical coordinate system is transformed into the pixel coordinate system by a pixel transformation matrix (related to scale factors).
The rotation matrix R_(α,β,γ) corresponding to the attitude angle (α, β, γ) can be described by formula (1):

$$R_{(\alpha,\beta,\gamma)} = R_Y(\alpha) \cdot R_X(\beta) \cdot R_Z(\gamma) \tag{1}$$

where R_Y(α) is the matrix corresponding to a rotation of the three-dimensional coordinate axes by α about the Y axis, R_X(β) is the matrix corresponding to a rotation by β about the X axis, and R_Z(γ) is the matrix corresponding to a rotation by γ about the Z axis.
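As a sketch of formula (1) in code (function name and angle convention are illustrative assumptions):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Compose R = R_Y(alpha) @ R_X(beta) @ R_Z(gamma) per formula (1);
    angles are in radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    R_y = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])   # about Y
    R_x = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])   # about X
    R_z = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # about Z
    return R_y @ R_x @ R_z
```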
The internal reference matrix K corresponding to the internal parameters is an upper triangular matrix that mainly describes attributes such as the focal lengths, principal point coordinates and scale factors. In its common form it can be expressed by the following formula (2):

$$K = \begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} \tag{2}$$
The homogeneous coordinates of a pixel point pair on images of two different views can be related by a projective transformation, that is: x_1 = H * x_2. Here H may be referred to as a homography matrix. The homography matrix can be understood simply as describing the positional mapping of an object between one coordinate system and another, e.g., between a physical plane and a pixel plane. The non-homogeneous coordinates of a point on a two-dimensional image are (x, y) and its homogeneous coordinates are (x, y, 1), which can also be written as (x/z, y/z, 1) or (x, y, z).
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 2, fig. 2 is a schematic view of a scene of a distance measuring system according to an embodiment of the present disclosure. As shown in fig. 2, the distance measurement system may include a terminal apparatus 201, a server 202, an image pickup device 203, a first object 204, a second object 205, and the like. The terminal device 201 may include a Personal Computer (PC), a notebook computer, an all-in-one machine, a palmtop computer, a mobile phone, a tablet computer (pad), a smart television playing terminal, a vehicle-mounted terminal, or a portable device.
The server 202 may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud service, cloud database, cloud computing, cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), and big data and artificial intelligence platform. Server 202 may alternatively be implemented as a server cluster comprised of multiple servers.
The image capturing device 203 may include the monitoring apparatus shown in fig. 2, and may further include a camera, a video camera, and the like, and may specifically be a monocular camera. The camera 203 may be a fixed position device, such as a camera in a roadside device. Or may be a device at a non-fixed location, such as a camera in a terminal device held by a user.
The first object 204 and the second object 205 may be two different objects or may be two different components on the same object. The first object 204 and the second object 205 may be any two objects of which the distance is to be measured, such as a distance between a vehicle and a vehicle, a distance between a building and a building, a distance between a vehicle and a building, and the like, and are not limited herein. For example, the first object 204 and the second object 205 are both vehicles, and the distance measurement method provided by the embodiment of the disclosure may be applied to aspects such as automatic driving or collision prevention, and may implement interaction between the vehicles and external information.
The number and types of the terminal apparatus 201, the server 202, the image pickup device 203, the first object 204, and the second object 205 are not limited in the embodiment of the present disclosure. The server 202 may serve a plurality of cameras 203 simultaneously, and the cameras 203 may capture target images including a first object 204 and a second object 205 at a distance to be measured. Reference images containing a first reference object and a second reference object (not shown in fig. 2) at known distances may also be acquired.
The first reference object and the second reference object may be two different objects or may be two different parts of the same object. The actual physical distance between the first reference object and the second reference object is a known constant. For example, roads of known width, signs of known spacing distance, traffic lights of known spacing distance, trees of known spacing distance, utility poles of known spacing distance, houses of known spacing distance, etc. It should be noted that the first reference object and the second reference object may be objects within the shooting range of the image pickup device 203, or may be objects that are placed within the shooting range of the image pickup device 203 at the time of shooting.
A plurality of point pairs of known physical distance may exist between the first reference object and the second reference object, so that an acquired reference image containing them includes pixel point pairs of known pixel distance. Optionally, the number of pixel point pairs with known pixel distance and actual physical distance between the first reference object and the second reference object may be 7. It can be understood that the greater the number of point pairs of known physical distance, the easier it is to acquire pixel point pairs on the image captured by the camera device, so that the mapping relationship between the pixel plane and the physical plane of the camera device can be obtained based on the pixel distances of the pixel point pairs and their actual physical distances on the physical plane, improving the accuracy of the mapping relationship. If the pixel point pairs lie in different directions, for example the direction of pixel point pair A is the horizontal-axis direction, the direction of pixel point pair B is the vertical-axis direction, and the direction of pixel point pair C is the vertical direction, then distance measurement can be performed using pixel point pairs in different directions, further improving the accuracy of the measurement.
As shown in fig. 2, the camera 203 may capture a target image 206 including a first object 204 and a second object 205, and partial images of the first object 204 and the second object 205 in the target image 206 are an image 207 and an image 208, respectively. The target image 206 may then be transmitted to the terminal apparatus 201 or the server 202, or the terminal apparatus 201 transmits the target image 206 to the server 202, so that the terminal apparatus 201 or the server 202 determines coordinates of the first object 204 and the second object 205 on the physical plane, respectively, corresponding to the image positions in the target image 206 according to the mapping relationship between the pixel plane and the physical plane of the image pickup device 203, so that the distance between the first object 204 and the second object 205 may be determined according to the coordinates of the first object 204 and the second object 205 on the physical plane, respectively.
For example, the camera may be a camera in the roadside apparatus, the first object may be a vehicle within a shooting range of the camera, and the second object may be another vehicle within the shooting range of the camera. In this manner, a first image including the reference object, the first object, and the second object can be acquired by the image pickup device.
Referring to fig. 3, fig. 3 is a schematic flow chart of a distance measuring method according to an embodiment of the disclosure. The distance measuring method may be performed by a distance measuring device, which may be the terminal device and/or the server in fig. 2, or a computer program (including program code or program instructions), or the like. As shown in fig. 3, the method includes steps S301 to S304, wherein:
s301: acquiring a target image acquired by a camera device; the target image comprises a first object and a second object of which the distance is to be measured.
The target image may be any one of images of the first object and the second object with the distance to be measured, which are acquired by the camera device, or one frame of image in the target video acquired by the camera device. The first object and the second object may refer to the foregoing or the following, and are not described herein again.
S302: acquiring a first pixel point and a second pixel point corresponding respectively to the first object and the second object in the target image.
The first pixel point and the second pixel point are the coordinate values of the first object and the second object in the target image, respectively. The positions of the first pixel point and the second pixel point are not limited; they may be midpoints of the objects or boundary points. For example, the first object is a first vehicle, and the first pixel point is the position in the target image corresponding to the midpoint of the first vehicle's head; the second object is a second vehicle, and the second pixel point is the position in the target image corresponding to the midpoint of the second vehicle's tail, so that the distance between the head of the first vehicle and the tail of the second vehicle can be obtained, which can be used to avoid a vehicle collision. For another example, the first pixel point is the position in the target image corresponding to the door side of the first vehicle, and the second pixel point is the position in the target image corresponding to a point on a parking-lot wall, so that the distance between the door side of the first vehicle and the wall can be obtained, which can be used to avoid hitting the wall when the vehicle door is opened.
S303: based on a predetermined mapping relation between a pixel plane and a physical plane of the camera device, a first coordinate and a second coordinate of the first pixel point and the second pixel point on the physical plane are respectively obtained.
The mapping relationship between the pixel plane and the physical plane of the camera device describes the conversion between the coordinates of an object on the pixel plane of an image captured by the camera device and its coordinates on the physical plane. Once this mapping relationship has been obtained, the coordinates on the physical plane of any pixel point in an image captured by the camera device can be obtained, so that the first coordinate and the second coordinate of the first pixel point and the second pixel point on the physical plane can be acquired respectively.
It should be noted that when the pixel plane of the target image captured by the camera device is consistent with the pixel plane in the mapping relationship, which can be understood as the imaging factor of the camera device not having changed, the first coordinate and the second coordinate of the first pixel point and the second pixel point in the physical plane can be acquired directly. Otherwise, a mapping relationship between the pixel plane at the time the target image was captured and the pixel plane in the existing mapping relationship is determined, or a mapping relationship between the pixel plane at the time of capture and the physical plane is obtained, and the first coordinate and the second coordinate of the first pixel point and the second pixel point in the physical plane are obtained according to that mapping relationship.
The present application is not limited to the method for determining the mapping relationship between the pixel plane and the physical plane of the image capturing apparatus, and in a possible example, before step S303, the method may further include: and calibrating the camera device according to the imaging factor of the camera device, and determining the mapping relation between the pixel plane and the physical plane of the camera device.
The imaging factor of the camera device is a parameter that affects the imaging of an object; for its types, refer to the foregoing or the following, which are not repeated here. The mapping relationship between the pixel plane and the physical plane of the camera device is related to the imaging factors of the camera device and can be understood as a mathematical model corresponding to those factors: its parameters are the imaging factors of the camera device, its input is a pixel point in an image, and its output is the coordinates of that pixel point on the physical plane. The calibration method can construct this mathematical model based on the relationships between the imaging factors and the coordinates and on the mapping relationships between the coordinate systems. The coordinates of pixel points on the pixel plane are then used as input and their coordinates on the physical plane as output to solve for the imaging factors of the camera device, which makes the parameter estimation of the imaging factors more accurate and facilitates a comprehensive analysis of the various imaging factors that affect imaging. The solving process for the imaging factors can be iterated through an estimation-of-distribution algorithm or a particle swarm algorithm, which is not limited here. Calibrating the camera device with its imaging factors can improve the accuracy of determining the mapping relationship between the pixel plane and the physical plane of the camera device, and thus the accuracy of the distance measurement.
In one possible example, the method of determining the mapping relationship between the pixel plane and the physical plane of the image pickup apparatus may include the steps of: acquiring a homography matrix based on an imaging factor according to the imaging factor of the camera device; and normalizing elements in the homography matrix, and determining the mapping relation between the pixel plane and the physical plane of the camera.
The homography matrix may be used to describe a position mapping relationship between the pixel plane and the physical plane of the object, which may refer to the foregoing or the following, and is not described herein again. The method for obtaining the homography matrix is not limited, and the homography matrix can be obtained by calculation according to the imaging factor and the coordinates of the pixel points with known physical distance and pixel distance on the pixel plane and the physical plane respectively. Or in a possible example, the imaging factor comprises an attitude angle and an internal parameter of the camera device, and the method for obtaining the homography matrix based on the imaging factor according to the imaging factor of the camera device comprises the following steps: obtaining a rotation matrix according to the attitude angle; obtaining an internal reference matrix according to the internal reference of the camera device; and obtaining a homography matrix according to the rotation matrix and the internal reference matrix.
The internal reference, attitude angle, rotation matrix, and internal reference matrix may be referred to as described above or below. The relationship between the internal reference and the internal reference matrix, and the relationship between the attitude angle and the rotation matrix, can be referred to the aforementioned equations (1) and (2).
The homography matrix can be illustrated with reference to fig. 4. As shown in fig. 4, images 101 and 102 contain images 103 and 104 of the reference object 10 taken from different perspectives, respectively. A point x on the reference object 10 corresponds to pixel point x_1 in image 103 and pixel point x_2 in image 104. The coordinates of pixel point x_1 may be (u_1, v_1, 1), those of x_2 may be (u'_1, v'_1, 1), and x_1 = H * x_2.
It should be noted that pixel points are described by coordinate values on the image, and a pixel point pair comprises two pixel points. Acquiring the homography matrix requires at least 4 pixel point pairs. For example, the matrix H_4point formed from the 4 pixel point pairs in fig. 4 comprises (Δu_1, Δv_1), (Δu_2, Δv_2), (Δu_3, Δv_3) and (Δu_4, Δv_4); mapping the elements of H_4point yields the homography matrix H_matrix. H_matrix is a 3 x 3 matrix comprising 9 elements: H_11, H_12, H_13, H_21, H_22, H_23, H_31, H_32, H_33. After the homography matrix has been acquired, the coordinates on the physical plane of pixel point pairs in other images captured by the camera device can be obtained from it.
The method of obtaining the homography matrix H from the internal reference matrix K and the rotation matrix R_(α,β,γ) can be calculated with reference to the following formula (3):

$$H = K \cdot R_{(\alpha,\beta,\gamma)} \cdot K^{-1} \tag{3}$$
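Continuing the earlier sketch (function names are illustrative; `rotation_matrix` is the helper defined above), formulas (2) and (3) translate directly into code:

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Internal reference matrix K per formula (2)."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def homography_from_factors(alpha, beta, gamma, fx, fy, u0, v0):
    """Homography per formula (3): H = K @ R @ inv(K)."""
    K = intrinsic_matrix(fx, fy, u0, v0)
    R = rotation_matrix(alpha, beta, gamma)  # from the earlier sketch
    return K @ R @ np.linalg.inv(K)
```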
it can be understood that, in this example, the homography matrix is obtained according to the rotation matrix obtained by the attitude angle and the internal reference matrix obtained by the internal reference, and the accuracy of obtaining the homography matrix can be improved.
The method for normalizing the elements in the homography matrix is not limited in the present application. For example, the element in the last row and last column of the homography matrix may be taken as a first element, and each element in the homography matrix divided by this first element to obtain a reference matrix. The product between the reference matrix and the homogeneous coordinates of a pixel point is then obtained; the coordinate values of the pixel point on the physical plane may be known constants. In this way, normalization of the elements in the homography matrix is realized through the homogeneous coordinates of the pixel points and the reference matrix corresponding to the homography matrix, which can improve the accuracy of acquiring the homography matrix and hence the accuracy of the distance measurement.
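A hedged sketch of this normalization and of the mapping it enables (assuming a homography that maps pixel coordinates to physical-plane coordinates; names are illustrative):

```python
import numpy as np

def normalize_homography(H):
    """Divide every element by H[2, 2], the 'first element' above."""
    return H / H[2, 2]

def pixel_to_plane(H_norm, u, v):
    """Map a pixel (u, v) to physical-plane coordinates: multiply the
    normalized homography by the homogeneous coordinate (u, v, 1),
    then dehomogenize."""
    x, y, w = H_norm @ np.array([u, v, 1.0])
    return x / w, y / w
```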
It can be understood that the homography matrix based on the imaging factors is obtained according to the imaging factors of the camera device; the elements in the homography matrix are then normalized, and the mapping relationship between the pixel plane and the physical plane of the camera device is determined. That is, the mapping relationship is obtained from a homography matrix that relates the pixel plane and the physical plane of the camera device, which can improve the accuracy of determining the mapping relationship and hence the accuracy of the distance measurement.
In a possible implementation manner, the imaging factor further includes a scale factor and an image distortion factor, and after the homography matrix based on the imaging factor is obtained according to the imaging factor of the image capturing device, the method further includes the following steps: acquiring a reference image acquired by a camera device; and obtaining the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object.
The reference image comprises a first reference object and a second reference object, and the pixel distance and the actual physical distance of the first reference object and the second reference object are known constants. The image distortion factor, the reference image, the first reference object, the second reference object, the scale factor, and the distortion parameter may refer to the foregoing or the following, and are not repeated herein. The optimal parameter value of the imaging factor may be understood as an optimal value of the imaging factor solved. It can be understood that, after the optimal parameter values of the imaging factors are obtained, the pixel plane and the physical plane of the image capturing device may be calibrated based on the optimal parameter values of the imaging factors, so as to obtain the mapping relationship between the pixel plane and the physical plane of the image capturing device.
The method for obtaining the optimal parameter value of the imaging factor in the embodiments of the present disclosure is not limited, and in a possible implementation manner, obtaining the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object includes the following steps: obtaining the theoretical distance of the pixel distance of the first reference object and the second reference object on the physical plane according to the homography matrix; and according to the scale factor, the image distortion factor and the homography matrix, minimizing the difference between the theoretical distance and the actual physical distance to obtain the optimal parameter value of the imaging factor.
The theoretical distance is the distance of the pixel distance of the first reference object and the second reference object on the physical plane, which is obtained on the basis of the homography matrix. The theoretical distance may be equal to a product between the homography matrix and a pixel distance of the first reference object and the second reference object. The closer the theoretical distance is to the actual physical distance, the closer the value of the imaging factor is to the actual value. Based on the method, the optimal parameter value of each imaging factor is obtained by a method of minimizing the difference between the theoretical distance and the actual physical distance according to the scale factor, the image distortion factor and the homography matrix. On the basis of obtaining the homography matrix, the mapping relation between the pixel plane and the physical plane of the camera device is obtained by combining the scale factor and the image distortion factor, and the mapping relation comprises the optimal parameter value of the imaging factor, so that the accuracy of obtaining the mapping relation can be improved, and the accuracy of distance measurement can be improved.
Minimizing the difference between the theoretical distance and the actual physical distance can be referred to the following equation (4):
$$\min_{\theta}\;\sum_{k=1}^{N_{ls}}\Big|\,s\cdot\lVert H p_k - H q_k\rVert_2 - \lVert P_k - Q_k\rVert_2\,\Big| \tag{4}$$

where θ denotes the imaging factors to be solved, N_ls is the number of reference pixel point pairs between the first reference object and the second reference object and may be greater than or equal to 7, ||P_k - Q_k||_2 is the actual physical distance of the k-th reference pixel point pair, and s * ||H p_k - H q_k||_2 is the theoretical distance of that pair on the physical plane, obtained by mapping its pixel coordinates p_k and q_k through the homography matrix H and multiplying by the scale factor s. The actual physical distance of each reference pixel point pair is known, as is its pixel distance in the reference image. In this way, the minimum difference between the theoretical distance and the actual physical distance can be obtained through formula (4), so that the optimal parameter values of the imaging factors are obtained from it.
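As a hedged sketch of how formula (4) might be evaluated in code (names and the exact parameterization are illustrative assumptions, not the patent's reference implementation; `pixel_to_plane` is the helper from the earlier sketch):

```python
import numpy as np

def calibration_cost(H_norm, s, pixel_pairs, true_dists):
    """Sum over reference pairs of |theoretical - actual| distance, as in
    formula (4). pixel_pairs: iterable of ((u1, v1), (u2, v2)) undistorted
    pixel coordinates; true_dists: the known physical distances."""
    total = 0.0
    for (p, q), d_true in zip(pixel_pairs, true_dists):
        x1, y1 = pixel_to_plane(H_norm, *p)
        x2, y2 = pixel_to_plane(H_norm, *q)
        d_theory = s * np.hypot(x1 - x2, y1 - y2)  # scaled plane distance
        total += abs(d_theory - d_true)
    return total
```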
The optimal parameter value of an imaging factor may be a value selected from a reference range, where the reference range of the imaging factor may be obtained from an initial value of the imaging factor and a range threshold. The initial value of an imaging factor can be estimated from the parameters of the camera device, its basic form, and the like. For example, (f_x, f_y) in the internal parameters can be obtained based on the focal length of the camera device; (u_0, v_0) in the internal parameters can be obtained based on the focal position of the camera device, and its initial value may be (0, 0); when the camera device is in a top-view attitude, the pitch angle can be 180 degrees and the yaw and roll angles can be 0; the distortion parameters are usually small, and their initial values may be 0; the initial value of the scale factor may be equal to the quotient of the image length and the number of pixels; and so on. The range threshold may be estimated according to the type of the imaging factor; for example, the range threshold of an internal parameter of the camera device may be 100, the range threshold of a distortion parameter may be 0.5, and the range threshold of the scale factor may be 0.1. Illustratively, if the focal length is 1000 and the range threshold is 100, the reference range of the focal length is (900, 1100).
In one possible example, the method for obtaining the optimal parameter values of the imaging factors may comprise the following iterative steps, sketched in code further below: selecting L reference values from the reference range of an imaging factor; obtaining, for each of the L reference values, the theoretical distance on the physical plane corresponding to the pixel distance between the first reference object and the second reference object; selecting a target value from the L reference values according to the difference between the theoretical distance and the actual physical distance; in response to the target value not satisfying a preset condition, narrowing the reference range of the imaging factor based on the target value and returning to the step of selecting L reference values from the reference range; or, in response to the target value satisfying the preset condition, taking the target value as the optimal parameter value of the imaging factor.
Here L is an integer greater than or equal to 2; the embodiments of the present disclosure do not limit its value, which may for example be 500. The imaging factors may include the scale factor, the image distortion factor, and the internal parameters and attitude angle, or the homography matrix determined by the internal parameters and attitude angle. The preset condition satisfied by the target value is likewise not limited; it may be that the number of iterations (of obtaining the target value) is greater than a threshold, for example 5, or that the difference between the previously obtained target value and the currently obtained target value is less than a threshold, for example 0.5.
It can be understood that, in this example, the target value is selected according to the difference between the theoretical distance and the actual physical distance, the reference range of the imaging factor is narrowed according to the target value, and the optimal reference value of the imaging factor is obtained through multiple iterations, which can improve the accuracy and efficiency of obtaining the optimal parameter value, and is beneficial to improving the accuracy of obtaining the mapping relationship between the pixel plane and the physical plane of the image pickup device.
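The iterative narrowing procedure can be sketched as follows (a hedged illustration for a single scalar factor under assumed names; in practice all imaging factors would be sampled jointly, and the patent does not prescribe this exact implementation):

```python
import numpy as np

def search_optimal(cost_fn, low, high, L=500, max_iters=5, shrink=0.5):
    """Sample L candidate values in [low, high], keep the one with the
    smallest cost (the 'target value'), shrink the range around it,
    and repeat until the iteration budget is spent."""
    best = None
    for _ in range(max_iters):
        candidates = np.random.uniform(low, high, size=L)
        costs = [cost_fn(c) for c in candidates]
        best = candidates[int(np.argmin(costs))]  # current target value
        half = (high - low) * shrink / 2          # narrow the range
        low, high = best - half, best + half
    return best
```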
In one possible implementation manner, the method further includes: and carrying out distortion removal processing on the reference image.
It can be understood that the image distortion factor has some effect on the imaging of an object. Therefore, before the optimal parameter values of the imaging factors are obtained from the reference image, the reference image can be subjected to distortion removal, which can improve the accuracy of obtaining the optimal parameter values. Specifically, the distortion removal may substitute the pixel points corresponding to the first reference object and the second reference object in the reference image into the formulas for the radial and tangential distortion coefficients to obtain the undistorted pixel coordinates. By taking the image distortion factor into account, the accuracy of obtaining the mapping relationship between the pixel plane and the physical plane of the camera device, and hence of the distance measurement, can be further improved.
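One common way to perform such distortion removal is OpenCV's point undistortion; a sketch with illustrative calibration values (note OpenCV orders the coefficients as (k_1, k_2, p_1, p_2, k_3)):

```python
import numpy as np
import cv2

# Intrinsic matrix and distortion coefficients from calibration
# (values here are purely illustrative).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.05, -0.01, 0.001, 0.001, 0.0])

pts = np.array([[[620.0, 340.0]], [[700.0, 380.0]]])  # distorted pixels
undist = cv2.undistortPoints(pts, K, dist, P=K)  # back to pixel coords
```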
S304: and obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate.
The method for obtaining the distance between the first object and the second object is not limited in the present application; the Euclidean distance, the Manhattan distance (also called the city-block distance), the Chebyshev distance, the Minkowski distance, the standardized Euclidean distance, the Mahalanobis distance, or the like between the first coordinate and the second coordinate may be obtained.
Taking the Euclidean distance as an example, the Euclidean metric is the true distance between two points in an m-dimensional space, or the natural length of a vector (i.e., the distance of a point from the origin); in two- and three-dimensional space it is the actual distance between two points. For a pixel point (u_1, v_1) and a pixel point (u_2, v_2), the Euclidean distance in two-dimensional space is given by the following formula (5):

$$d = \sqrt{(u_1 - u_2)^2 + (v_1 - v_2)^2} \tag{5}$$
in one possible implementation, step S304 may include: acquiring the Euclidean distance between the first coordinate and the second coordinate; and multiplying the Euclidean distance by the scale factor of the camera device to obtain the distance between the first object and the second object. Therefore, the scale factor of the camera device is considered, and the accuracy of distance acquisition is further improved.
In the method shown in fig. 3, after a target image acquired by a camera device is acquired, first pixel points and second pixel points corresponding to a first object and a second object of a distance to be measured in the target image are acquired. And respectively acquiring a first coordinate and a second coordinate of the first pixel point and the second pixel point on the physical plane based on a predetermined mapping relation between the pixel plane and the physical plane of the camera device. That is to say, the coordinates of the two objects on the physical plane in the image acquired by the camera device are obtained through the mapping relation between the pixel plane and the physical plane of the camera device, so that the distance between the two objects is obtained, monocular vision distance measurement is realized, and the accuracy and the stability of the distance measurement can be improved.
In one possible example, a first image and a second image acquired by a camera device are acquired; acquiring a third pixel point and a fourth pixel point which correspond to a third object in the first image and the second image respectively; respectively acquiring a third coordinate and a fourth coordinate of a third pixel point and a fourth pixel point on a physical plane based on a mapping relation between the pixel plane and the physical plane of the camera device; and acquiring the moving distance of the third object according to the third coordinate and the fourth coordinate.
Wherein the first image and the second image both contain a third object. The first image and the second image may be any image in the target video that includes a third object, and the third object may be the first object or the second object, or another object, which is not limited herein. The third pixel point may be a coordinate value of the third object in the first image, and the fourth pixel point may be a coordinate value of the third object in the second image, and specifically may be a position of a pixel point selected in the third object. The selection method of the third pixel point and the fourth pixel point can refer to the above, and can be a midpoint or a boundary point of an object. The third coordinate and the fourth coordinate are respectively corresponding positions of the third pixel point and the fourth pixel point on the physical plane based on the mapping relation between the pixel plane and the physical plane of the camera device. The moving distance of the third object may be equal to the euclidean distance between the third coordinate and the fourth coordinate, or may be equal to the product of the euclidean distance and the scale factor of the imaging device, or the like, which is not limited herein.
It can be understood that after the mapping relationship between the pixel plane and the physical plane of the image capturing device is obtained, the coordinates of the pixel points in the two images containing the same object can be obtained according to the mapping relationship, so as to obtain 2 different positions before and after the object. And then the moving distance of the object is obtained according to the coordinates corresponding to the 2 positions, so that the accuracy and the stability of obtaining the moving distance can be improved.
In one possible example, the time interval between the first image and the second image may be obtained by an absolute value of a difference between an acquisition time of the second image and an acquisition time of the first image; or the time interval between the first image and the second image may be derived from the sampling frame rate of the camera device and the number of video frames between the first image and the second image.
The sampling frame rate of the camera device describes its acquisition efficiency when capturing the target video; for example, the acquisition time interval between two adjacent images may be 0.2 seconds. It can be understood that the camera device captures a plurality of images according to its sampling frame rate; if the first image and the second image belong to the same video and are M video frames apart, the time interval between the first image and the second image can be obtained as the product of M and the frame interval (the reciprocal of the sampling frame rate). In this way, the acquisition time of each video frame does not need to be recorded, which improves the convenience and efficiency of obtaining the time interval. After the time interval and the moving distance are acquired, the moving distance may be divided by the time interval to obtain the moving speed of the third object. This moving speed corresponds to an average speed; since it is obtained on the basis of the mapping relationship between the pixel plane and the physical plane of the camera device, the accuracy and stability of the speed measurement can be improved.
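A hedged sketch of the speed estimate just described (all names illustrative; reuses the `measure_distance` helper assumed above):

```python
def moving_speed(H_norm, s, pixel_t1, pixel_t2, frames_apart, frame_rate):
    """Average speed of an object tracked across two video frames:
    physical displacement divided by the elapsed time."""
    distance = measure_distance(H_norm, s, pixel_t1, pixel_t2)
    interval = frames_apart / frame_rate  # e.g. 5 frames at 5 fps = 1 s
    return distance / interval
```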
Referring to fig. 5, fig. 5 is a schematic structural diagram of a distance measuring device according to an embodiment of the disclosure. As shown in fig. 5, the distance measuring apparatus 500 includes an image obtaining unit 501, a pixel point obtaining unit 502, a coordinate obtaining unit 503, a distance obtaining unit 504, and a relationship obtaining unit 505, wherein:
the image acquiring unit 501 is configured to acquire a target image acquired by the camera device, where the target image includes a first object and a second object with a distance to be measured;
the pixel point obtaining unit 502 is configured to obtain a first pixel point and a second pixel point respectively corresponding to the first object and the second object in the target image;
the coordinate obtaining unit 503 is configured to obtain a first coordinate and a second coordinate of the first pixel point and the second pixel point in the physical plane, respectively, based on a predetermined mapping relationship between the pixel plane and the physical plane of the image capturing apparatus;
the distance acquisition unit 504 is configured to acquire a distance between the first object and the second object according to the first coordinate and the second coordinate.
In a possible implementation manner, the relationship obtaining unit 505 is configured to calibrate the image capturing apparatus according to an imaging factor of the image capturing apparatus, and determine a mapping relationship between a pixel plane and a physical plane of the image capturing apparatus.
In a possible implementation manner, the relationship obtaining unit 505 is configured to obtain a homography matrix based on an imaging factor according to the imaging factor of the image capturing apparatus; and normalizing the elements in the homography matrix to determine the mapping relation between the pixel plane and the physical plane of the camera device.
In a possible implementation manner, the imaging factor includes an attitude angle and an internal reference of the camera device, and the relationship obtaining unit 505 is specifically configured to obtain a rotation matrix according to the attitude angle, obtain an internal reference matrix according to the internal reference of the camera device, and obtain the homography matrix according to the rotation matrix and the internal reference matrix.
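One common construction consistent with this step is sketched below, under the assumptions that the physical plane is the world plane Z = 0, that the attitude angle is given as yaw/pitch/roll Euler angles, and that a translation vector `t` is also available (the Euler convention and `t` are assumptions of this sketch, not prescribed by the embodiments):

```python
import numpy as np

def rotation_from_attitude(yaw, pitch, roll):
    """Rotation matrix from the attitude angle (Z-Y-X Euler convention,
    one of several possible conventions)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def intrinsic_matrix(fx, fy, cx, cy):
    """Internal reference matrix from focal lengths and principal point."""
    return np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

def plane_homography(K, R, t):
    """Plane-induced homography for the world plane Z = 0: H ~ K [r1 r2 t],
    normalized so that its last element equals 1 (cf. the normalization of
    the homography elements described above)."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]
```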
In a possible implementation manner, the imaging factor further includes a scale factor and an image distortion factor, and the image obtaining unit 501 is further configured to obtain a reference image collected by the camera device, where the reference image includes a first reference object and a second reference object, and the pixel distance and the actual physical distance between the first reference object and the second reference object are known constants; the relationship obtaining unit 505 is further configured to obtain the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix, and the physical distance between the first reference object and the second reference object.
In a possible implementation manner, the relation obtaining unit 505 is further configured to perform a distortion removal process on the reference image.
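A sketch of this step, assuming OpenCV is used (the function and the coefficient layout below are OpenCV's conventions, not mandated by the embodiments):

```python
import cv2

def remove_distortion(reference_image, K, dist_coeffs):
    """Distortion-removal processing: K is the internal reference matrix
    and dist_coeffs holds (k1, k2, p1, p2, k3) in OpenCV's convention."""
    return cv2.undistort(reference_image, K, dist_coeffs)
```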
In a possible implementation manner, the relationship obtaining unit 505 is specifically configured to map the pixel distance between the first reference object and the second reference object to a theoretical distance in the physical plane according to the homography matrix, and to minimize the difference between the theoretical distance and the actual physical distance according to the scale factor, the image distortion factor, and the homography matrix, so as to obtain the optimal parameter value of the imaging factor.
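A minimal sketch of this minimization, assuming a parameter vector whose last entry is the scale factor and a caller-supplied `build_homography` function that maps the remaining parameters (attitude angle, internal reference, and so on) to a homography matrix; both the parameter layout and the helper are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

def to_plane(H, p):
    """Pixel point -> physical-plane coordinate via the homography."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

def calibration_error(params, pixel_pairs, true_distances, build_homography):
    """Sum of squared gaps between the theoretical distance in the physical
    plane and the known actual physical distance of the reference objects."""
    H = build_homography(params[:-1])
    scale = params[-1]  # assumed layout: scale factor stored last
    err = 0.0
    for (p1, p2), d_true in zip(pixel_pairs, true_distances):
        d_theory = scale * np.linalg.norm(to_plane(H, p1) - to_plane(H, p2))
        err += (d_theory - d_true) ** 2
    return err

# result = minimize(calibration_error, x0,
#                   args=(pixel_pairs, true_distances, build_homography))
# result.x then holds the optimal parameter values of the imaging factors.
```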
In a possible implementation manner, the distance obtaining unit 504 is specifically configured to acquire the Euclidean distance between the first coordinate and the second coordinate, and to multiply the Euclidean distance by the scale factor of the camera device to obtain the distance between the first object and the second object.
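Concretely, this is the same mapping sketched earlier for the moving distance, applied between two different objects in one image; reusing the illustrative `pixel_to_plane` helper from that sketch:

```python
# p1, p2: the first and second pixel points; H and scale as calibrated.
d = scale * np.linalg.norm(pixel_to_plane(H, p1) - pixel_to_plane(H, p2))
```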
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a processor 601, a communication interface 602, and a memory 603. The processor 601, the communication interface 602, and the memory 603 may be connected to each other via a bus 605, or may be connected in other ways. The related functions implemented by the pixel point obtaining unit 502, the coordinate obtaining unit 503, the distance obtaining unit 504, and the relationship obtaining unit 505 shown in fig. 5 may be implemented by one or more processors 601. The relevant functions implemented by the image acquisition unit 501 shown in fig. 5 may be implemented by one or more communication interfaces 602.
The processor 601 includes one or more processors, for example one or more central processing units (CPUs); where the processor 601 is a CPU, it may be a single-core or a multi-core CPU. In the embodiments of the present disclosure, the processor 601 is used to control the electronic device 600 to implement the embodiment shown in fig. 3.
The communication interface 602 is used to communicate with other devices. For example, if the electronic device 600 is a terminal device, the communication interface 602 enables communication between the terminal device and a device such as a server; if the electronic device 600 is a server, the communication interface 602 enables communication between the server and a device such as a terminal device.
The memory 603 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 603 is used for storing relevant instructions and data.
In the embodiments of the present disclosure, the memory 603 stores computer-readable instructions 604, and the processor 601 is configured to execute the instructions stored in the memory 603. The instructions are used for performing the following steps:
acquiring a set of physical distances and a set of image positions for a plurality of first pixel point pairs corresponding to reference objects in a first image, acquiring a set of image positions for a second pixel point pair corresponding to the first object and the second object, and acquiring a reference range for each imaging factor among a plurality of imaging factors;
acquiring a target image acquired by a camera device, wherein the target image comprises a first object and a second object with a distance to be measured;
acquiring a first pixel point and a second pixel point respectively corresponding to the first object and the second object in the target image;
respectively acquiring a first coordinate and a second coordinate of a first pixel point and a second pixel point on a physical plane based on a predetermined mapping relation between the pixel plane and the physical plane of the camera device;
and obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate.
In a possible implementation manner, before the first coordinate and the second coordinate of the first pixel point and the second pixel point in the physical plane are respectively obtained based on a predetermined mapping relationship between the pixel plane and the physical plane of the image capturing device, the instruction is further configured to perform the following steps:
and calibrating the camera device according to the imaging factor of the camera device, and determining the mapping relation between the pixel plane and the physical plane of the camera device.
In one possible implementation, in determining the mapping relationship between the pixel plane and the physical plane of the image capturing apparatus, the instructions are specifically configured to perform the following steps:
obtaining a homography matrix based on the imaging factors according to the imaging factors of the camera device;
and normalizing the elements in the homography matrix to determine the mapping relation between the pixel plane and the physical plane of the camera device.
In a possible implementation manner, the imaging factor includes an attitude angle and an internal reference of the camera, and in terms of obtaining a homography matrix based on the imaging factor according to the imaging factor of the camera, the instructions are specifically configured to perform the following steps:
obtaining a rotation matrix according to the attitude angle;
obtaining an internal reference matrix according to the internal reference of the camera device;
and obtaining a homography matrix according to the rotation matrix and the internal reference matrix.
In a possible implementation manner, the imaging factor further includes a scale factor and an image distortion factor, and after obtaining a homography matrix based on the imaging factor according to the imaging factor of the image capturing device, the instructions are further configured to perform the following steps:
acquiring a reference image acquired by the camera device, wherein the reference image comprises a first reference object and a second reference object, and the pixel distance and the actual physical distance between the first reference object and the second reference object are known constants;
and obtaining the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object.
In a possible implementation manner, the instructions are further configured to perform the following steps:
and carrying out distortion removal processing on the reference image.
In one possible implementation, the instructions are specifically configured to perform the following steps in obtaining an optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix, and the physical distance between the first reference object and the second reference object:
mapping the pixel distance between the first reference object and the second reference object to a theoretical distance in the physical plane according to the homography matrix;
and according to the scale factor, the image distortion factor and the homography matrix, minimizing the difference between the theoretical distance and the actual physical distance to obtain the optimal parameter value of the imaging factor.
In a possible implementation manner, in terms of obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate, the instructions are specifically configured to perform the following steps:
acquiring the Euclidean distance between the first coordinate and the second coordinate;
and multiplying the Euclidean distance by the scale factor of the camera device to obtain the distance between the first object and the second object.
Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform some or all of the steps of any one of the distance measurement methods set forth in the above method embodiments.
Embodiments of the present disclosure also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the distance measurement methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the present application is not limited by the order of the acts described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments disclosed in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections shown or discussed may be realized through interfaces, and the indirect couplings or communication connections between devices or units may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present disclosure.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned memory includes: a USB flash drive, a RAM, a ROM, a removable hard disk, a magnetic disk, or an optical disk.
The embodiments of the present disclosure are described in detail above. The principles and implementations of the present disclosure are explained herein with specific examples, and the descriptions of the embodiments are only intended to help understand the methods and core ideas of the present disclosure. Meanwhile, persons skilled in the art may, according to the ideas of the present application, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A distance measuring method, characterized by comprising:
acquiring a target image acquired by a camera device, wherein the target image comprises a first object and a second object with a distance to be measured;
acquiring a first pixel point and a second pixel point respectively corresponding to the first object and the second object in the target image;
respectively acquiring a first coordinate and a second coordinate of the first pixel point and the second pixel point on a physical plane based on a predetermined mapping relation between the pixel plane and the physical plane of the camera device;
and obtaining the distance between the first object and the second object according to the first coordinate and the second coordinate.
2. The method according to claim 1, wherein before the step of obtaining the first coordinate and the second coordinate of the first pixel point and the second pixel point in the physical plane respectively based on the predetermined mapping relationship between the pixel plane of the image pickup device and the physical plane, the method further comprises:
and calibrating the camera device according to the imaging factor of the camera device, and determining the mapping relation between the pixel plane and the physical plane of the camera device.
3. The method according to claim 1 or 2, wherein the step of determining a mapping relationship between a pixel plane and a physical plane of the image pickup device comprises:
obtaining a homography matrix based on the imaging factor according to the imaging factor of the camera device;
and normalizing elements in the homography matrix, and determining the mapping relation between the pixel plane and the physical plane of the camera device.
4. The method according to claim 3, wherein the imaging factor includes an attitude angle and an internal reference of the camera, and the step of obtaining the homography matrix based on the imaging factor according to the imaging factor of the camera comprises:
obtaining a rotation matrix according to the attitude angle;
obtaining an internal reference matrix according to the internal reference of the camera device;
and obtaining the homography matrix according to the rotation matrix and the internal reference matrix.
5. The method according to claim 3 or 4, wherein the imaging factor further comprises a scale factor and an image distortion factor, and further comprising, after the step of obtaining a homography matrix based on the imaging factor according to the imaging factor of the image capture device:
acquiring a reference image acquired by the camera device, wherein the reference image comprises a first reference object and a second reference object, and the pixel distance and the actual physical distance between the first reference object and the second reference object are known constants;
and obtaining the optimal parameter value of the imaging factor according to the scale factor, the image distortion factor, the homography matrix and the physical distance between the first reference object and the second reference object.
6. The method of claim 5, further comprising:
and carrying out distortion removal processing on the reference image.
7. The method of claim 5, wherein the step of deriving optimal parameter values for the imaging factor based on the scale factor, the image distortion factor, the homography matrix, and physical distances of the first and second reference objects comprises:
mapping the pixel distance between the first reference object and the second reference object to a theoretical distance in the physical plane according to the homography matrix;
and minimizing the difference between the theoretical distance and the actual physical distance according to the scale factor, the image distortion factor and the homography matrix to obtain the optimal parameter value of the imaging factor.
8. The method according to any one of claims 1-7, wherein said step of deriving a distance between said first object and said second object from said first coordinate and said second coordinate comprises:
acquiring the Euclidean distance between the first coordinate and the second coordinate;
and multiplying the Euclidean distance by a scale factor of the image pickup device to obtain the distance between the first object and the second object.
9. A distance measuring device, comprising:
the image acquisition unit is used for acquiring a target image acquired by the camera device, wherein the target image comprises a first object and a second object with a distance to be measured;
the pixel point acquisition unit is used for acquiring a first pixel point and a second pixel point which correspond to the first object and the second object respectively in the target image;
a coordinate obtaining unit, configured to obtain first coordinates and second coordinates of the first pixel point and the second pixel point in a physical plane, respectively, based on a predetermined mapping relationship between the pixel plane and the physical plane of the imaging device;
a distance acquisition unit configured to acquire a distance between the first object and the second object according to the first coordinate and the second coordinate.
10. An electronic device comprising a processor and a memory, wherein the memory is configured to store computer-readable instructions and the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1-9.
11. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, causes the processor to implement the method of any one of claims 1-9.
CN202210597963.9A 2022-05-30 2022-05-30 Distance measuring method, distance measuring device, electronic device, and storage medium Withdrawn CN114926316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210597963.9A CN114926316A (en) 2022-05-30 2022-05-30 Distance measuring method, distance measuring device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210597963.9A CN114926316A (en) 2022-05-30 2022-05-30 Distance measuring method, distance measuring device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN114926316A true CN114926316A (en) 2022-08-19

Family

ID=82812332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210597963.9A Withdrawn CN114926316A (en) 2022-05-30 2022-05-30 Distance measuring method, distance measuring device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114926316A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115839667A (en) * 2023-02-21 2023-03-24 青岛通产智能科技股份有限公司 Height measuring method, device, equipment and storage medium
CN117782030A (en) * 2023-11-24 2024-03-29 北京天数智芯半导体科技有限公司 Distance measurement method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
CN114926316A (en) Distance measuring method, distance measuring device, electronic device, and storage medium
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN112444242A (en) Pose optimization method and device
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
US20200134847A1 (en) Structure depth-aware weighting in bundle adjustment
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN113361365B (en) Positioning method, positioning device, positioning equipment and storage medium
CN109766896B (en) Similarity measurement method, device, equipment and storage medium
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN114187589A (en) Target detection method, device, equipment and storage medium
CN116563384A (en) Image acquisition device calibration method, device and computer device
CN114004890B (en) Attitude determination method and apparatus, electronic device, and storage medium
CN113763478B (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN117315372A (en) Three-dimensional perception method based on feature enhancement
CN117726747A (en) Three-dimensional reconstruction method, device, storage medium and equipment for complementing weak texture scene
CN113989376B (en) Method and device for acquiring indoor depth information and readable storage medium
CN113790711B (en) Unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and storage medium
CN113034615B (en) Equipment calibration method and related device for multi-source data fusion
WO2021149509A1 (en) Imaging device, imaging method, and program
CN115035188A (en) Target-based distance measurement method and device and terminal equipment
CN114663519A (en) Multi-camera calibration method and device and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220819)