CN112907650A - Cloud height measuring method and equipment based on binocular vision - Google Patents


Info

Publication number
CN112907650A
CN112907650A (application CN202110172791.6A)
Authority
CN
China
Prior art keywords
image
cloud
binocular
feature points
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110172791.6A
Other languages
Chinese (zh)
Inventor
何敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202110172791.6A
Publication of CN112907650A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Image registration using feature-based methods
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The method comprises: acquiring a first image and a second image from a binocular camera at the same moment, and performing binocular alignment on the first image and the second image; detecting cloud feature points in the processed first image, and matching cloud feature points in the second image against the detected points; obtaining the visual difference (disparity) from the cloud feature points of the first image and the second image; and obtaining world coordinates from the disparity and the parameters of the binocular camera, then determining the target cloud height from the world coordinates. In this way, cloud heights over a wide area can be observed, a perception of the three-dimensional scene is obtained, the measurement result is more accurate, and the measurement cost is reduced.

Description

Cloud height measuring method and equipment based on binocular vision
Technical Field
The application relates to the field of computers, in particular to a cloud height measuring method and equipment based on binocular vision.
Background
The measurement of cloud height has important guiding significance for the daily work of airports and meteorological services. At present, the mainstream measurement method is based on the laser pulse principle; a representative instrument is the ceilometer, which emits laser pulses vertically upwards from the ground and computes the cloud height from the atmospheric backscatter it receives. This approach has two drawbacks: it can only detect the cloud height directly above the instrument, substituting a single point for the whole sky, so the readings are prone to large fluctuations; moreover, the ceilometer is expensive, making dense deployment impractical.
Disclosure of Invention
An object of the present application is to provide a binocular-vision-based cloud height measurement method and apparatus, which address the problems of the prior art: when a ceilometer is used to measure cloud height, the observation range is small, the readings fluctuate widely, and the instrument is too expensive for dense deployment.
According to one aspect of the application, a binocular vision-based cloud height measuring method is provided, and the method comprises the following steps:
acquiring a first image and a second image of a binocular camera at the same time, and performing binocular alignment on the first image and the second image;
detecting cloud feature points of the processed first image, and matching the cloud feature points of the second image according to the detected cloud feature points;
acquiring visual difference according to the cloud characteristic points of the first image and the second image;
and acquiring world coordinates according to the visual difference and parameters of the binocular camera, and determining the target cloud height according to the world coordinates.
Further, the method comprises:
and geometrically correcting the target cloud height according to the elevation angle of the binocular camera to obtain the optimized target cloud height.
Further, acquiring a first image and a second image of a binocular camera at the same time, and performing binocular alignment on the first image and the second image, including:
calibrating each monocular camera of the binocular pair separately, and determining the intrinsic parameters and distortion coefficients of the calibrated binocular camera;
acquiring a first image and a second image of the binocular camera at the same moment using the calibrated binocular camera; and performing row alignment on the first image and the second image according to the intrinsic parameters and distortion coefficients of the binocular camera to complete the binocular alignment.
Further, detecting cloud feature points of the processed first image includes:
extracting a sky area from the processed first image;
and performing cloud edge detection in the sky area using, in sequence, the parameter sets of at least one weather condition, and obtaining cloud feature points from the detected edges.
Further, matching the cloud feature points of the second image according to the detected cloud feature points includes:
selecting image blocks around the cloud feature points of the first image as templates;
and sliding a window over the second image to find the image block with the greatest similarity to the template, thereby obtaining the matched cloud feature points.
Further, finding, by sliding window, the image block in the second image with the greatest similarity to the template comprises:
each time the template moves by one pixel on the second image in the horizontal or vertical direction, comparing the template once with the image block at the moved-to position, each comparison contributing one entry to a result matrix;
and selecting, according to the similarity criterion, the minimum or maximum value in the result matrix as the greatest similarity, thereby determining the image block with the greatest similarity.
Further, acquiring a visual difference according to the cloud feature points of the first image and the cloud feature points of the second image, including:
and acquiring the difference between the abscissa pixel value of a cloud feature point of the first image and that of the matched cloud feature point of the second image, and obtaining the visual difference (disparity) from this difference.
Further, the parameters of the binocular camera include: the intrinsic parameters and distortion coefficients of the binocular camera, wherein the intrinsic parameters include the focal length and the optical-center distance.
According to another aspect of the present application, there is also provided a binocular vision-based cloud height measuring apparatus, including:
the acquisition device is used for acquiring a first image and a second image of the binocular camera at the same time and carrying out binocular alignment on the first image and the second image;
the detection device is used for detecting the cloud characteristic points of the processed first image and matching the cloud characteristic points of the second image according to the detected cloud characteristic points;
a difference value obtaining device, configured to obtain a visual difference according to the cloud feature points of the first image and the cloud feature points of the second image;
and the coordinate acquisition device is used for acquiring world coordinates according to the visual difference and the parameters of the binocular camera and determining the target cloud height according to the world coordinates.
Further, the apparatus comprises:
and the correcting device is used for geometrically correcting the target cloud height according to the elevation angle of the binocular camera to obtain the optimized target cloud height.
Further, the collection device is configured to:
calibrating each monocular camera of the binocular pair separately, and determining the intrinsic parameters and distortion coefficients of the calibrated binocular camera;
acquiring a first image and a second image of the binocular camera at the same moment using the calibrated binocular camera; and performing row alignment on the first image and the second image according to the intrinsic parameters and distortion coefficients of the binocular camera to complete the binocular alignment.
Further, the detection device is configured to:
extracting a sky area from the processed first image;
and performing cloud edge detection in the sky area using, in sequence, the parameter sets of at least one weather condition, and obtaining cloud feature points from the detected edges.
Further, the detection device is configured to:
selecting image blocks around the cloud feature points of the first image as templates;
and sliding a window over the second image to find the image block with the greatest similarity to the template, thereby obtaining the matched cloud feature points.
Further, the detection device is configured to:
each time the template moves by one pixel on the second image in the horizontal or vertical direction, comparing the template once with the image block at the moved-to position, each comparison contributing one entry to a result matrix;
and selecting, according to the similarity criterion, the minimum or maximum value in the result matrix as the greatest similarity, thereby determining the image block with the greatest similarity.
Further, the difference obtaining device is configured to:
and acquiring the difference between the abscissa pixel value of a cloud feature point of the first image and that of the matched cloud feature point of the second image, and obtaining the visual difference (disparity) from this difference.
According to yet another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the method as described above.
Compared with the prior art, the present application acquires a first image and a second image from a binocular camera at the same moment and performs binocular alignment on them; detects cloud feature points in the processed first image and matches cloud feature points in the second image against the detected points; obtains the visual difference (disparity) from the cloud feature points of the two images; and obtains world coordinates from the disparity and the parameters of the binocular camera, from which the target cloud height is determined. Cloud heights over a wide area can thus be observed, a perception of the three-dimensional scene is obtained, the measurement result is more accurate, and the measurement cost is reduced.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 illustrates a schematic flow chart of a binocular vision based cloud height measurement method provided according to an aspect of the present application;
fig. 2 is a schematic diagram illustrating the effect of binocular alignment in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating cloud detection in an actual scenario in another embodiment of the present application;
fig. 4 shows a schematic structural diagram of a binocular vision-based cloud height measuring apparatus provided in another aspect of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change RAM (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a schematic flow chart of a binocular vision-based cloud height measurement method provided according to an aspect of the present application, the method including: step S11-step S14, wherein in step S11, a first image and a second image of a binocular camera are collected at the same time, and the first image and the second image are aligned in a binocular mode; step S12, detecting the cloud characteristic points of the processed first image, and matching the cloud characteristic points of the second image according to the detected cloud characteristic points; step S13, obtaining visual difference according to the cloud characteristic points of the first image and the second image; and step S14, acquiring world coordinates according to the visual difference and the parameters of the binocular camera, and determining the target cloud height according to the world coordinates. Therefore, the cloud heights in a wider range can be comprehensively observed, the perception of a three-dimensional world is provided, the measurement result is more accurate, and the measurement cost is reduced.
Specifically, in step S11, a first image and a second image of a binocular camera are acquired at the same time, and the first image and the second image are binocular-aligned; here, the left and right camera images of the binocular camera are acquired at the same time, that is, the first image and the second image are acquired, the first image may be a left camera image of the binocular camera, the second image may be a right camera image of the binocular camera, and of course, the right camera image may be the first image and the left camera image may be the second image. And performing binocular alignment on the obtained left and right images by using parameters of a binocular camera, so that the calculation of the visual difference of the left and right cameras is simpler and more convenient.
Specifically, in step S12, cloud feature points of the processed first image are detected, and cloud feature points of the second image are matched against them; here, Canny edge detection may be used to detect the cloud feature points of the processed first image, where the processed first image is the image after binocular alignment; the cloud feature points detected in the left image are then matched in the image to be searched (the right image), yielding cloud feature points for both images of the binocular camera for the subsequent disparity calculation.
Specifically, in step S13, the visual difference (disparity) is obtained from the cloud feature points of the first and second images; because the two images have been binocularly aligned, the disparity can be computed from the pixel values along a single coordinate axis (the x-axis) of the matched cloud feature points, which makes the calculation faster; the result is the disparity between the two images, that is, between the left and right cameras.
Specifically, in step S14, world coordinates are obtained according to the visual difference and the parameters of the binocular camera, and the target cloud height is determined according to the world coordinates. Here, the world coordinates of the binocular camera may be obtained from the parameters of the binocular camera and the calculated visual difference, and the coordinates obtained on the z-axis in the world coordinates may be used as the target cloud height.
In an embodiment of the present application, the method includes: geometrically correcting the target cloud height according to the elevation angle of the binocular camera to obtain an optimized target cloud height. Since the elevation angle of the camera also affects the measurement, after the cloud height is calculated it is corrected geometrically according to the elevation angle of the binocular camera: if the elevation angle is N° and the target cloud height is Z, the corrected height is Z × sin(N × π / 180). The target cloud height is thus optimized and a more accurate measurement is obtained.
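The elevation-angle correction can be sketched in a few lines of Python (an illustrative sketch; the function name is an assumption, only the formula Z × sin(N × π / 180) comes from the text):

```python
import math

def correct_cloud_height(z: float, elevation_deg: float) -> float:
    """Geometrically correct a stereo-derived cloud height for the
    camera elevation angle N (in degrees): Z' = Z * sin(N * pi / 180)."""
    return z * math.sin(math.radians(elevation_deg))
```

With the camera pointing straight up (N = 90°) the correction is the identity; at lower elevation angles the slant measurement is scaled down to a vertical height.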
In an embodiment of the present application, in step S11, each monocular camera of the binocular pair is calibrated separately, and the intrinsic parameters and distortion coefficients of the calibrated binocular camera are determined; the calibrated binocular camera then acquires a first image and a second image at the same moment, and the two images are row-aligned according to the intrinsic parameters and distortion coefficients of the binocular camera to complete the binocular alignment. First the cameras are calibrated: monocular calibration of the left and right cameras and binocular calibration of the pair yield a set of intrinsic and extrinsic parameters, e.g. the intrinsic parameters and distortion coefficients of the binocular camera, for the algorithm to use. The left and right cameras are mounted a certain horizontal distance apart; the larger this distance, the larger the parallax and the larger the effective measurement range. Monocular calibration models the radial and tangential stretching of the image (the change in pixel positions caused by lens distortion), yielding the distortion coefficients. Binocular alignment rectifies the left and right views using the monocular intrinsic data (focal length and distortion coefficients) and the relative pose of the two cameras (rotation matrix and translation vector) obtained by calibration, so that the imaging origins of the two views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned. As shown in Fig. 2, in the world coordinate system xyz, the projection of a point P on the left camera image gl is Pl and on the right camera image gr is Pr; Ol is the optical center of the left camera and Or that of the right camera. After binocular alignment, the rows through Pl and Pr (the dotted lines in the figure) are parallel to the line OlOr joining the two optical centers, the y-axes of the left and right images are aligned, and only the horizontal pixel offset, which determines the parallax, remains to be computed, so the parallax calculation becomes very simple. Calibration is the process of solving the transformation matrix from the world coordinate system (world plane) to the pixel coordinate system (pixel plane) from corresponding points in the two systems, and then, under certain constraints, recovering from that matrix the intrinsic, extrinsic, and distortion parameters of the camera. Monocular calibration may proceed as follows: the camera photographs a standard calibration grid from several angles, the pixel coordinates of the grid points are extracted from the photographs, and the intrinsic and extrinsic matrices are solved by maximum likelihood estimation.
The binocular calibration process may proceed as follows: the relative pose of the binocular pair, e.g. the relative rotation angle and relative displacement, is obtained and used for binocular ranging; once the binocular camera is mounted, this relative pose must not change. Concretely, the binocular camera photographs a prepared checkerboard simultaneously (simultaneously meaning the board appears in the field of view of both cameras at once), and the relative distance, relative rotation angle, and similar quantities between the cameras are computed from the actual coordinates of each point on the physical checkerboard and the corresponding pixel coordinates in the left and right images. Calibration thus provides the camera intrinsics, such as the focal length, together with the distortion coefficients, and images acquired with the calibrated camera are free of distortion.
In yet another embodiment of the present application, in step S12, a sky region is extracted from the processed first image, and cloud edge detection is performed within it using the parameter sets of at least one weather condition in sequence, the cloud feature points being obtained from the detected edges. Cloud edge information differs across weather conditions such as sunny, cloudy, rainy, and snowy: on a sunny day with scattered clouds the cloud edges are strong, while under overcast skies, or with no cloud at all, they are weak. In the embodiment of the present application, therefore, three sets of Canny operator parameters are used, corresponding to sunny, cloudy, and rainy conditions; when detecting cloud features in the left image, detection can proceed according to the parameters of at least one weather condition, for example the two conditions sunny and cloudy, the three conditions sunny, cloudy, and rainy, or other condition sets. In a specific embodiment of the application, detection is performed in the order sunny, cloudy, rainy: the sky region is extracted first, cloud edges are sought within it, and the cloud feature points are then determined; the three sets of parameter thresholds are tuned on a data set split into image sets taken under sunny, cloudy, and rainy conditions. As shown in Fig. 3, four cloud feature points are detected in the left image by this cloud edge detection, the detected points are matched against the right image, and four pairs of cloud feature points (A, A'), (B, B'), (C, C'), (D, D') are finally obtained, from which the cloud height at the four positions is calculated.
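The try-in-sequence logic over weather-specific parameter sets can be sketched as follows (an illustrative sketch: the threshold values and the `edge_detector` callback are assumptions, since the tuned Canny thresholds are not published; only the sequential fallback over sunny/cloudy/rainy parameters comes from the text):

```python
# Hypothetical Canny threshold pairs per weather condition; the method
# tunes three sets on sunny/cloudy/rainy image sets, values unspecified.
WEATHER_PARAMS = [
    ("sunny",  (100, 200)),   # strong cloud edges
    ("cloudy", (50, 150)),
    ("rainy",  (20, 80)),     # weak cloud edges
]

def detect_cloud_points(sky_image, edge_detector, min_points=4):
    """Try each weather parameter set in sequence until enough cloud
    feature points are found. edge_detector(img, lo, hi) is expected to
    return a list of (x, y) feature points (e.g. a Canny-based detector)."""
    for weather, (lo, hi) in WEATHER_PARAMS:
        points = edge_detector(sky_image, lo, hi)
        if len(points) >= min_points:
            return weather, points
    return None, []
```

In practice `edge_detector` would wrap OpenCV's Canny operator applied to the extracted sky region.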
Following the above embodiment, image blocks around the cloud feature points of the first image are selected as templates, and a window is slid over the second image to find the image block with the greatest similarity to each template, giving the matched cloud feature points. That is, a block around each feature point detected in the left image serves as a template, and a sliding window over the right image is matched to the most similar block. The corresponding OpenCV template-matching function matchTemplate() can be used: it slides the template over the input image, evaluates the similarity at each position, finds the best-matching image block, and from that block the cloud feature point of the right image is obtained.
Specifically, in another embodiment of the present application, each time the template moves by one pixel on the second image in the horizontal or vertical direction, it is compared once with the image block at that position, each comparison contributing one entry to a result matrix; the minimum or maximum value in the result matrix, according to the similarity criterion, is then selected as the greatest similarity, determining the best-matching image block. Similarity criteria include the squared-difference method CV_TM_SQDIFF (the higher the similarity, the smaller the value) and the correlation-coefficient method CV_TM_CCOEFF (1 indicates the best match, -1 the worst), among others. Taking CV_TM_CCOEFF as an example:
$$R(x, y) = \sum_{x', y'} \left( T'(x', y') \cdot I'(x + x',\, y + y') \right)$$

where $T'(x', y') = T(x', y') - \frac{1}{w h}\sum_{x'', y''} T(x'', y'')$ and $I'(x + x', y + y') = I(x + x', y + y') - \frac{1}{w h}\sum_{x'', y''} I(x + x'', y + y'')$ are the mean-subtracted template and image patch.
where T is the template (the image block around a feature point in the left image), I is the image to be searched (the right image), and R is the result; (x', y') are coordinates within the template T, with 0 ≤ x' ≤ w − 1 and 0 ≤ y' ≤ h − 1, and (x, y) are coordinates within the result R, with 0 ≤ x ≤ W − w and 0 ≤ y ≤ H − h. The template T moves one pixel at a time across the right image in the horizontal or vertical direction, with one comparison computed at each position; if T has size w × h and I has size W × H, then R has size (W − w + 1) × (H − h + 1): W − w + 1 comparisons horizontally and H − h + 1 comparisons vertically yield a (W − w + 1) × (H − h + 1) result matrix. Finally the maximum or minimum value in R is found (the squared-difference method selects the minimum); the image block corresponding to that value under the chosen similarity criterion is taken as the matching block in the right image, from which the matched feature points are obtained.
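The sliding-window comparison and the (W − w + 1) × (H − h + 1) result matrix can be illustrated with a pure-Python squared-difference matcher (CV_TM_SQDIFF-style: smaller value means more similar). This is a hedged sketch of the mechanism, not the implementation in the text, which uses OpenCV's matchTemplate():

```python
def match_template_ssd(image, template):
    """Slide `template` over `image` one pixel at a time; result[y][x] holds
    the sum of squared differences at offset (x, y). The best match is the
    position of the minimum value in the result matrix."""
    H, W = len(image), len(image[0])          # searched image size W x H
    h, w = len(template), len(template[0])    # template size w x h
    result = [[sum((image[y + j][x + i] - template[j][i]) ** 2
                   for j in range(h) for i in range(w))
               for x in range(W - w + 1)]
              for y in range(H - h + 1)]      # (W-w+1) x (H-h+1) matrix
    best = min((v, x, y)
               for y, row in enumerate(result)
               for x, v in enumerate(row))
    return best[2], best[1], result           # (best_y, best_x, result)
```

On a 4 × 4 image and a 2 × 2 template this produces a 3 × 3 result matrix, matching the (W − w + 1) × (H − h + 1) size given above.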
In another embodiment of the present application, in step S13, the difference between the abscissa pixel value of a cloud feature point of the first image and that of the matched cloud feature point of the second image is obtained, and the visual difference (disparity) is obtained from this difference. The parameters of the binocular camera include its intrinsic parameters and distortion coefficients, the intrinsic parameters including the focal length and the optical-center distance. The world coordinates are computed from the binocular parameters as follows:
$$Z = \frac{f \cdot b}{d}, \qquad d = X_T - X_R$$
where Z is the depth (the cloud height), d is the disparity, f is the focal length, b is the distance between the optical centers of the left and right cameras, and X_R and X_T are the x pixel values of the cloud feature point in the right and left images respectively. The world-coordinate result is determined by the disparity, which is computed as the difference of the x pixel values of corresponding cloud features in the left and right cameras and is influenced by the camera parameters (focal length f and optical-center distance b). In this way, 3D information within the field of view is obtained from the binocular images, and the cloud height calculation is realized.
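The depth formula Z = f·b/d can be sketched as follows (an illustrative sketch; the function name and the error check are assumptions, the formula itself comes from the text):

```python
def cloud_height_from_disparity(f_px: float, baseline_m: float,
                                x_left: float, x_right: float) -> float:
    """Depth along the optical axis from stereo geometry: Z = f * b / d,
    where d = X_T - X_R is the disparity in pixels (left-image minus
    right-image x coordinate), f is the focal length in pixels, and b is
    the optical-center distance (baseline) in metres."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: mismatch or point at infinity")
    return f_px * baseline_m / d
```

For example, with a focal length of 2000 px, a 1 m baseline, and a 5-pixel disparity, the computed depth is 400 m (all numbers illustrative).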
In addition, a computer readable medium is provided, on which computer readable instructions are stored, and the computer readable instructions are executable by a processor to implement the foregoing binocular vision based cloud height measurement method.
Corresponding to the method described above, the present application also provides a terminal, which includes modules or units capable of executing the method steps described in the embodiments of fig. 1, fig. 2, or fig. 3, and these modules or units may be implemented by hardware, software, or a combination of hardware and software, and the present application is not limited thereto. For example, in an embodiment of the present application, there is also provided a binocular vision-based cloud height measurement apparatus, including:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method as previously described.
For example, the computer readable instructions, when executed, cause the one or more processors to:
acquiring a first image and a second image of a binocular camera at the same time, and performing binocular alignment on the first image and the second image by using parameters of the binocular camera;
detecting cloud feature points of the processed first image, and matching the cloud feature points of the second image according to the detected cloud feature points;
acquiring visual difference according to the cloud characteristic points of the first image and the second image;
and acquiring world coordinates according to the visual difference and parameters of the binocular camera, and determining the target cloud height according to the world coordinates.
Fig. 4 is a schematic structural diagram of a binocular vision-based cloud height measuring apparatus according to still another aspect of the present application. The apparatus includes an acquisition device 11, a detection device 12, a difference obtaining device 13, and a coordinate acquisition device 14. The acquisition device 11 is used to acquire a first image and a second image of the binocular camera at the same time and to perform binocular alignment on the first image and the second image; the detection device 12 is configured to detect cloud feature points of the processed first image and to match cloud feature points of the second image according to the detected cloud feature points; the difference obtaining device 13 is configured to obtain the visual difference according to the cloud feature points of the first image and of the second image; and the coordinate acquisition device 14 is used to obtain world coordinates according to the visual difference and the parameters of the binocular camera, and to determine the target cloud height from the world coordinates.
It should be noted that the content executed by the acquisition device 11, the detection device 12, the difference obtaining device 13, and the coordinate acquisition device 14 is the same as, or corresponds to, the content of steps S11, S12, S13, and S14 above; for brevity, it is not repeated here.
In an embodiment of the present application, the apparatus further includes a correction device for geometrically correcting the target cloud height according to the elevation angle of the binocular camera to obtain an optimized target cloud height. Here, considering that the elevation angle of the camera also affects the measurement result, after the cloud height is obtained by calculation, the calculated target cloud height is geometrically corrected according to the elevation angle of the binocular camera: if the elevation angle is N° and the target cloud height is Z, the corrected value is Z × sin(N°/180° × π). The target cloud height is thereby optimized, and a more accurate cloud measurement result is obtained.
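The elevation correction amounts to a single trigonometric call; a minimal sketch follows, where the example angle values are arbitrary.

```python
import math

def correct_cloud_height(z, elevation_deg):
    """Geometric correction Z * sin(N/180 * pi) for a camera elevation of N degrees,
    converting the slant-range depth into a vertical cloud height."""
    return z * math.sin(math.radians(elevation_deg))

# A camera pointing straight up (N = 90) needs no correction;
# at N = 30 the slant-range depth is halved to give the vertical height.
```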
In an embodiment of the present application, the acquisition device 11 is configured to: calibrate each monocular camera of the binocular camera separately, and determine the internal parameters and distortion coefficients of the calibrated binocular camera; acquire a first image and a second image of the binocular camera at the same time using the calibrated binocular camera; and perform line alignment on the first image and the second image according to the internal parameters and distortion coefficients of the binocular camera to complete the binocular alignment. First, the cameras are calibrated: monocular calibration is performed for the left and right cameras respectively, followed by binocular calibration, and a set of internal and external camera parameters is obtained, such as the internal parameters and distortion coefficients of the binocular camera, for the algorithm to call. The left and right cameras are mounted at a certain horizontal distance; the larger this distance, the larger the parallax and the larger the effective measurement range. Monocular calibration also determines the distortion: the image is stretched radially and tangentially, changing pixel positions, and the distortion coefficients are obtained accordingly. Binocular alignment aligns the left and right views according to the monocular internal parameters (focal length and distortion coefficients) and the binocular relative position relationship (rotation matrix and translation vector) obtained after calibration, so that the imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned.
As shown in fig. 2, in the world coordinate system xyz, the projection of a point P on the left camera gl is Pl and its projection on the right camera gr is Pr; Ol is the optical center of the left camera and Or is the optical center of the right camera. After binocular alignment, the y-axis through Pl and the y-axis through Pr (the dotted lines in the figure), i.e. the y-axes of the left and right views, are parallel to the line OlOr connecting the two optical centers; with the y-axes of the left and right images aligned, only the relative pixel positions in the horizontal direction, which principally determine the parallax, need to be computed, and the parallax calculation becomes simplest. Calibration is the process of solving the transformation matrix from the world coordinate system (world plane) to the pixel coordinate system (pixel plane) from corresponding points in the two systems, and then solving the internal parameters, external parameters, and distortion parameters of the camera from this transformation matrix under certain constraints. The monocular calibration process may proceed as follows: a camera captures a standard grid shot at various angles, the pixel coordinates of the grid points in each photograph are extracted, and the internal and external parameter matrices are solved by maximum likelihood estimation.
The binocular calibration process may proceed as follows: the relative position relationship of the binocular camera, such as the relative rotation angle and relative displacement, is acquired and used for binocular ranging; once the binocular camera is mounted, this relative position relationship must not change. The specific procedure is: a prepared checkerboard is photographed by the binocular camera simultaneously, where "simultaneously" means the board appears in the fields of view of both cameras at the same time; from the actual coordinates of each point on the physical checkerboard and the corresponding pixel coordinates in the left and right images, data such as the relative distance and relative rotation angle between the cameras can be computed. Camera calibration thus acquires the internal parameters of the camera, such as the focal length, together with the distortion coefficients; the calibrated camera is then used to acquire images, and the acquired images are images from which distortion has been eliminated.
In yet another embodiment of the present application, the detection device 12 is configured to: extract a sky area from the processed first image; and perform cloud edge detection in the sky area sequentially according to the parameter sequence of at least one type of weather state, obtaining cloud feature points through edge detection. Here, cloud edge information differs under different weather conditions, such as sunny, cloudy, rainy, and snowy days; for example, on a sunny day with scattered clouds the edge information of clouds is strong, while on an overcast or cloudless day it is weak. Therefore, in the embodiment of the present application, three sets of Canny operator parameters are used, corresponding to sunny, cloudy, and rainy days respectively. When detecting cloud features in the left image, detection may be performed according to the parameters of at least one type of weather state, for example according to two weather states (sunny and cloudy), three weather states (sunny, cloudy, and rainy), or other combinations of weather states. In a specific embodiment of the application, detection is performed sequentially according to the parameter sequence sunny, cloudy, rainy: the sky area is extracted first, cloud edges are searched within the sky area, and the cloud feature points are then determined; the three different sets of parameter thresholds are tuned and obtained by dividing the data set into image sets for the three environments of sunny, cloudy, and rainy days.
As shown in fig. 3, four cloud feature points are detected in the left image using this cloud edge detection method, the cloud feature points detected in the left image are matched with cloud feature points in the right image, finally yielding four pairs of cloud feature points (A, A'), (B, B'), (C, C'), (D, D'), and the cloud height values at the four positions are calculated from these four pairs.
Specifically, the detection device is configured to: select an image block around a cloud feature point of the first image as a template; and slide a window over the second image to match the image block with the greatest similarity to the template, obtaining the matched cloud feature point. During sliding-window matching, each time the template moves by one pixel on the second image in the transverse or longitudinal direction, the template is compared once with the image block at the moved position, and a result matrix is built up from these comparisons; the minimum or maximum value in the result matrix, according to the similarity evaluation criterion, is taken as the greatest similarity, and the image block with the greatest similarity is thereby determined.
In another embodiment of the present application, the difference obtaining device 13 is configured to: and acquiring a difference value between the pixel value of the abscissa of the cloud characteristic point of the first image and the pixel value of the abscissa of the cloud characteristic point of the second image, and acquiring a visual difference according to the difference value. Here, the world coordinates are calculated from the binocular parameters as follows:
Z = (f × b) / d,    d = X_T − X_R

wherein Z is the depth, i.e. the cloud height; d is the visual difference (disparity); f is the focal length; b is the distance between the optical centers of the left and right cameras; and X_R and X_T are the x pixel coordinates of the cloud feature point in the right image and in the left image, respectively. The world-coordinate result is determined by the visual difference, which is computed as the difference of the x pixel coordinates of corresponding cloud features in the left and right cameras and is influenced by the camera parameters (the focal length f and the optical-center distance b). Further, the parameters of the binocular camera include the internal parameters and distortion coefficients of the binocular camera, wherein the internal parameters include the focal length and the optical-center distance. In this way, 3D information within the field of view is obtained from the binocular images, and the cloud height calculation is thereby realized.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A binocular vision-based cloud height measurement method is characterized by comprising the following steps:
acquiring a first image and a second image of a binocular camera at the same time, and performing binocular alignment on the first image and the second image;
detecting cloud feature points of the processed first image, and matching the cloud feature points of the second image according to the detected cloud feature points;
acquiring visual difference according to the cloud characteristic points of the first image and the second image;
and acquiring world coordinates according to the visual difference and parameters of the binocular camera, and determining the target cloud height according to the world coordinates.
2. The method according to claim 1, characterized in that it comprises:
and geometrically correcting the target cloud height according to the elevation angle of the binocular camera to obtain the optimized target cloud height.
3. The method of claim 1, wherein capturing a first image and a second image of a binocular camera at the same time, performing binocular alignment of the first and second images, comprises:
calibrating each monocular camera in the binocular cameras respectively, and determining internal parameters and distortion coefficients of the calibrated binocular cameras;
acquiring a first image and a second image of a binocular camera at the same time by using the calibrated binocular camera;
and performing line alignment on the first image and the second image according to the internal parameters and distortion parameters of the binocular camera to complete binocular alignment.
4. The method of claim 1, wherein detecting cloud feature points of the processed first image comprises:
extracting a sky area from the processed first image;
and carrying out cloud edge detection in the sky area according to the parameter sequence of at least one type of weather state, and acquiring cloud feature points through edge detection.
5. The method according to claim 1 or 4, wherein matching the cloud feature points of the second image according to the detected cloud feature points comprises:
selecting image blocks around the cloud feature points of the first image as templates;
and matching a sliding window in the second image with an image block with the maximum similarity to the template to obtain matched cloud feature points.
6. The method of claim 5, wherein sliding a window in the second image matches a tile with the greatest similarity to the template, comprising:
when the template moves one pixel on the second image in the transverse direction or the longitudinal direction each time, comparing the template with the image blocks corresponding to the moved positions once, and obtaining a result matrix according to each comparison;
and selecting the minimum value or the maximum value from the result matrix according to the similarity evaluation standard to serve as the maximum similarity, and determining a picture block with the maximum similarity.
7. The method of claim 1, wherein obtaining the visual difference from the cloud feature points of the first image and the cloud feature points of the second image comprises:
and calculating a difference value between the pixel value of the abscissa of the cloud characteristic point of the first image and the pixel value of the abscissa of the cloud characteristic point of the second image, and obtaining a visual difference according to the difference value.
8. The method of claim 1, wherein the parameters of the binocular camera comprise: the camera comprises internal parameters and distortion coefficients of the binocular camera, wherein the internal parameters of the binocular camera comprise a focal length and an optical center distance.
9. A cloud height measuring device based on binocular vision, the device comprising:
the acquisition device is used for acquiring a first image and a second image of the binocular camera at the same time and carrying out binocular alignment on the first image and the second image;
the detection device is used for detecting the cloud characteristic points of the processed first image and matching the cloud characteristic points of the second image according to the detected cloud characteristic points;
a difference value obtaining device, configured to obtain a visual difference according to the cloud feature points of the first image and the cloud feature points of the second image;
and the coordinate acquisition device is used for acquiring world coordinates according to the visual difference and the parameters of the binocular camera and determining the target cloud height according to the world coordinates.
10. A computer readable storage medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 8.
CN202110172791.6A 2021-02-08 2021-02-08 Cloud height measuring method and equipment based on binocular vision Pending CN112907650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110172791.6A CN112907650A (en) 2021-02-08 2021-02-08 Cloud height measuring method and equipment based on binocular vision


Publications (1)

Publication Number Publication Date
CN112907650A true CN112907650A (en) 2021-06-04

Family

ID=76124035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110172791.6A Pending CN112907650A (en) 2021-02-08 2021-02-08 Cloud height measuring method and equipment based on binocular vision

Country Status (1)

Country Link
CN (1) CN112907650A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination