CN113091627B - Method for measuring vehicle height in dark environment based on active binocular vision - Google Patents
- Publication number: CN113091627B (application CN202110452294.1A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
- G01B11/06—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness for measuring thickness ; e.g. of sheet material
- G01B11/0608—Height gauges
Abstract
A method for measuring the height of a vehicle in a dark environment based on active binocular vision belongs to the technical field of machine vision recognition. In the invention, an unmanned aerial vehicle carries a binocular camera and a line laser, realizing automatic detection of vehicle height in a dark environment and effectively solving the problem that traditional binocular stereoscopic vision suffers large measurement errors in the dark, or cannot measure at all. Because the unmanned aerial vehicle carries the binocular camera and line laser to measure the vehicle height, the method occupies little ground space, can measure vehicle height at any time and place, fills the gap in portable overall-dimension detection technology for national over-limit control, and has strong practicability.
Description
Technical Field
The invention belongs to the technical field of machine vision recognition, and particularly relates to a method for measuring the height of a vehicle in a dark environment based on active binocular vision.
Background
Due to disorderly competition in the transportation market in recent years, some transportation enterprises and individuals, driven by profit, transport goods in excess of legal limits. The road-surface damage and bridge deterioration caused by over-limit transportation greatly shorten normal service life; the damage that over-limit and overloaded vehicles inflict on China's roads exceeds 300 billion yuan per year. Over-limit transportation, and over-height transportation in particular, also readily causes road traffic accidents that endanger people's lives, and has become the leading killer in road traffic safety.
Detection of vehicle overall dimensions is a key part of over-limit transportation inspection. Because traditional techniques have low detection precision and take a long time, they easily provoke disputes between vehicle owners and inspection personnel, and such detection has become a key problem that urgently needs to be solved in over-limit control work.
As an important branch of artificial intelligence theory and technology, image recognition is developing rapidly and its application fields are continuously expanding. Deep learning theory, represented by convolutional neural networks, provides a new pattern-analysis method for image recognition and has solved many problems in the machine vision field, such as three-dimensional object detection and deep feature extraction, providing powerful support for portable vehicle overall-dimension detection technology.
In recent years, unmanned aerial vehicles have increasingly entered the public view, playing an ever more important role in fields such as security monitoring, geographic mapping, film and television shooting, and logistics transportation.
When the traditional binocular stereoscopic vision dimension-measurement technique takes pictures in a dark or dim environment, the acquired images contain a great deal of noise. Image matching then becomes very difficult; in some cases feature points cannot be found at all and no measurement result can be obtained.
Therefore, there is a need in the art for a new solution to solve this problem.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a method for measuring the height of a vehicle in a dark environment based on active binocular vision is provided, solving the technical problem that the vehicle height cannot be accurately measured because a large number of noise points appear in images acquired when the traditional binocular stereoscopic vision dimension-measurement technique photographs in a dark or dim environment.
A method for measuring the height of a vehicle in a dark environment based on active binocular vision, which utilizes an unmanned aerial vehicle carrying a binocular camera and a line laser to hover over the top of the vehicle to measure the height of the vehicle, comprises the following steps which are sequentially carried out,
firstly, calibrating a binocular Camera carried by the unmanned aerial vehicle by utilizing a Camera Calibration tool box (Stereo Camera Calibration Toolbox) in MATLAB, respectively obtaining internal parameters and external parameters of a left Camera and a right Camera of the binocular Camera through Calibration,
wherein the internal parameters comprise the normalized focal length αx of the camera in the u-axis direction of the image pixel coordinate system, the normalized focal length αy of the camera in the v-axis direction of the image pixel coordinate system, and the coordinates (u0, v0) of the optical center of the camera in the image pixel coordinate system; the external parameters comprise a 3 × 3 rotation matrix R and a 3 × 1 translation vector t;
respectively obtaining the coordinates of any pixel point in the image pixel coordinate systems of the left camera and the right camera by utilizing the internal parameters and the external parameters of the left camera and the right camera of the calibrated binocular camera and the conversion matrix between the image pixel coordinate systems and the world coordinate systems;
secondly, a laser plane is emitted by a line laser carried by the unmanned aerial vehicle and projected to the top of the vehicle, a bright light parallel to a transverse axis of the vehicle is generated at the top of the vehicle, and a part of scene at the top of the vehicle is illuminated by the light to be displayed;
step three, keeping the unmanned aerial vehicle hovering over the top of the vehicle to enable the vehicle in the process of traveling to be located in a visual range of a binocular camera, continuously collecting a plurality of vehicle light bar images from the head to the tail of the vehicle by the binocular camera carried by the unmanned aerial vehicle after calibration is completed in the process of traveling of the vehicle, and obtaining coordinates of each pixel point in each image;
step four, preprocessing a plurality of images obtained in the step three, wherein the image preprocessing comprises bilateral filtering, image segmentation and gray level transformation;
step five, taking the laser light stripe as a one-dimensional feature, extracting a central point of the laser light stripe in the left camera image, screening a matching point of the central point of the laser light stripe in the right camera image by adopting epipolar line geometric constraint, and calculating and obtaining a coordinate difference value, namely a parallax value, of the central point and the matching point in the left camera image and the right camera image;
step six, according to the triangulation principle of parallel binocular vision, calculating, from the disparity value obtained in step five and the depth calculation formula, the distance between the vehicle-top scene point where the laser-stripe center lies and the binocular camera, and taking the shortest such distance across the multiple images as the distance between the binocular camera and the top of the vehicle;
and step seven, calculating and obtaining the height of the vehicle according to a calculation formula of the vehicle height by using the hovering height of the unmanned aerial vehicle during image acquisition and the distance between the binocular camera and the top of the vehicle obtained in the step six.
In step one, the conversion matrix between the image pixel coordinate system and the world coordinate system is shown as formula (Ⅰ):

Zc·[u, v, 1]ᵀ = [αx 0 u0; 0 αy v0; 0 0 1]·[R t]·[X, Y, Z, 1]ᵀ (Ⅰ)

In formula (Ⅰ), (u, v) are the coordinates of any pixel point in the image pixel coordinate system; αx = f/dx and αy = f/dy, where f is the focal length of the camera and dx, dy are the physical sizes of a pixel in the x-axis and y-axis directions of the image physical coordinate system; (X, Y, Z) are the coordinates of (u, v) in the world coordinate system; Zc is the depth of the point in the camera coordinate system.
In the second step, the selected line laser is a red light laser.
The bilateral filter model for the bilateral filtering in step four is shown as formula (Ⅱ):

f̂(i, j) = Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n)·f(m, n) / Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n) (Ⅱ)

where ωs(m, n) = exp(-((m - i)² + (n - j)²)/(2σs²)) and ωg(m, n) = exp(-(f(m, n) - f(i, j))²/(2σg²)).

In formula (Ⅱ), f(i, j) is the gray value of the noise image f at coordinate (i, j); f̂(i, j) is the gray value of the filtered image f̂ at coordinate (i, j); r is the filter window radius; Ωr,i,j is the set of pixel coordinates in the square region centered on (i, j) with side length (2r + 1); f(m, n) is the gray value of the noise image f at coordinate (m, n); ωs(m, n) is the spatial weight at coordinate (m, n); ωg(m, n) is the gray-similarity weight at coordinate (m, n); σs is the spatial standard deviation and σg is the gray standard deviation.
The image segmentation in step four adopts the difference image method, and the laser light bar image after difference-image processing is shown as formula (Ⅲ):

Il(x,y)=Im(x,y)-Ib(x,y) (Ⅲ)

In formula (Ⅲ), Il(x, y) is the laser light bar image after difference-image processing, Im(x, y) is the image captured when the laser is projected onto the vehicle body, and Ib(x, y) is the background image.
The gray-level transformation in step four converts the color image into a gray-level image, with the conversion formula shown as formula (Ⅳ):

Gray=0.3R+0.59G+0.11B (Ⅳ)

In formula (Ⅳ), Gray is the gray value, and R, G and B are the values of the three channels of the color image respectively.
The method for extracting the laser light-stripe center in step five is the Hessian matrix method, where the Hessian matrix is expressed as formula (Ⅴ):

H(x, y) = [∂²g/∂x² ⊗ I(x, y) ∂²g/∂x∂y ⊗ I(x, y); ∂²g/∂x∂y ⊗ I(x, y) ∂²g/∂y² ⊗ I(x, y)] (Ⅴ)

In formula (Ⅴ), g(x, y) is a two-dimensional Gaussian convolution template, ⊗ denotes convolution, and I(x, y) is the image matrix of the same size as the two-dimensional Gaussian convolution template, centered on image point (x, y).
In step six, the depth calculation formula is shown as formula (Ⅵ):

Z = B·f/d (Ⅵ)

In formula (Ⅵ), Z is the distance between the binocular camera and the top of the vehicle; B is the baseline distance between the optical centers of the left and right cameras; f is the focal length of the camera; d is the disparity value.
The formula for calculating the vehicle height in step seven is shown as formula (Ⅶ):

H = DH - Z (Ⅶ)

In formula (Ⅶ), H is the vehicle height, DH is the hovering height of the unmanned aerial vehicle when the images were acquired, and Z is the distance between the binocular camera and the top of the vehicle.
Through the above design scheme, the invention can bring the following beneficial effects:
according to the invention, the unmanned aerial vehicle is used for carrying the binocular camera and the line laser, so that the automatic detection of the vehicle height in the dark environment is realized, and the problem that the traditional binocular stereoscopic vision has large measurement error in the dark environment and even cannot be measured is effectively solved.
The unmanned aerial vehicle carries the binocular camera and the line laser to measure the height of the vehicle, so that the occupied site can be reduced, the height of the vehicle can be measured at any time and any place, the blank of the portable overall dimension detection technology for national overrun control is filled, and the practicability is high.
Drawings
The invention is further described with reference to the following figures and detailed description:
Fig. 1 is a schematic diagram of the unmanned aerial vehicle carrying the binocular camera and line laser to acquire vehicle scene images from above the vehicle in the method for measuring vehicle height in a dark environment based on active binocular vision according to the invention.
FIG. 2 is a flow chart of a method for measuring vehicle height in a dark environment based on active binocular vision according to the present invention.
FIG. 3 is a diagram of a calibration process of a binocular camera in the method for measuring vehicle height in a dark environment based on active binocular vision.
In the figures: 1 is the unmanned aerial vehicle, 2 is the binocular camera, and 3 is the line laser.
Detailed Description
The invention is further described below with reference to the figures and examples.
As shown in the figures, the method for measuring the height of a vehicle in a dark environment based on active binocular vision uses an unmanned aerial vehicle 1 carrying a binocular camera 2 and a line laser 3, hovering above the top of the vehicle to measure the vehicle height. The specific steps are as follows:
Step one, calibrate the binocular camera 2 using the camera calibration toolbox "Stereo Camera Calibration Toolbox" in MATLAB, obtaining through calibration the internal parameters αx, αy, u0, v0 and the external parameters R, t of the left and right cameras of the binocular camera 2, where αx is the normalized focal length of the camera in the u-axis direction of the image pixel coordinate system, αy is the normalized focal length in the v-axis direction, (u0, v0) are the coordinates of the camera optical center in the image pixel coordinate system, R is a 3 × 3 rotation matrix, and t is a 3 × 1 translation vector. The internal parameters, translation vector and rotation matrix of the binocular camera 2 form a projection matrix, the conversion coefficient matrix that places two-dimensional image coordinate points in one-to-one correspondence with three-dimensional geometric space points; the camera calibration process is shown in fig. 3. Using the internal and external parameters of the calibrated left and right cameras of the binocular camera 2 and the conversion matrix between the image pixel coordinate system and the world coordinate system, the coordinates of any pixel point in the image pixel coordinate systems of the left and right cameras are respectively obtained;
The conversion matrix between the image pixel coordinate system and the world coordinate system is shown as formula (Ⅰ):

Zc·[u, v, 1]ᵀ = [αx 0 u0; 0 αy v0; 0 0 1]·[R t]·[X, Y, Z, 1]ᵀ (Ⅰ)

In formula (Ⅰ), (u, v) are the coordinates of any pixel point in the image pixel coordinate system; αx = f/dx and αy = f/dy, where f is the focal length of the camera and dx, dy are the physical sizes of a pixel in the x-axis and y-axis directions of the image physical coordinate system; (X, Y, Z) are the coordinates of (u, v) in the world coordinate system; Zc is the depth of the point in the camera coordinate system.
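To make formula (Ⅰ) concrete, the sketch below projects a world point to pixel coordinates with numpy. All numeric values (focal lengths, optical center, rotation, translation, and the test point) are hypothetical stand-ins; in practice they come from the MATLAB calibration of step one.

```python
import numpy as np

# Hypothetical intrinsic parameters (real values come from calibration):
alpha_x, alpha_y = 800.0, 800.0        # normalized focal lengths f/dx, f/dy
u0, v0 = 320.0, 240.0                  # optical-center pixel coordinates

K = np.array([[alpha_x, 0.0, u0],
              [0.0, alpha_y, v0],
              [0.0, 0.0, 1.0]])        # intrinsic matrix

R = np.eye(3)                          # rotation matrix (identity: camera frame = world frame)
t = np.zeros(3)                        # translation vector

def project(Xw):
    """Formula (I): Zc*[u, v, 1]^T = K*[R t]*[X, Y, Z, 1]^T."""
    Xc = R @ Xw + t                    # world point -> camera coordinates
    uvw = K @ Xc                       # -> homogeneous pixel coordinates, scaled by Zc
    return uvw[:2] / uvw[2]            # divide by depth Zc to get (u, v)

u, v = project(np.array([0.5, -0.25, 10.0]))   # a point 10 m in front of the camera
```

With these assumed parameters the point lands at (u, v) = (360, 220). Note that inverting the mapping, from (u, v) back to (X, Y, Z), additionally requires the depth Zc, which is exactly what the disparity of step six supplies.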
Step two, the unmanned aerial vehicle 1 carries a line laser 3, which is a red-light line laser whose line width is adjustable, whose projected light stripe is stable, and whose stripe cross-section gray values in the vertical direction approximately follow a Gaussian distribution. The brightness of the laser stripe directly affects the quality of image processing; choosing a high-brightness line laser 3 makes the target features clearly distinct from the background, reduces noise interference, and makes the image-processing result more reliable. The line laser 3 emits a laser plane, and the flight direction of the unmanned aerial vehicle 1 is adjusted so that a bright stripe parallel to the transverse axis of the vehicle is produced on the vehicle's upper surface and part of the vehicle-top scene is illuminated, thereby generating active feature information at the top of the vehicle, as shown in fig. 1;
Step three, keep the unmanned aerial vehicle 1 hovering above the top of the vehicle at a height such that the moving vehicle lies within the visual range of the binocular camera 2. While the vehicle travels, the binocular camera 2 calibrated in step one and carried by the unmanned aerial vehicle 1 continuously collects multiple vehicle light-stripe images, from the laser stripe located at the vehicle head to the stripe located at the vehicle tail;
Step four, preprocess the multiple images obtained in step three; the image preprocessing comprises bilateral filtering, image segmentation and gray-level transformation. Bilateral filtering removes noise points and makes the image smoother; the purpose of image segmentation is to carry out subsequent processing only on the segmented region of interest, which speeds up image processing; the purpose of the gray-level transformation is to convert the color image into a gray-level image, so that the computational load of subsequent image processing becomes comparatively small and the light-stripe images are processed faster;
The bilateral filtering in step four processes the image with a bilateral filter to remove noise; while filtering noise, the bilateral filter preserves image boundary information. The bilateral filter model is shown as formula (Ⅱ):

f̂(i, j) = Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n)·f(m, n) / Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n) (Ⅱ)

where ωs(m, n) = exp(-((m - i)² + (n - j)²)/(2σs²)) and ωg(m, n) = exp(-(f(m, n) - f(i, j))²/(2σg²)).

In formula (Ⅱ), f(i, j) is the gray value of the noise image f at coordinate (i, j); f̂(i, j) is the gray value of the filtered image f̂ at coordinate (i, j); r is the filter window radius; Ωr,i,j is the set of pixel coordinates in the square region centered on (i, j) with side length (2r + 1); f(m, n) is the gray value of the noise image f at coordinate (m, n); ωs(m, n) is the spatial weight at coordinate (m, n); ωg(m, n) is the gray-similarity weight at coordinate (m, n); σs is the spatial standard deviation and σg is the gray standard deviation.
The image segmentation in step four adopts the difference image method to extract the laser light bar image, as shown in formula (Ⅲ):

Il(x,y)=Im(x,y)-Ib(x,y) (Ⅲ)

In formula (Ⅲ), Il(x, y) is the laser light bar image after difference-image processing, Im(x, y) is the image captured when the laser is projected onto the vehicle body, and Ib(x, y) is the background image.
The gray-level transformation in step four converts the color image into a gray-level image, with the conversion formula shown as formula (Ⅳ):

Gray=0.3R+0.59G+0.11B (Ⅳ)

In formula (Ⅳ), Gray is the gray value, and R, G and B are the values of the three channels of the color image respectively.
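Formulas (Ⅲ) and (Ⅳ) amount to a per-pixel subtraction and a weighted channel sum; the tiny synthetic arrays below are stand-ins for real captures, used only to demonstrate the arithmetic.

```python
import numpy as np

def difference_image(i_m, i_b):
    """Formula (III): I_l(x,y) = I_m(x,y) - I_b(x,y), clipped to the valid gray range."""
    return np.clip(i_m.astype(int) - i_b.astype(int), 0, 255).astype(np.uint8)

def to_gray(rgb):
    """Formula (IV): Gray = 0.3 R + 0.59 G + 0.11 B, applied per pixel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b

laser_on = np.array([[10, 240], [12, 250]], dtype=np.uint8)   # I_m: stripe pixels bright
background = np.array([[10, 10], [12, 11]], dtype=np.uint8)   # I_b: same scene, laser off
stripe = difference_image(laser_on, background)               # only the stripe survives
```

Subtracting the background image suppresses everything the laser does not illuminate, so the segmented result contains essentially only the light bar.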
Step five, extract the center point of the laser light stripe in the left camera image. Because the light stripe serves as a one-dimensional feature, the search range is small; the epipolar geometric constraint is adopted to screen the matching point of the laser-stripe center point in the right camera image, and the coordinate difference of the corresponding feature points between the left and right camera images, namely the disparity value, is calculated;
In step five, the method for extracting the laser light-stripe center is the Hessian matrix method, which has strong noise resistance, relatively accurate stripe directionality, and good overall robustness; its advantages are evident in complex environments and under high-precision requirements. The Hessian matrix expression is shown as formula (Ⅴ):

H(x, y) = [∂²g/∂x² ⊗ I(x, y) ∂²g/∂x∂y ⊗ I(x, y); ∂²g/∂x∂y ⊗ I(x, y) ∂²g/∂y² ⊗ I(x, y)] (Ⅴ)

In formula (Ⅴ), g(x, y) is a two-dimensional Gaussian convolution template, ⊗ denotes convolution, and I(x, y) is the image matrix of the same size as the two-dimensional Gaussian convolution template, centered on image point (x, y).
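The full Hessian (Steger) method analyzes the eigenvectors of the matrix in formula (Ⅴ) at every pixel; as a much simplified stand-in, under the assumption of a roughly horizontal stripe, the sketch below locates a sub-pixel center in each image column from a Gaussian-smoothed profile and a discrete second difference. All parameters and the synthetic stripe are illustrative only.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def stripe_centers(img, sigma=2.0):
    """Column-wise sub-pixel stripe centers: Gaussian-smooth each column,
    take the intensity maximum as the coarse center, then refine it with a
    three-point parabolic fit (a discrete second-derivative estimate)."""
    k = gaussian_kernel(sigma, int(3 * sigma))
    centers = []
    for col in img.T:                        # iterate over image columns
        s = np.convolve(col.astype(float), k, mode='same')
        i = int(np.argmax(s))
        if 0 < i < len(s) - 1:
            denom = s[i - 1] - 2 * s[i] + s[i + 1]   # second difference
            delta = 0.5 * (s[i - 1] - s[i + 1]) / denom if denom != 0 else 0.0
            centers.append(i + delta)
        else:
            centers.append(float(i))
    return np.array(centers)
```

On a synthetic Gaussian stripe this recovers the center to sub-pixel accuracy; a production implementation should still use the full 2-D Hessian analysis of formula (Ⅴ), which also yields the stripe normal direction.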
Step six, according to the triangulation principle of parallel binocular vision, the distance between the vehicle-top scene point where the laser-stripe center lies and the binocular camera 2 is calculated from the disparity value obtained in step five using the depth calculation formula, and the shortest such distance across the multiple images is taken as the distance between the binocular camera 2 and the top of the vehicle;
The depth calculation formula is shown as formula (Ⅵ):

Z = B·f/d (Ⅵ)

In formula (Ⅵ), Z is the distance between the binocular camera and the top of the vehicle; B is the baseline distance between the optical centers of the left and right cameras; f is the focal length of the camera; d is the disparity value.
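Formula (Ⅵ) and the shortest-distance rule of step six take only a few lines; the baseline, focal length and disparity values below are made-up illustrative numbers.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Formula (VI) for parallel binocular vision: Z = B*f/d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid stereo match")
    return baseline_m * focal_px / disparity_px

# one disparity per captured light-stripe image (hypothetical values, in pixels)
disparities = [40.0, 52.0, 48.0]
depths = [depth_from_disparity(0.12, 800.0, d) for d in disparities]
z_roof = min(depths)   # step six: shortest camera-to-roof distance
```

The larger the disparity, the closer the stripe point; taking the minimum depth over all frames therefore picks out the highest point of the vehicle roof.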
Step seven, calculate the height of the vehicle according to the vehicle-height calculation formula, using the hovering height of the unmanned aerial vehicle 1 during image acquisition and the distance between the binocular camera 2 and the top of the vehicle obtained in step six.
The calculation formula of the vehicle height is shown as formula (Ⅶ):

H = DH - Z (Ⅶ)

In formula (Ⅶ), H is the vehicle height, DH is the hovering height of the unmanned aerial vehicle when the images were acquired, and Z is the distance between the binocular camera and the top of the vehicle.
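Formula (Ⅶ) closes the pipeline; the hover height and roof distance below are illustrative numbers only.

```python
def vehicle_height(hover_height_m, camera_to_roof_m):
    """Formula (VII): H = DH - Z, both measured from the same camera position."""
    return hover_height_m - camera_to_roof_m

# e.g. drone hovering 6.0 m above the road, roof measured 2.2 m below the camera
h = vehicle_height(6.0, 2.2)
```

Since DH is the hover height at the moment the images were acquired, any drift of the unmanned aerial vehicle between frames would bias H, which is one reason step six selects a single shortest distance rather than averaging.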
Finally, the above embodiments are intended only to illustrate the technical solutions and ideas of the invention, not to limit them. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from their spirit and scope, all of which should be covered by the claims of the invention.
Claims (9)
1. A method for measuring the height of a vehicle in a dark environment based on active binocular vision, in which an unmanned aerial vehicle (1) carrying a binocular camera (2) and a line laser (3) hovers above the top of the vehicle to measure the vehicle height, characterized in that the method comprises the following steps carried out in sequence:
firstly, calibrating a binocular Camera (2) carried by an unmanned aerial vehicle (1) by utilizing a Camera Calibration tool box (Stereo Camera Calibration Toolbox) in MATLAB, respectively obtaining internal parameters and external parameters of a left Camera and a right Camera of the binocular Camera (2) through Calibration,
wherein the internal parameters comprise the normalized focal length αx of the camera in the u-axis direction of the image pixel coordinate system, the normalized focal length αy of the camera in the v-axis direction of the image pixel coordinate system, and the coordinates (u0, v0) of the optical center of the camera in the image pixel coordinate system; the external parameters comprise a 3 × 3 rotation matrix R and a 3 × 1 translation vector t;
respectively obtaining the coordinates of any pixel point in the image pixel coordinate systems of the left camera and the right camera by utilizing the internal parameters and the external parameters of the left camera and the right camera of the calibrated binocular camera (2) and the conversion matrix between the image pixel coordinate system and the world coordinate system;
secondly, a laser plane is emitted by a line laser (3) carried by the unmanned aerial vehicle (1) and projected to the top of the vehicle, a bright light parallel to the transverse axis of the vehicle is generated at the top of the vehicle, and part of the scene on the top of the vehicle is illuminated by the light to be displayed;
step three, keeping the unmanned aerial vehicle (1) hovering over the top of the vehicle to enable the vehicle in the process of traveling to be located in the visual range of the binocular camera (2), and continuously collecting a plurality of vehicle light bar images from the head to the tail of the vehicle and obtaining coordinates of each pixel point in each image by the binocular camera (2) carried by the unmanned aerial vehicle (1) after calibration in the process of traveling of the vehicle;
step four, preprocessing a plurality of images obtained in the step three, wherein the preprocessing of the images comprises bilateral filtering, image segmentation and gray level transformation;
step five, taking the laser light stripe as a one-dimensional feature, extracting a central point of the laser light stripe in the left camera image, screening a matching point of the central point of the laser light stripe in the right camera image by adopting epipolar line geometric constraint, and calculating and obtaining a coordinate difference value, namely a parallax value, of the central point and the matching point in the left camera image and the right camera image;
step six, according to the triangulation principle of parallel binocular vision, calculating, from the disparity value obtained in step five and the depth calculation formula, the distance between the vehicle-top scene point where the laser-stripe center lies and the binocular camera (2), and taking the shortest such distance across the multiple images as the distance between the binocular camera (2) and the top of the vehicle;
and step seven, calculating and obtaining the height of the vehicle according to a calculation formula of the vehicle height by using the hovering height of the unmanned aerial vehicle (1) during image acquisition and the distance between the binocular camera (2) and the top of the vehicle obtained in the step six.
2. The method for measuring the height of a vehicle in a dark environment based on active binocular vision according to claim 1, characterized in that in step one, the conversion matrix between the image pixel coordinate system and the world coordinate system is shown as formula (Ⅰ):

Zc·[u, v, 1]ᵀ = [αx 0 u0; 0 αy v0; 0 0 1]·[R t]·[X, Y, Z, 1]ᵀ (Ⅰ)

In formula (Ⅰ), (u, v) are the coordinates of any pixel point in the image pixel coordinate system; αx = f/dx and αy = f/dy, where f is the focal length of the camera and dx, dy are the physical sizes of a pixel in the x-axis and y-axis directions of the image physical coordinate system; (X, Y, Z) are the coordinates of (u, v) in the world coordinate system; Zc is the depth of the point in the camera coordinate system.
3. The method for measuring the height of a vehicle in a dark environment based on active binocular vision according to claim 1, characterized in that in step two, the selected line laser is a red-light laser.
4. The method for measuring the height of a vehicle in a dark environment based on active binocular vision according to claim 1, characterized in that the bilateral filter model for the bilateral filtering in step four is shown as formula (Ⅱ):

f̂(i, j) = Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n)·f(m, n) / Σ(m,n)∈Ωr,i,j ωs(m, n)·ωg(m, n) (Ⅱ)

where ωs(m, n) = exp(-((m - i)² + (n - j)²)/(2σs²)) and ωg(m, n) = exp(-(f(m, n) - f(i, j))²/(2σg²)).

In formula (Ⅱ), f(i, j) is the gray value of the noise image f at coordinate (i, j); f̂(i, j) is the gray value of the filtered image f̂ at coordinate (i, j); r is the filter window radius; Ωr,i,j is the set of pixel coordinates in the square region centered on (i, j) with side length (2r + 1); f(m, n) is the gray value of the noise image f at coordinate (m, n); ωs(m, n) is the spatial weight at coordinate (m, n); ωg(m, n) is the gray-similarity weight at coordinate (m, n); σs is the spatial standard deviation and σg is the gray standard deviation.
5. The method for measuring the height of a vehicle in a dark environment based on active binocular vision according to claim 1, characterized in that the image segmentation in step four adopts the difference image method, and the laser light bar image after difference-image processing is shown as formula (Ⅲ):

Il(x,y)=Im(x,y)-Ib(x,y) (Ⅲ)

In formula (Ⅲ), Il(x, y) is the laser light bar image after difference-image processing, Im(x, y) is the image captured when the laser is projected onto the vehicle body, and Ib(x, y) is the background image.
6. The method for measuring the height of the vehicle in the dark environment based on the active binocular vision as claimed in claim 1, wherein the method comprises the following steps: the gray level conversion in the fourth step is to convert the color image into a gray level image, and the conversion formula is shown as the formula (IV):
Gray=0.3R+0.59G+0.11B (Ⅳ)
In formula (IV), Gray is the gray value, and R, G, and B are the values of the three channels of the color image.
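Formula (IV) applied to a whole image can be sketched as one weighted dot product over the channel axis:

```python
import numpy as np

def to_gray(rgb):
    """Formula (IV): Gray = 0.3 R + 0.59 G + 0.11 B, applied per pixel to an
    (H, W, 3) image; the result is kept as float to avoid rounding bias."""
    weights = np.array([0.3, 0.59, 0.11])
    return rgb.astype(float) @ weights

g = to_gray(np.array([[[255, 255, 255], [100, 0, 0]]], dtype=np.uint8))
```

White maps to 255 (the weights sum to 1) and a pure-red pixel of value 100 maps to 30.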
7. The method for measuring the height of the vehicle in the dark environment based on the active binocular vision as claimed in claim 1, wherein the method comprises the following steps: the method for extracting the laser light strip center in the step five is the Hessian matrix method, wherein the Hessian matrix is expressed as shown in formula (V):

H(x, y) = [ ∂²g/∂x² ⊗ I(x, y)    ∂²g/∂x∂y ⊗ I(x, y) ]
          [ ∂²g/∂x∂y ⊗ I(x, y)   ∂²g/∂y² ⊗ I(x, y) ]    (V)

In formula (V), g(x, y) is a two-dimensional Gaussian convolution template, ⊗ denotes convolution, and I(x, y) is an image matrix of the same size as the two-dimensional Gaussian convolution template, centered on the image point (x, y).
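The Hessian of a Gaussian-smoothed image, as used in claim 7, can be sketched numerically: smooth with a separable Gaussian, then take central second differences. The kernel radius, sigma, and finite-difference scheme below are illustrative assumptions; the patent's exact template is not reproduced on this page:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(I, sigma=1.0, radius=3):
    """Separable Gaussian convolution g(x, y) * I(x, y), same-size output,
    edge-replicated borders."""
    k = gaussian_kernel1d(sigma, radius)
    P = np.pad(I.astype(float), radius, mode="edge")
    P = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, P)
    P = np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, P)
    return P

def hessian_at(I, x, y, sigma=1.0):
    """2x2 Hessian of the Gaussian-smoothed image at pixel (x, y) =
    (row, col), via central second differences."""
    S = smooth(I, sigma)
    hxx = S[x + 1, y] - 2 * S[x, y] + S[x - 1, y]
    hyy = S[x, y + 1] - 2 * S[x, y] + S[x, y - 1]
    hxy = (S[x + 1, y + 1] - S[x + 1, y - 1]
           - S[x - 1, y + 1] + S[x - 1, y - 1]) / 4
    return np.array([[hxx, hxy], [hxy, hyy]])
```

In Steger-style center extraction, the eigenvector of this matrix with the largest-magnitude eigenvalue gives the direction normal to the light stripe, along which the subpixel center is refined.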
8. The method for measuring the height of the vehicle in the dark environment based on the active binocular vision as claimed in claim 1, wherein the method comprises the following steps: in the sixth step, the depth calculation formula is as shown in formula (VI):

Z = f · b / d    (VI)

In formula (VI), Z is the distance between the binocular camera and the top of the vehicle; b is the distance between the optical centers of the left and right cameras (the baseline); f is the focal length of the camera; and d is the disparity value.
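The depth relation Z = f·b/d for a rectified stereo pair is a one-liner; the sample numbers below (focal length in pixels, baseline in metres) are illustrative assumptions, and f and d must share the pixel unit for Z to come out in metres:

```python
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Formula (VI): Z = f * b / d for a rectified binocular pair.
    f_pixels and disparity_pixels are both in pixels; Z is in metres."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_pixels * baseline_m / disparity_pixels

# Hypothetical rig: f = 1600 px, baseline 12 cm, measured disparity 32 px
Z = depth_from_disparity(1600, 0.12, 32)
```

These values give Z = 6.0 m; note that depth is inversely proportional to disparity, so small disparities (distant targets) are measured least precisely.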
9. The method for measuring the height of the vehicle in the dark environment based on the active binocular vision as claimed in claim 1, wherein the method comprises the following steps: the formula for calculating the vehicle height in the seventh step is shown as the formula (VII):
H = D_H − Z    (VII)

In formula (VII), H is the vehicle height, D_H is the hovering height of the unmanned aerial vehicle when the image was acquired, and Z is the distance between the binocular camera and the top of the vehicle.
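Formulas (VI) and (VII) combine into the final height computation: the drone hovers at a known height, the binocular camera measures its distance to the vehicle roof, and the difference is the vehicle height. The numbers are illustrative assumptions:

```python
def vehicle_height(hover_height_m, f_pixels, baseline_m, disparity_pixels):
    """Formula (VII): H = D_H - Z, with Z from formula (VI).
    hover_height_m is the UAV hover height D_H when the image was taken."""
    Z = f_pixels * baseline_m / disparity_pixels  # formula (VI)
    return hover_height_m - Z

# Hypothetical: drone hovering at 8 m, f = 1600 px, baseline 12 cm, d = 32 px
H = vehicle_height(8.0, 1600, 0.12, 32)
```

Here Z = 6.0 m, so the measured vehicle height is 2.0 m.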
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110452294.1A CN113091627B (en) | 2021-04-26 | 2021-04-26 | Method for measuring vehicle height in dark environment based on active binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113091627A CN113091627A (en) | 2021-07-09 |
CN113091627B true CN113091627B (en) | 2022-02-11 |
Family
ID=76679897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110452294.1A Expired - Fee Related CN113091627B (en) | 2021-04-26 | 2021-04-26 | Method for measuring vehicle height in dark environment based on active binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113091627B (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6773573B2 (en) * | 2017-01-25 | 2020-10-21 | 株式会社トプコン | Positioning device, position identification method, position identification system, position identification program, unmanned aerial vehicle and unmanned aerial vehicle identification target |
US20180257780A1 (en) * | 2017-03-09 | 2018-09-13 | Jeffrey Sassinsky | Kinetic unmanned aerial vehicle flight disruption and disabling device, system and associated methods |
CN108594211A (en) * | 2018-04-11 | 2018-09-28 | 沈阳上博智像科技有限公司 | Determine device, method and the movable equipment of obstacle distance |
CN208433688U (en) * | 2018-06-28 | 2019-01-25 | 洛阳视距智能科技有限公司 | A kind of automatic intelligent inspection device of electric inspection process |
CN109191504A (en) * | 2018-08-01 | 2019-01-11 | 南京航空航天大学 | A kind of unmanned plane target tracking |
CN110887486B (en) * | 2019-10-18 | 2022-05-20 | 南京航空航天大学 | Unmanned aerial vehicle visual navigation positioning method based on laser line assistance |
CN112053434B (en) * | 2020-09-28 | 2022-12-27 | 广州极飞科技股份有限公司 | Disparity map generation method, three-dimensional reconstruction method and related device |
2021
- 2021-04-26 CN CN202110452294.1A patent/CN113091627B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN113091627A (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN104848851B (en) | Intelligent Mobile Robot and its method based on Fusion composition | |
CN111553252B (en) | Road pedestrian automatic identification and positioning method based on deep learning and U-V parallax algorithm | |
CN110647850A (en) | Automatic lane deviation measuring method based on inverse perspective principle | |
CN105825203B (en) | Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods | |
CN107392929B (en) | Intelligent target detection and size measurement method based on human eye vision model | |
CN107248159A (en) | A kind of metal works defect inspection method based on binocular vision | |
CN110084243B (en) | File identification and positioning method based on two-dimensional code and monocular camera | |
CN111047695B (en) | Method for extracting height spatial information and contour line of urban group | |
CN109842756A (en) | A kind of method and system of lens distortion correction and feature extraction | |
CN113052903A (en) | Vision and radar fusion positioning method for mobile robot | |
CN112329747B (en) | Vehicle parameter detection method based on video identification and deep learning and related device | |
CN111272139B (en) | Monocular vision-based vehicle length measuring method | |
CN105913013A (en) | Binocular vision face recognition algorithm | |
CN111046843A (en) | Monocular distance measurement method under intelligent driving environment | |
CN114331986A (en) | Dam crack identification and measurement method based on unmanned aerial vehicle vision | |
CN106446785A (en) | Passable road detection method based on binocular vision | |
CN114972177A (en) | Road disease identification management method and device and intelligent terminal | |
CN106709432B (en) | Human head detection counting method based on binocular stereo vision | |
CN113091627B (en) | Method for measuring vehicle height in dark environment based on active binocular vision | |
CN109815966A (en) | A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm | |
CN115690743A (en) | Airport gate stand intrusion detection method based on deep learning | |
CN113219472B (en) | Ranging system and method | |
CN110956640B (en) | Heterogeneous image edge point detection and registration method | |
CN112785647A (en) | Three-eye stereo image detection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20220211