CN115984335A - Method for acquiring characteristic parameters of fog drops based on image processing


Info

Publication number
CN115984335A
Authority
CN
China
Prior art keywords
image
fog
fogdrops
calculating
fogdrop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310265163.1A
Other languages
Chinese (zh)
Other versions
CN115984335B (en)
Inventor
兰玉彬
关润洪
陈盛德
邱幸妍
孙文昊
甘广强
陈乐君
廖玲君
赵英杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN202310265163.1A
Publication of CN115984335A
Application granted
Publication of CN115984335B
Legal status: Active
Anticipated expiration: pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for acquiring fog drop characteristic parameters based on image processing, comprising the following steps: acquiring a fog drop image of a target area; performing image preprocessing on the fog drop image; screening out a characteristic region from the image preprocessing result and performing matching calculation on the matchable fog drops in the characteristic region, and acquiring the number of fog drops, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the image preprocessing result; and calculating the movement speed of the matched fog drops from the result of the matching calculation and the coordinate positions of the matched fog drops. The method does not interfere with the rotor wind field or the motion of the fog drops, and is suitable for short-duration, small-range wind-field spraying characteristic tests.

Description

Method for acquiring characteristic parameters of fog drops based on image processing
Technical Field
The invention relates to the technical field of precision agricultural aviation pesticide application, and in particular to a method for acquiring fog drop characteristic parameters based on image processing.
Background
The application of chemical pesticides is currently the most important and effective means of pest control used in China.
With the rapid development of China's agricultural aviation industry, pesticide application by plant protection unmanned aerial vehicles (UAVs) offers high operating efficiency, strong adaptability, and low environmental pollution. However, during the application process the rotor wind field and external wind fields cause the pesticide droplets to drift and deposit, which degrades the application effect of the plant protection UAV. The prior art has the following problems:
1. During spraying, the deposition effect of pesticide fog drops is an important index for users, and drift of the pesticide is a main cause of low effective pesticide utilization and of environmental pollution. The prior art focuses mainly on analyzing the influence of droplet particle size on deposition and drift, and on reducing drift during application by changing operating parameters to alter the droplet size; it ignores the fact that the velocity of a fog drop itself also strongly affects its deposition motion.
2. When analyzing fog drop deposition and drift, the drift rate is measured with water-sensitive paper, wire sampling, and similar means, and is obtained by measuring fog drop parameters on the water-sensitive paper; however, the pesticide fog drops diffuse into the water-sensitive paper, so the measured drift rate or deposition amount carries a certain error.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a method for acquiring fog drop characteristic parameters based on image processing, where the calculated characteristic parameters comprise the fog drop size, the fog drop position coordinates, and the fog drop movement speed. The invention does not interfere with the rotor wind field or the motion of the fog drops, is suitable for short-duration, small-range wind-field spraying characteristic tests, and solves the problems described in the background art.
The invention provides a method for acquiring fog drop characteristic parameters based on image processing, which comprises the following steps:
acquiring a fog drop image of a target area;
performing image preprocessing on the fog drop image;
screening out a characteristic region from the image preprocessing result, performing matching calculation on the matchable fog drops in the characteristic region, and acquiring the number of fog drops, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the image preprocessing result;
and calculating the movement speed of the matched fog drops from the result of the matching calculation and the coordinate positions of the matched fog drops.
Preferably, the image preprocessing of the fog drop image comprises:
acquiring an original fog drop color image;
converting the original color image into a binary image by fixed-threshold binarization;
performing a dilation operation on the binary image;
and performing a Laplacian image-enhancement operation on the dilated image.
Preferably, the dilation operation comprises: first defining a 3×3 cross-shaped structuring element;
traversing every pixel of the binary image and aligning the origin of the structuring element with the pixel's 3×3 neighborhood;
if the pixel's value is 0, comparing the pixel's 3×3 neighborhood with the structuring element;
and if the neighborhood contains a 1 at a position where the structuring element is 1, resetting the pixel's value to 1.
Preferably, the matching calculation of the matchable fog drops in the characteristic region comprises the following steps:
inputting two adjacent preprocessed images and performing Harris corner detection on the first image to obtain candidate corners;
calculating the relevant information of the candidate corners and building a feature description for each candidate corner;
according to the feature descriptions, finding the corner pairs with the highest matching degree between the first image and the second image by normalized cross-correlation;
and displaying the matched fog drop result and returning the relevant information of the corresponding matched corners.
Preferably, performing Harris corner detection on the first image to obtain the candidate corners comprises sliding a convolution window over each point of the first image and translating it by one pixel upwards, downwards, leftwards, or rightwards;
for each pixel (x, y) in the image, calculating the covariance matrix M of the gradient map within a 9×9 convolution window, using a Sobel operator of size 9;
and setting the constant of the corner response function R to 0.06, and judging points whose response R is greater than 0.04 to be corners.
Preferably, identifying the fog drops and extracting their relevant information for the fog drop characteristic parameters specifically comprises: after identifying each fog drop contour with the findContours function, labelling the fog drops from left to right and top to bottom with a labelling algorithm;
calculating the number of pixels within each fog drop contour with the countNonZero function;
extracting the bounding-rectangle coordinates of each fog drop contour with the boundingRect function and calculating the centre position of the fog drop;
and calculating the movement speed of the matchable fog drops, namely calculating the motion speed of the matched fog drops from the result of the neighborhood fog drop matching calculation and the coordinate positions of the matched fog drops.
Preferably, the fog drop size in the fog drop characteristic parameters can be calculated by the following formulas:
M = L / N   D = (P * M) / a
wherein M is the actual length of one pixel, in cm;
L is the grid size used in grid calibration, in cm;
N is the number of pixels the grid occupies in the image, in pixels;
D is the original fog drop size, in cm;
P is the number of pixels the fog drop occupies in the image, in pixels;
and a is a relation coefficient.
Preferably, the movement speed of the matched pairs of fog drops can be obtained by the following formulas:
S = M * √((X1 - X2)² + (Y1 - Y2)²)
wherein S is the actual distance travelled by the fog drop; X1, X2, Y1, Y2 are taken from the fog drop centre coordinate positions obtained when calculating the fog drop characteristic parameters;
M is the actual length of one pixel, in cm, calculated in the same way as in the fog drop original-size formula;
V = S / T
wherein V is the fog drop movement speed, in m/s; S is the actual distance travelled by the fog drop, in cm; and T is the time interval between the two images, in s.
The method acquires a fog drop image of a target area; performs image preprocessing on the fog drop image; screens out a characteristic region from the preprocessing result, performs matching calculation on the matchable fog drops in that region, and obtains the number of fog drops, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the preprocessing result; and calculates the movement speed of the matched fog drops from the matching result and the coordinate positions of the matched fog drops. High-speed photography, processed and analyzed with image processing and computer vision techniques, thereby yields the characteristic parameters of the spray during spraying, providing a new method for studying the actual motion of fog drops during spraying and for calculating fog drop deposition and drift.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can evidently obtain other drawings from these drawings without inventive labor.
FIG. 1 is a schematic flow chart of a method for acquiring characteristic parameters of fog drops based on image processing according to the present invention;
FIG. 2 is a diagram showing the connecting lines between matched corner points in the neighborhood fog drop matching step;
FIG. 3 is a schematic diagram of outputting the minimum rotated rectangle and calculating the coordinate position of its centre point in the fog drop characteristic parameter calculation step.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all directional indicators in the embodiments of the present invention (such as up, down, left, right, front, and rear) are used only to explain the relative positional relationships, motion situations, and the like between components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indicator changes accordingly.
In addition, descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may also be combined with one another, but only where the combination can be realized by a person skilled in the art; where technical solutions are mutually contradictory or cannot be realized, the combination should be considered absent and outside the protection scope of the present invention.
Example 1
Referring to FIG. 1, the method of the invention for acquiring fog drop characteristic parameters based on image processing comprises the following steps:
1. Acquiring a fog drop image of a target area;
specifically, the fog drop image acquisition step: a high-speed camera photographs the fog drop field to acquire a fog drop image of the target area;
2. Performing image preprocessing on the fog drop image;
specifically, the step in which a computer acquires the fog drop image information and performs image preprocessing: the high-speed camera transmits the fog drop image information to the computer for image preprocessing;
3. Screening out a characteristic region from the image preprocessing result, performing matching calculation on the matchable fog drops in the characteristic region, and acquiring the number of fog drops in the fog drop image, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the image preprocessing result;
specifically, the neighborhood fog drop matching calculation step: a characteristic region is screened out from the image preprocessing result and the matchable fog drops within it are matched.
4. Calculating the movement speed of the matched fog drops from the result of the matching calculation on the matchable fog drops in the characteristic region and the coordinate positions of the matched fog drops.
Specifically, the fog drop characteristic parameter calculation step: the number of fog drops, the occupied pixel sizes, and the fog drop coordinate positions are obtained from the image preprocessing result, and the movement speed of the matched fog drops is calculated from the result of the neighborhood fog drop matching calculation and the coordinate positions of the matched fog drops.
In summary, the method acquires a fog drop image of a target area; performs image preprocessing on it; screens out a characteristic region from the preprocessing result, matches the matchable fog drops in that region, and obtains the number of fog drops, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the preprocessing result; and calculates the movement speed of the matched fog drops from the matching result and the coordinate positions. High-speed photography combined with image processing and computer vision thus yields the characteristic parameters of the spray during spraying and provides a new method for studying the actual motion of fog drops during spraying and for calculating fog drop deposition and drift.
The first step is implemented as follows: a non-stroboscopic light source illuminates the shooting area, the high-speed camera equipment is calibrated with a grid, and the shooting frame rate is then set to acquire the fog drop images.
In this embodiment, the high-speed camera is placed in the lower-left area of the spraying plane, 40 cm away from it, and the actually photographed spray area measures 5 cm × 8 cm; grid paper with 1 cm × 1 cm cells is used for calibration; the frame rate F is set to 2000 frames per second, and spraying lasts 3 s.
In the second step, the computer acquires the fog drop image information and performs image preprocessing.
The fog drop images can be transmitted to the computer over standard interfaces such as USB or twisted pair.
Preprocessing the original fog drop image mainly comprises converting the original color image acquired by the computer into a binary image by fixed-threshold binarization, performing a dilation operation on the binary image, and performing a Laplacian image-enhancement operation on the dilated image. In further detail, preprocessing the original fog drop image comprises the following steps:
and carrying out preprocessing analysis on each acquired frame image by using an image processing technology according to the quantity and distribution characteristics of the fog drops in the image. Firstly, a fixed threshold value of 127 is set for an acquired 24-bit RGB original color image, and the original color image is converted into a binary image.
Then a 3×3 dilation operation is performed on the binary image: a 3×3 cross-shaped structuring element is defined, every pixel of the image is traversed, and the pixel's 3×3 neighborhood is aligned with the origin of the structuring element; if the pixel's value is 0, its 3×3 neighborhood is compared with the structuring element, and if the neighborhood contains a 1 at a position where the structuring element is 1, the pixel's value is reset to 1.
The dilated image is then enhanced; the chosen enhancement method is the Laplacian algorithm, with the Laplacian operator size ksize set to 5. Laplacian enhancement is a common image-enhancement algorithm that produces very distinct gray-level boundaries, which helps the subsequent extraction of the fog drop contours.
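A minimal OpenCV sketch of this preprocessing pipeline (fixed-threshold binarization at 127, dilation with a 3×3 cross-shaped structuring element, Laplacian enhancement with ksize = 5) follows. Subtracting the Laplacian from the dilated image is one common form of Laplacian sharpening and is an assumption here, since the text does not specify the exact enhancement formula; all function and variable names are illustrative.

    import cv2
    import numpy as np

    def preprocess(frame_bgr):
        """Binarize, dilate, and Laplacian-enhance one fog drop frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Fixed-threshold binarization at 127, as described in the embodiment.
        _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
        # Dilation with a 3x3 cross-shaped structuring element.
        kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        dilated = cv2.dilate(binary, kernel)
        # Laplacian with operator size ksize=5; subtracting it from the image
        # is one common sharpening variant (an assumption, see above).
        lap = cv2.Laplacian(dilated, cv2.CV_16S, ksize=5)
        enhanced = cv2.convertScaleAbs(cv2.subtract(np.int16(dilated), lap))
        return enhanced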
On the basis of the above embodiment, the third step, the neighborhood fog drop matching calculation, proceeds as follows:
First, two adjacent preprocessed images are input and the corner points (pixels) that are easiest to identify are sought as detectors; this step extracts them with the Harris corner detection method. The principle is to slide a convolution window over each point of the image and translate it by one pixel upwards, downwards, leftwards, or rightwards; if the gray values within the window change strongly, the region covered by the window is considered to contain a corner. In the concrete implementation, for each pixel (x, y) in the image, the covariance matrix M of the gradient map is computed within a 9×9 convolution window using a Sobel operator of size 9; the constant of the corner response function R is set to 0.06, and points with R > 0.04 are judged to be corners.
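This corner search maps naturally onto OpenCV's cornerHarris. The sketch below assumes the 9×9 neighborhood corresponds to blockSize = 9, the size-9 Sobel to ksize = 9, 0.06 to the Harris constant k, and 0.04 to a threshold on the response normalized by its maximum; the original wording leaves these mappings ambiguous, so treat them as assumptions.

    import cv2
    import numpy as np

    def harris_candidates(gray, k=0.06, thresh=0.04):
        """Return (x, y) coordinates whose Harris response exceeds thresh."""
        # blockSize=9: 9x9 neighborhood; ksize=9: size-9 Sobel for gradients.
        response = cv2.cornerHarris(np.float32(gray), blockSize=9, ksize=9, k=k)
        response = response / (response.max() + 1e-12)  # normalize to [0, 1]
        ys, xs = np.nonzero(response > thresh)
        return list(zip(xs.tolist(), ys.tolist())), response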
Second, the relevant information of the candidate corners is calculated and the corner feature descriptions are built. The further steps are: a corner threshold of 0.1 is set, and the Harris response image from the previous step is searched for candidate corners above this threshold; once found, the candidate corners' coordinates are stored in an array, their Harris response values are obtained, and the corners are sorted by response value.
The best Harris points among the candidate corners are then found as follows: the minimum pixel separation between corners and from the image boundary is set as min_dist, with min_dist = 10; only corner points whose separation exceeds min_dist are regarded as optimal Harris points.
finding the best HARRIS point, but the HARRIS corner detection method does not provide a method for matching corners according to the information of the corners, so that a descriptor is extracted as the feature of each corner. The corner descriptors subjected to Harris corner detection are typically pixel block information of its surrounding images.
Thus, in this embodiment, with the width wid set to 9, the values of the (2 × wid + 1)-pixel patch around each optimal Harris point are returned as its corner descriptor.
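A sketch of this descriptor step under the parameters just given (wid = 9, min_dist = 10): corners too close to the border are discarded, and the flattened (2·wid + 1) × (2·wid + 1) gray-level patch around each surviving corner becomes its descriptor. Names are illustrative.

    import numpy as np

    def patch_descriptors(gray, corners, wid=9, min_dist=10):
        """Keep corners at least min_dist pixels from the border and return
        the flattened (2*wid+1)x(2*wid+1) patch around each as its descriptor."""
        h, w = gray.shape
        kept, descs = [], []
        for x, y in corners:
            if min_dist <= x < w - min_dist and min_dist <= y < h - min_dist:
                patch = gray[y - wid:y + wid + 1, x - wid:x + wid + 1]
                kept.append((x, y))
                descs.append(patch.astype(np.float32).ravel())
        return kept, descs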
Third, the matching degree of paired corner descriptors is calculated; this step uses normalized cross-correlation (NCC), which expresses the degree of correlation between the normalized objects to be matched.
The concrete matching procedure is as follows: using the feature descriptors from the preceding steps, NCC selects, for each corner descriptor in the first image, the corner with the highest matching degree in the second image, yielding the matched corner pairs.
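The NCC score between two zero-mean, unit-norm patch vectors is simply their dot product, so the matching reduces to a matrix product, as in this sketch (a real implementation might add a mutual-best-match check, which the text does not mention):

    import numpy as np

    def ncc_match(descs1, descs2):
        """For each descriptor in image 1, return the index and NCC score of
        the best-matching descriptor in image 2."""
        def normalize(d):
            d = d - d.mean()
            return d / (np.linalg.norm(d) + 1e-8)
        a = np.stack([normalize(d) for d in descs1])
        b = np.stack([normalize(d) for d in descs2])
        scores = a @ b.T          # pairwise normalized cross-correlation
        return scores.argmax(axis=1), scores.max(axis=1)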
Fourth, the matched fog drop result is displayed and the relevant information of the corresponding matched corners is returned: the two adjacent fog drop images are stitched into a new image, the positions of the successfully matched descriptor pairs are obtained, and a picture with connecting lines between the matched corners is displayed, as shown in FIG. 2.
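The FIG. 2-style display can be produced by stacking the two frames side by side and drawing one line per matched pair, roughly as follows (a sketch; the line colour and width are arbitrary choices):

    import cv2
    import numpy as np

    def draw_matches(img1, img2, pts1, pts2, pairs):
        """Stitch two gray frames horizontally and connect matched corners."""
        canvas = cv2.cvtColor(np.hstack([img1, img2]), cv2.COLOR_GRAY2BGR)
        offset = img1.shape[1]                 # x-shift of the second frame
        for i, j in pairs:                     # index pairs into pts1/pts2
            (x1, y1), (x2, y2) = pts1[i], pts2[j]
            cv2.line(canvas, (int(x1), int(y1)),
                     (int(x2) + offset, int(y2)), (0, 255, 0), 1)
        return canvas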
On the basis of the above embodiment, the fog drop characteristic parameters can then be obtained by calculation. In this embodiment they mainly comprise the number of fog drops in the fog drop image, the pixel size each fog drop occupies, the fog drop coordinate positions, and the movement speed of the matchable fog drops.
In this processing method, the fog drop characteristic parameter calculation step further comprises identifying the fog drops and extracting their relevant information (the number of fog drops, the occupied pixel sizes, the fog drop coordinate positions, and the movement speed of the matchable fog drops). The implementation consists of two parts: identifying and sorting the fog drops, and calculating the fog drop characteristic parameters.
the further operation of identifying and sorting the droplets is: to identify and extract the contour of each droplet in the droplet image, the findContours function is used to identify each droplet contour in the present invention. For each extracted fogdrop contour, using a sort _ controls function to label the fogdrop contours in a regular sequence from left to right and from top to bottom;
Calculating the fog drop characteristic parameters covers the number of fog drops, the pixel size occupied by each fog drop, the fog drop coordinate positions, and the movement speed of the matched fog drops.
As for counting the fog drops: the total number of fog drops in a fog drop image follows directly from the identifying-and-sorting step, so it is not elaborated further here.
As for calculating the pixel size occupied by a fog drop: this embodiment counts the pixels within each fog drop contour with the countNonZero function, which returns the number of pixels whose gray value is not 0 and therefore measures the occupied pixel size quite accurately. Note that because this step operates on the preprocessed image of step two, and the preprocessing (dilation and Laplacian enhancement) enlarges and thickens the fog drops, the pixel size calculated here is not the pixel size occupied by the real fog drop.
In subsequent calculations it was found that a fog drop after image preprocessing is roughly 3 times the size of the fog drop in the original color image, though the exact coefficient still requires further study. The pixel sizes calculated in this step can therefore be used to compare fog drop sizes, while the original fog drop size remains imprecise; since many researchers are nevertheless interested in the original size, the invention provides a formula for estimating it from an estimated relation coefficient:
M = L / N   D = (P * M) / a
wherein M is the actual length of one pixel; L is the grid size used in grid calibration; N is the number of pixels the grid occupies in the image; D is the original fog drop size; P is the number of pixels the fog drop occupies in the image; and a is the relation coefficient.
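A worked computation of these two formulas, with countNonZero supplying P. The grid pixel count and the coefficient a ≈ 3 below are placeholders for calibration values measured as described above, not fixed constants:

    import cv2

    def droplet_size_cm(droplet_mask, grid_cm=1.0, grid_pixels=200.0, a=3.0):
        """Apply M = L / N and D = (P * M) / a to one fog drop's binary mask."""
        M = grid_cm / grid_pixels           # actual length of one pixel, cm
        P = cv2.countNonZero(droplet_mask)  # pixels occupied by the fog drop
        return (P * M) / a                  # estimated original size, cm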
As for calculating the fog drop coordinate position: this embodiment extracts the bounding rectangle of each fog drop contour with the minAreaRect function and calculates the coordinate position of the fog drop centre. minAreaRect computes the minimum rotated rectangle of the fog drop contour, so the centre position of the fog drop is obtained more accurately; it also outputs the coordinates (x, y) of the centre point of the minimum rotated rectangle together with the rectangle's width and height, as shown in FIG. 3.
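In OpenCV, minAreaRect returns ((cx, cy), (w, h), angle) for the minimum-area rotated rectangle, so the centre drops out directly, as in this small sketch:

    import cv2

    def droplet_center(contour):
        """Centre (cx, cy) of a contour's minimum-area rotated rectangle."""
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        return cx, cy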
Finally, the movement speed of the matchable fog drops is calculated: the matched pairs of fog drops are taken from the result of the neighborhood fog drop matching calculation in step three, and their motion speed is calculated from the fog drop characteristic parameter information and the following formulas.
S = M * √((X1 - X2)² + (Y1 - Y2)²)
wherein S is the actual distance travelled by the fog drop; X1, X2, Y1, Y2 are taken from the fog drop centre coordinate positions obtained when calculating the fog drop characteristic parameters; M is the actual length of one pixel, calculated in the same way as in the original-size formula.
V = S / T
wherein V is the fog drop movement speed, in m/s; S is the actual distance travelled by the fog drop, in cm; and T is the time interval between the two images, in s.
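The two speed formulas combine into the sketch below. The explicit cm-to-m conversion is added here so that V comes out in m/s from S in cm, which the text implies but does not spell out; frame_interval_s would be 1/2000 s at the embodiment's frame rate.

    import math

    def droplet_speed_m_per_s(c1, c2, pixel_len_cm, frame_interval_s):
        """V = S / T with S = M * sqrt((X1-X2)^2 + (Y1-Y2)^2), S in cm."""
        (x1, y1), (x2, y2) = c1, c2
        s_cm = pixel_len_cm * math.hypot(x1 - x2, y1 - y2)  # distance, cm
        return (s_cm / 100.0) / frame_interval_s            # speed in m/s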
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for acquiring fog drop characteristic parameters based on image processing, characterized by comprising the following steps:
acquiring a fog drop image of a target area;
performing image preprocessing on the fog drop image;
screening out a characteristic region from the image preprocessing result, performing matching calculation on the matchable fog drops in the characteristic region, and acquiring the number of fog drops, the pixel size occupied by each fog drop, and the fog drop coordinate positions from the image preprocessing result;
and calculating the movement speed of the matched fog drops from the result of the matching calculation and the coordinate positions of the matched fog drops.
2. The method for acquiring fog drop characteristic parameters based on image processing according to claim 1, wherein the image preprocessing of the fog drop image comprises:
acquiring an original fog drop color image;
converting the original color image into a binary image by fixed-threshold binarization;
performing a dilation operation on the binary image;
and performing a Laplacian image-enhancement operation on the dilated image.
3. The method for acquiring fog drop characteristic parameters based on image processing according to claim 2, wherein the dilation operation comprises: first defining a 3×3 cross-shaped structuring element;
traversing every pixel of the binary image and aligning the origin of the structuring element with the pixel's 3×3 neighborhood;
if the pixel's value is 0, comparing the pixel's 3×3 neighborhood with the structuring element;
and if the neighborhood contains a 1 at a position where the structuring element is 1, resetting the pixel's value to 1.
4. The method for acquiring fog drop characteristic parameters based on image processing according to claim 2, wherein the matching calculation of the matchable fog drops in the characteristic region comprises:
inputting two adjacent preprocessed images and performing Harris corner detection on the first image to obtain candidate corners;
calculating the relevant information of the candidate corners and building a feature description for each candidate corner;
according to the feature descriptions, finding the corner pairs with the highest matching degree between the first image and the second image by normalized cross-correlation;
and displaying the matched fog drop result and returning the relevant information of the corresponding matched corners.
5. The method for acquiring fog drop characteristic parameters based on image processing according to claim 4, wherein performing Harris corner detection on the first image to obtain the candidate corners comprises sliding a convolution window over each point of the first image and translating it by one pixel upwards, downwards, leftwards, or rightwards;
for each pixel (x, y) in the image, calculating the covariance matrix M of the gradient map within a 9×9 convolution window, using a Sobel operator of size 9;
and setting the constant of the corner response function R to 0.06, and judging points whose response R is greater than 0.04 to be corners.
6. The method for acquiring fog drop characteristic parameters based on image processing according to claim 3, characterized in that identifying the fog drops and extracting their relevant information comprises the following steps: after identifying each fog drop contour with the findContours function, labelling the fog drops from left to right and top to bottom with a labelling algorithm;
calculating the number of pixels within each fog drop contour with the countNonZero function;
extracting the bounding-rectangle coordinates of each fog drop contour with the boundingRect function and calculating the centre position of the fog drop;
and calculating the movement speed of the matchable fog drops, namely calculating the motion speed of the matched fog drops from the result of the neighborhood fog drop matching calculation and the coordinate positions of the matched fog drops.
7. The method for acquiring fog drop characteristic parameters based on image processing according to claim 6, wherein the fog drop size in the fog drop characteristic parameters can be calculated by the following formulas:
M = L / N   D = (P * M) / a
wherein M is the actual length of one pixel, in cm;
L is the grid size used in grid calibration, in cm;
N is the number of pixels the grid occupies in the image, in pixels;
D is the original fog drop size, in cm;
P is the number of pixels the fog drop occupies in the image, in pixels;
and a is a relation coefficient.
8. The method for acquiring fog drop characteristic parameters based on image processing according to claim 6, characterized in that the movement speed of the matched pairs of fog drops can be obtained by the following formulas:
S = M * √((X1 - X2)² + (Y1 - Y2)²)
wherein S is the actual distance travelled by the fog drop; X1, X2, Y1, Y2 are taken from the fog drop centre coordinate positions obtained when calculating the fog drop characteristic parameters;
M is the actual length of one pixel, in cm, calculated in the same way as in the fog drop original-size formula;
V = S / T
wherein V is the fog drop movement speed, in m/s; S is the actual distance travelled by the fog drop, in cm; and T is the time interval between the two images, in s.
CN202310265163.1A 2023-03-20 2023-03-20 Method for acquiring characteristic parameters of fog drops based on image processing Active CN115984335B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310265163.1A | 2023-03-20 | 2023-03-20 | Method for acquiring characteristic parameters of fog drops based on image processing

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310265163.1A | 2023-03-20 | 2023-03-20 | Method for acquiring characteristic parameters of fog drops based on image processing

Publications (2)

Publication Number | Publication Date
CN115984335A | 2023-04-18
CN115984335B | 2023-06-23

Family

ID=85970865

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310265163.1A (Active; granted as CN115984335B) | Method for acquiring characteristic parameters of fog drops based on image processing | 2023-03-20 | 2023-03-20

Country Status (1)

Country Link
CN (1) CN115984335B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
US20130243250A1 (en) * 2009-09-14 2013-09-19 Trimble Navigation Limited Location of image capture device and object features in a captured image
US20160196654A1 (en) * 2015-01-07 2016-07-07 Ricoh Company, Ltd. Map creation apparatus, map creation method, and computer-readable recording medium
CN106981073A (en) * 2017-03-31 2017-07-25 中南大学 A kind of ground moving object method for real time tracking and system based on unmanned plane
US20170278258A1 (en) * 2011-08-31 2017-09-28 Apple Inc. Method Of Detecting And Describing Features From An Intensity Image
CN107657626A (en) * 2016-07-25 2018-02-02 浙江宇视科技有限公司 The detection method and device of a kind of moving target
CN112164037A (en) * 2020-09-16 2021-01-01 天津大学 MEMS device in-plane motion measurement method based on optical flow tracking
CN113706566A (en) * 2021-09-01 2021-11-26 四川中烟工业有限责任公司 Perfuming spray performance detection method based on edge detection
CN115272403A (en) * 2022-06-10 2022-11-01 南京理工大学 Fragment scattering characteristic testing method based on image processing technology
CN115760893A (en) * 2022-11-29 2023-03-07 江苏大学 Single droplet particle size and speed measuring method based on nuclear correlation filtering algorithm

Also Published As

Publication Number | Publication Date
CN115984335B | 2023-06-23

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant