CN111024980A - Tomographic particle image velocimetry method near a free interface - Google Patents

Tomographic particle image velocimetry method near a free interface

Info

Publication number
CN111024980A
Authority
CN
China
Prior art keywords
interface
image
marker
brightness
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911306262.XA
Other languages
Chinese (zh)
Other versions
CN111024980B (en)
Inventor
Cunbiao Li (李存标)
Junwei Chen (陈钧伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CN201911306262.XA
Publication of CN111024980A
Application granted
Publication of CN111024980B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01P — MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P5/00 — Measuring speed of fluids, e.g. of air stream; Measuring speed of bodies relative to fluids, e.g. of ship, of aircraft
    • G01P5/18 — Measuring speed of fluids by measuring the time taken to traverse a fixed distance
    • G01P5/20 — Measuring speed of fluids by measuring the time taken to traverse a fixed distance using particles entrained by a fluid stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses a tomographic particle image velocimetry method near a free interface. A marker is arranged near the free interface for subsequent interface-position identification, and tracer particles are seeded for subsequent velocity-field calculation. After the interface markers and the tracer particles in the particle images are distinguished, a first image preprocessing is applied to the original particle images to obtain preprocessed images for interface identification, and a second image preprocessing is applied to obtain preprocessed images for velocity-field calculation. Finally, the interface position is identified from the former and the velocity field is computed from the latter. This eliminates both the interference of tracer particles with the interface-position identification and the interference of interface markers with the velocity-field calculation, so that a velocity measurement of higher accuracy is obtained while the free-interface position is accurately identified. The method is particularly suitable for measurement at a gas-liquid or liquid-liquid interface.

Description

Tomographic particle image velocimetry method near a free interface
Technical Field
The invention relates to the technical field of particle image velocimetry, and in particular to a tomographic particle image velocimetry method near a free interface.
Background
Tomographic particle image velocimetry (tomographic PIV) is a PIV technique developed in recent years. In 2006, Elsinga et al. published the first tomographic PIV results (Elsinga G E, van Oudheusden B W and Scarano F 2006 Experimental assessment of tomographic-PIV accuracy, 13th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal). Tomographic PIV obtains a three-dimensional velocity field within a volumetric region using a system of multiple cameras (generally three or more) and a light source illuminating the measurement region; compared with conventional PIV, it raises the measurement dimension from two to three.
Compared with measurements away from boundaries, tomographic measurement near a free interface faces the following problems: the interface position must be determined; the interface reflects and refracts the illumination light; and the lack of particles near the interface leads to erroneous velocity-field results. For these reasons, existing tomographic PIV methods are difficult to apply to particle image velocimetry near a free interface.
Sung Jin Im, Young Jin Jeon and Hyung Jin Sung, in "Tomo-PIV measurement of flow around an arbitrarily moving body with surface reconstruction", disclose a method of printing a pattern on a deformable object and obtaining the position of the object surface at each measurement instant from the change of the pattern texture. The method is limited to solid objects with a pre-printed surface pattern; during an experiment the surface pattern may be difficult to distinguish from the tracer particles, which degrades the particle-image quality and the accuracy of the interface-identification result. The method also places extremely high demands on the interface-position algorithm and is therefore unsuitable for wide adoption.
Another approach is to determine the interface position from parameters of the tomographic PIV calculation itself (such as the cross-correlation peak). Since the far side of the interface contains virtually no tracer particles compared with the measured side, its signal strength and signal-to-noise ratio in the reconstructed spatial distribution are low, and its correlation coefficient should be lower than on the particle side. Because no marker directly locates the interface position, this approach has poor measurement accuracy; it is generally used only for interfaces with a small rate of change, performs poorly when the interface changes rapidly or reflects light, and is therefore unsuitable for a free interface.
Disclosure of Invention
The invention provides a tomographic particle image velocimetry method near a free interface, aiming to solve the technical problem that existing tomographic PIV methods are difficult to apply to free-interface measurement.
According to one aspect of the invention, a tomographic particle image velocimetry method near a free interface is provided, comprising the following steps:
step S1: arranging at least three cameras and an illumination light source, calibrating the camera positions, arranging interface markers and seeding tracer particles, and then shooting with the cameras to obtain particle images;
step S2: preprocessing the particle images: distinguishing the interface markers and tracer particles in the particle images, obtaining the distribution of the interface markers in the particle images and the motion of the interface markers between two adjacent frames, performing a first image preprocessing on the original particle images to obtain preprocessed images for interface identification, and performing a second image preprocessing on the original particle images to obtain preprocessed images for velocity-field calculation;
step S3: identifying an interface location based on the preprocessed image for interface identification;
step S4: spatial reconstruction and cross-correlation calculations are performed based on the preprocessed images used to calculate the velocity field to obtain the velocity field.
Further, in the step S2, the distribution of the interface marker in the original particle image is obtained specifically through the following steps:
determining the approximate appearing range of the interface marker in the pictures of different cameras by comprehensively utilizing the radius, the brightness and the difference of the moving speed of the interface marker and the tracer particles in the pictures, and then roughly determining the appearing area of the interface marker in the pictures by using a corrosion method and a surrounding area brightness counting method;
performing cross-correlation calculation on two adjacent frames of pictures, wherein the movement range of a query window is limited near the movement range of an interface marker, if an interface marker exists in the area of the query window, the cross-correlation coefficient is close to 1, and if no interface marker exists, the cross-correlation coefficient is close to 0, so that the area where the interface marker exists in the picture can be roughly determined;
and (4) integrating the two bases, judging the range of the interface marker in the picture, obtaining the upper boundary corresponding to the range, and smoothing the position of the upper boundary by using the phase lock average.
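The erosion and brightness-count step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the erosion size, averaging window and threshold are assumed values, and a uniform filter stands in for the surrounding-area brightness count.

```python
import numpy as np
from scipy.ndimage import grey_erosion, uniform_filter

def marker_region_mask(img, erosion_size=5, window=15, thresh=0.2):
    """Roughly locate the interface-marker region in a particle image.

    Grayscale erosion suppresses the small tracer particles while the
    larger, brighter interface markers survive; a local brightness
    average (standing in for the surrounding-area brightness count)
    then flags areas that still carry signal.
    """
    eroded = grey_erosion(img.astype(float), size=(erosion_size, erosion_size))
    local_mean = uniform_filter(eroded, size=window)
    return local_mean > thresh * img.max()
```

A 15×15 bright marker survives the 5×5 erosion, while an isolated 1-pixel tracer is erased, so only the marker neighbourhood is flagged.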
Further, the step S2 of performing the first image preprocessing on the original particle image to obtain a preprocessed image for interface recognition specifically includes the following steps:
step S21: masking the brightness distribution outside the region where the interface markers are present so that its brightness is zero, obtaining the first brightness distribution of each masked frame;
step S22: applying a morphological grayscale opening to each masked frame to obtain the second brightness distribution of each frame;
step S23: obtaining a transformation matrix for the image deformation between two frames based on the motion of the interface markers between the two adjacent frames;
step S24: for images in double-frame exposure mode, using the transformation matrix to warp the second frame to the brightness distribution it would have had at the instant of the first frame, comparing that with the actual brightness distribution of the first frame, and taking the smaller value at each pixel to obtain the preprocessing result for the first frame; the preprocessing result for the second frame is obtained in the same way.
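The per-pixel minimum of step S24 can be sketched as follows. The patent uses a general transformation matrix; here, as a simplifying assumption, the marker motion between the two frames is approximated by a single uniform (dy, dx) shift.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def preprocess_frame1(frame1, frame2, marker_shift):
    """Sketch of step S24 for a pure-translation deformation.

    frame2 is warped back to the frame-1 instant and the pixel-wise
    minimum with frame1 keeps only brightness present in both images,
    which suppresses tracer particles that moved differently from the
    interface markers.
    """
    warped = nd_shift(frame2.astype(float), marker_shift, order=1, mode="constant")
    return np.minimum(frame1.astype(float), warped)
```

A marker that moves with the assumed shift survives the minimum; a tracer particle that moved differently is zeroed out.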
Further, the second image preprocessing performed on the original particle image in step S2 to obtain a preprocessed image for calculating the velocity field specifically includes the following steps:
masking the original particle image with the masking algorithm L′(x, y) = M(x, y) × L(x, y), where L(x, y) is the brightness distribution, and obtaining the first brightness distribution of each masked frame, with
when y < l(x) − d, M(x, y) = 0;
when l(x) − d ≤ y < l(x), M(x, y) = 0.5 × (y − (l(x) − d))/d; when l(x) ≤ y < l(x) + h, M(x, y) = 0.5; when l(x) + h ≤ y < l(x) + h + d, M(x, y) = 0.5 × (l(x) + h + d − y)/d; when y ≥ l(x) + h + d, M(x, y) = 0;
the mask M and the image brightness distribution L are two matrices of the same size; l(x) denotes the upper boundary of the region where the interface markers appear, h the height of the band where interface markers and tracer particles coexist, and d the width of the buffer zone; x and y are the pixel plane coordinates, and h and d are determined by the camera resolution and camera inclination;
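The piecewise mask M(x, y) defined above can be built as follows. The shapes and the convention that the row index y increases downward are assumptions for illustration.

```python
import numpy as np

def interface_mask(shape, l, h, d):
    """Build the mask M(x, y) from the piecewise definition above.

    l is the per-column upper boundary l(x) of the marker region,
    h the height of the marker/tracer coexistence band, and
    d the width of the buffer zone.
    """
    ny, nx = shape
    y = np.arange(ny)[:, None].astype(float)   # row index, shape (ny, 1)
    lx = np.asarray(l, dtype=float)[None, :]   # boundary l(x), shape (1, nx)
    M = np.zeros(shape)
    ramp_in = (y >= lx - d) & (y < lx)         # buffer above the band
    core = (y >= lx) & (y < lx + h)            # coexistence band
    ramp_out = (y >= lx + h) & (y < lx + h + d)  # buffer below the band
    M = np.where(ramp_in, 0.5 * (y - (lx - d)) / d, M)
    M = np.where(core, 0.5, M)
    M = np.where(ramp_out, 0.5 * (lx + h + d - y) / d, M)
    return M
```

The mask ramps linearly from 0 to 0.5 across each buffer of width d and is 0.5 inside the coexistence band, so multiplying by it isolates the marker region at half brightness.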
for images in double-frame exposure mode, warping the masked second frame with the transformation matrix of step S23, and subtracting, pixel by pixel, twice the brightness of the masked and warped second frame from the brightness of the original first frame to obtain the preprocessing result for the first frame; the second frame is processed by the same procedure.
Further, for the case where brightness values become negative after the second image preprocessing, step S2 further comprises the following step:
changing the pixel data type in the image to an unsigned integer type, or setting the brightness of pixels with negative values to zero.
Further, the step S3 specifically includes the following steps:
step S31: reconstructing the preprocessed images for interface identification with a tomographic-PIV spatial reconstruction algorithm, the reconstruction range covering the region where the interface appears;
step S32: dividing the reconstructed three-dimensional space into several layers along the direction of the illumination light, extracting several layers of images along that direction and computing their brightness average;
step S33: binarizing the averaged image and identifying the positions of the interface markers in space;
step S34: repeating steps S32 and S33 for the other frames to obtain the time series of interface-marker positions in space, removing markers with abnormal positions, and then interpolating and smoothing the series to obtain the interface position in three-dimensional space.
Further, the step S3 may alternatively comprise the following steps:
step S31a: identifying the positions of the interface markers in the preprocessed images for interface identification;
step S32a: reconstructing the positions of the interface markers in three-dimensional space using the data from the different cameras;
step S33a: removing interface markers with abnormal positions and obtaining the interface position in three-dimensional space using an interpolation algorithm.
Further, after the preprocessed images for velocity-field calculation are spatially reconstructed in step S4, the spatial brightness distribution is processed as follows:
step S41: masking the reconstructed spatial brightness distribution using the three-dimensional interface position, i.e., setting the brightness of the region on the far side of the interface to zero;
step S42: enhancing image quality near the interface: identifying the voxels occupied by the region on the tracer-particle side and reducing the brightness of voxels in that region that contain no tracer particles;
step S43: adding artificially synthesized virtual particles that move with the interface in the region on the far side of the interface.
Further, a portion of the interface markers may be retained as virtual particles during the second image preprocessing: the brightness of each pixel in the original first frame is reduced by twice the brightness of the corresponding pixel in the second frame after masking, warping and Gaussian blurring, giving the preprocessing result for the first frame; the second frame is processed by the same steps.
Further, arranging the interface markers and tracer particles in step S1 comprises:
for a deformable object, arranging dot-like marks on its surface, the marks being larger than the tracer particles so that they can be distinguished during image processing, and the mark density being lower than the seeding density of the tracer particles;
for a gas-liquid or liquid-liquid interface, adding particles different from the tracer particles as markers near the liquid surface on the liquid side; the marker diameter is larger than that of the tracer particles so that the markers can be distinguished during image processing, and the marker density is lower than the seeding density of the tracer particles.
The invention has the following effects:
the invention relates to a method for measuring the speed of a chromatography particle image near a free interface, which comprises the steps of arranging a marker near the free interface for identifying the position of the subsequent interface, arranging a tracer particle for calculating the subsequent speed field, then distinguishing the marker of the interface and the tracer particle in the particle image, carrying out first image pretreatment on an original particle image to obtain a pretreatment image for identifying the interface, carrying out second image pretreatment on the original particle image to obtain a pretreatment image for calculating the speed field, finally identifying the position of the interface based on the pretreatment image for identifying the interface, carrying out speed field calculation based on the pretreatment image for calculating the speed field, eliminating the interference of the tracer particle in a particle image on the identification result of the interface position and the interference of the marker of the interface on the calculation result of the speed field, and not only obtaining the speed measurement result with higher accuracy, but also can accurately identify the position of the free interface at the same time, and is particularly suitable for the measurement of a gas-liquid interface or a liquid-liquid interface.
Other objects, features and advantages of the present invention, in addition to those described above, will become apparent from the following detailed description with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a tomographic particle image velocimetry method near a free interface according to a preferred embodiment of the present invention.
Fig. 2 is a sub-flowchart of step S1 in fig. 1 according to the preferred embodiment of the present invention.
Fig. 3 is a sub-flowchart of step S2 in fig. 1 according to the preferred embodiment of the present invention.
Fig. 4 is a sub-flowchart of step S3 in fig. 1 according to the preferred embodiment of the present invention.
Fig. 5 is another sub-flowchart of step S3 in fig. 1 according to the preferred embodiment of the present invention.
Fig. 6 is a sub-flowchart of step S4 in fig. 1 according to the preferred embodiment of the present invention.
Detailed Description
Embodiments of the invention are described in detail below with reference to the accompanying drawings, but the invention can be implemented in many different forms defined and covered by the description below.
As shown in fig. 1, a preferred embodiment of the present invention provides a tomographic particle image velocimetry method near a free interface, which is particularly suitable for measurement near a gas-liquid or liquid-liquid interface and can both obtain a velocity measurement of high accuracy and accurately identify the position of the free interface. The method comprises the following steps:
step S1: arranging at least three cameras and an illumination light source, calibrating the camera positions, arranging interface markers and seeding tracer particles, and then shooting with the cameras to obtain particle images;
step S2: preprocessing the particle images: distinguishing the interface markers and tracer particles in the particle images, obtaining the distribution of the interface markers in the particle images and the motion of the interface markers between two adjacent frames, performing a first image preprocessing on the original particle images to obtain preprocessed images for interface identification, and performing a second image preprocessing on the original particle images to obtain preprocessed images for velocity-field calculation;
step S3: identifying an interface location based on the preprocessed image for interface identification;
step S4: spatial reconstruction and cross-correlation calculations are performed based on the preprocessed images used to calculate the velocity field to obtain the velocity field.
In this embodiment, a marker is arranged near the free interface for subsequent interface-position identification and tracer particles are seeded for subsequent velocity-field calculation. After the interface markers and tracer particles in the particle images are distinguished, a first image preprocessing is applied to the original particle images to obtain preprocessed images for interface identification, and a second image preprocessing is applied to obtain preprocessed images for velocity-field calculation; the interface position is then identified from the former and the velocity field computed from the latter. This eliminates both the interference of tracer particles with the interface-position identification and the interference of interface markers with the velocity-field calculation, so that a velocity measurement of higher accuracy is obtained while the free-interface position is accurately identified, which is particularly suitable for measurement at a gas-liquid or liquid-liquid interface. In addition, the images obtained by the cameras serve both the interface measurement and the velocity-field measurement, which reduces the number of cameras required and thereby the complexity and cost of the measurement.
It can be understood that, as shown in fig. 2, the step S1 specifically includes the following steps:
step S11: arranging the cameras and the laser: preferably, three or more cameras are aimed at the intended measurement region near the interface, viewing from the side. To prevent interface fluctuations from blocking the view, and to better locate the interface, the cameras need a certain pitch angle so that the imaged interface appears as a strip in each image and the strips overlap as little as possible. In addition, since the tracer particles and interface markers must be imaged sharply while the illumination source has a certain thickness, the lens aperture must be reduced to increase the depth of field; a tilt adapter (such as a Scheimpflug mount) or tilt lens may be used so that the illuminated particles are imaged sharply in the obliquely arranged cameras.
Step S12: the experimental region is illuminated with planar illumination light having a certain thickness, and either a beam-expanded laser or a non-laser light source such as a light emitting diode may be used. In the case where the interface affects the imaging quality by reflection of the illumination light, the illumination light may be inclined at a certain angle to illuminate the test area, so that the reflected light from the interface is not recorded by the camera.
Step S13: calibrating the position of the camera: the calibration of the camera position using a calibration plate with a regular pattern requires calibration in multiple planes, with as temporary as possible removal of objects in the measurement area or changing of the liquid level. In order to improve the calibration quality, the calibration version should occupy the camera view as much as possible. A transformation function between the spatial reference plane and the camera image and the view angle of each camera everywhere on the reference plane is obtained.
Step S14: arranging an interface marker: the interface marker arrangement is divided into two cases, one is marker arrangement when measuring a velocity field near a deformable object, and the other is marker arrangement when measuring a velocity field near a gas-liquid interface/liquid-liquid interface, and the specific is as follows:
for an object which can be deformed, dot-like marks are arranged on the surface of the object, for example, white matte paint is sprayed on the surface of the object, or particles which can scatter light are added into the object when the object is formed, so that the surface of the object presents a scattered dot-like pattern, the size of the marks is slightly larger than the size of the tracer particles to ensure that the marks are distinguished in image processing, for example, when the diameter of the tracer particles in a picture is about 2 pixels, the size of the marks is preferably about 5 pixels, the arrangement density of the marks is preferably slightly lower than the density of the tracer particles which are scattered during measurement, so that the situation that the tracer particles and an interface marker interfere with each other in the spatial reconstruction process is reduced;
for the gas-liquid interface/liquid-liquid interface, other particles different from the tracer particles can be added in the vicinity of the liquid level of the liquid on the other side of the interface to serve as interface markers, the diameters of the markers are also slightly larger than the diameters of the tracer particles so as to be distinguished, the arrangement density of the markers is preferably slightly lower than the density of the tracer particles spread during measurement, and therefore the situation that the tracer particles and the interface markers interfere with each other in the space reconstruction process is reduced.
In addition, the spatial density of the interface markers needs to be high enough so that when they are reconstructed spatially, points of sufficient density in space can be obtained in order to accurately locate the interface position.
Step S15: acquiring a particle image: the illuminated particle map is photographed using a camera, and the photographing types are mainly classified into two types: double-frame exposure, namely continuously shooting two frames of images by each camera within a very short time interval; multi-frame exposure, i.e. each camera is continuously exposed at the same time interval, and the exposure time interval is very short.
It is understood that, in step S2, the distribution of the interface markers in the original particle image is obtained by the following steps:
determining the approximate appearing range of the interface marker in the pictures of different cameras by comprehensively utilizing the radius, the brightness and the difference of the moving speed of the interface marker and the tracer particles in the pictures, and then roughly determining the appearing area of the interface marker in the pictures by using a corrosion method and a surrounding area brightness counting method; because the brightness and the radius of the interface marker in the picture are both larger than the trace particles, the trace particles can be eliminated through a corrosion algorithm and a surrounding area brightness counting method, and most of the interface marker is reserved, so that the interference of the trace particles on the interface position identification result is eliminated, and the accuracy of interface identification is improved.
Performing cross-correlation calculation on two adjacent frames of pictures, wherein the movement range of a query window is limited near the movement range of an interface marker, if an interface marker exists in the area of the query window, the cross-correlation coefficient is close to 1, and if no interface marker exists, the cross-correlation coefficient is close to 0, so that the area where the interface marker exists in the picture can be roughly determined; meanwhile, the moving relation of the interface marker between two adjacent frames of images is obtained by calculating the cross correlation between the two adjacent frames of images and searching the position of a correlation peak.
And (4) integrating the two bases, judging the range of the interface marker in the picture, obtaining the upper boundary corresponding to the range, and smoothing the position of the upper boundary by using the phase lock average.
In particular, for air flow near a gas-liquid interface, where the air velocity exceeds the liquid velocity and the cameras are tilted slightly downward, the images obtained in the experiment can be divided into three layers from top to bottom: a region with only tracer particles, a region with both tracer particles and interface markers, and a region with only interface markers. The cross-correlation between the two adjacent frames is then computed with the interrogation-window displacement restricted to match the interface-marker motion between the frames; the cross-correlation peak is close to 1 in regions where interface markers appear and close to 0 elsewhere. Combining the regions of high correlation confidence with the regions containing more image interface markers gives the locations where interface markers appear in each camera frame. Moreover, the cross-correlation itself yields the motion of the interface markers between the two adjacent frames; if the cross-correlation peak between two adjacent frames falls below a threshold, the displacement is set to 0.
For example, in double-frame exposure mode, let C1 be the first-frame cross-correlation coefficient distribution and M1 the first-frame interface-marker distribution, and let C2 and M2 be the corresponding second-frame distributions. The marker displacement obtained from the cross-correlation is used to deform C2 and M2 back to the first-frame instant; the deformed distributions are added to C1 and M1, and the regions where the result exceeds a threshold are taken as the regions where the interface markers appear in the image.
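The double-frame combination of C1, M1 with the deformed C2, M2 can be sketched as follows. A uniform (dy, dx) marker displacement is assumed for simplicity; the patent deforms the distributions with the full cross-correlation displacement field, and the threshold here is illustrative.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def marker_appearance_region(C1, M1, C2, M2, disp, thresh=1.0):
    """Deform the second-frame correlation map C2 and marker
    distribution M2 back to the first-frame instant, sum with C1 and
    M1, and threshold to get the marker-appearance region."""
    C2w = nd_shift(C2.astype(float), disp, order=1, mode="constant")
    M2w = nd_shift(M2.astype(float), disp, order=1, mode="constant")
    return (C1 + M1 + C2w + M2w) > thresh
```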
In this embodiment, the approximate range in which the interface markers appear is first determined from the differences in radius, brightness and moving speed between the interface markers and the tracer particles; the region where the interface markers appear is then refined using the erosion method and the surrounding-area brightness counting method, and further confirmed by the cross-correlation result of two adjacent frames. For the special case of air flow in the region near the gas-liquid interface, the moving range of the query window in the cross-correlation is limited to match the motion of the interface markers between adjacent frames, so that the markers can be captured accurately. The distribution of the markers in the image and their movement relation between adjacent frames are obtained by computing the regional brightness average after the tracer particles have been weakened by image erosion, which eliminates the interference of the tracer particles and improves the accuracy of the subsequent interface position identification based on the interface markers.
It can be understood that, as shown in fig. 3, the performing, in step S2, a first image preprocessing on the original particle image to obtain a preprocessed image for interface recognition specifically includes the following steps:
step S21: covering the brightness distribution of the area outside the area where the interface marker exists in the picture, so that the brightness value of the brightness distribution is zero, and respectively obtaining the first brightness distribution of the covered multi-frame image;
step S22: respectively processing the covered multi-frame images by adopting picture morphological gray scale opening operation to respectively obtain second brightness distribution of the multi-frame images;
step S23: obtaining a transformation matrix of image deformation between two frames of images based on the movement relation of the interface marker between two adjacent frames of images;
step S24: obtaining a preprocessed image for interface recognition based on the transformation matrix: the second frame image is transformed with the transformation matrix to estimate the brightness distribution that should appear at the instant of the first frame; this distribution is compared with the second brightness distribution of the first frame, and the smaller brightness value is taken at each pixel, giving the preprocessing result of the first frame image; the preprocessing result of the second frame image is obtained in the same way.
It can be understood that step S24 specifically includes: interpolating the transformation matrix to obtain a displacement value of each pixel of the first brightness distribution of the next frame of image relative to the first brightness distribution of the previous frame of image, so as to obtain the position of each pixel in the first brightness distribution of the previous frame of image in the first brightness distribution of the next frame of image, then interpolating the first brightness distribution of the previous frame of image to each pixel respectively, so as to obtain a preprocessed image of the first brightness distribution of the previous frame of image after moving according to the transformation matrix, and the brightness value of each pixel in the preprocessed image is the smaller of the brightness value in the second brightness distribution of the previous frame of image and the brightness value in the preprocessed image; and obtaining a preprocessed image after the first brightness distribution of the next frame of image is moved according to the transformation matrix in the same way, wherein the brightness value of each pixel point in the preprocessed image is the smaller of the brightness value in the second brightness distribution of the next frame of image and the brightness value in the preprocessed image.
For example, in the case of double-frame exposure, let the luminance distribution of the first frame of a certain camera be l1 and that of the second frame be l2. The parts of the image outside the region where interface markers exist are removed by masking, i.e. set to 0, giving the first brightness distributions M(l1) and M(l2); applying the picture morphological grey-scale opening operation (erosion followed by dilation) then gives the second brightness distributions O(M(l1)) and O(M(l2)). From the movement relation of the interface markers between the two adjacent frames obtained in step S1, the transformation matrix VM describing the image deformation between the two frames is obtained. Specifically, the transformation matrix VM corresponds to only part of the positions in the image, one value every several pixels in the x- and y-directions. After VM is interpolated, the displacement of every pixel of l2 relative to l1 is obtained, giving the position in l1 of each pixel of l2; interpolating the brightness distribution l1 at each of these pixels yields T_[VM](l1), the image of l1 after moving according to VM. Then min(O(M(l1)), T_[-VM](O(M(l2)))) is taken as the preprocessed first frame and min(T_[VM](O(M(l1))), O(M(l2))) as the preprocessed second frame, where the minimum is taken pixel by pixel.
For the case of multi-frame exposure, let the luminance distributions of three adjacent frames be l1, l2 and l3, the movement of l2 relative to l1 be VM1, and the movement of l3 relative to l2 be VM2. The processing result of the second frame may then be min(T_[VM1](O(M(l1))), O(M(l2)), T_[-VM2](O(M(l3)))), or (min(T_[VM1](O(M(l1))), O(M(l2))) + min(O(M(l2)), T_[-VM2](O(M(l3)))))/2, or other similar combinations.
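A minimal sketch of the masking, grey-scale opening, warping and pixelwise-minimum sequence (steps S21-S24) might look as follows, assuming SciPy's `grey_opening` and `map_coordinates`. The dense displacement field (dx, dy) stands in for the interpolated transformation matrix VM; the function names, structuring-element size, and uniform-shift usage in the test are illustrative, not from the patent.

```python
import numpy as np
from scipy.ndimage import grey_opening, map_coordinates

def warp(image, dx, dy):
    """Shift `image` by the per-pixel displacement field (dx, dy):
    output(y, x) = image(y - dy, x - dx), i.e. T_[VM] for VM = (dx, dy)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, [yy - dy, xx - dx], order=1, mode='nearest')

def preprocess_pair(l1, l2, mask, dx, dy, size=3):
    """First image preprocessing for interface recognition (double-frame case):
    mask, grey-scale opening, warp the other frame to this instant, pixel min."""
    o1 = grey_opening(mask * l1, size=(size, size))   # O(M(l1))
    o2 = grey_opening(mask * l2, size=(size, size))   # O(M(l2))
    p1 = np.minimum(o1, warp(o2, -dx, -dy))  # min(O(M(l1)), T_[-VM](O(M(l2))))
    p2 = np.minimum(warp(o1, dx, dy), o2)    # min(T_[VM](O(M(l1))), O(M(l2)))
    return p1, p2
```

With a uniform one-pixel shift and a block larger than the structuring element, the opening preserves the block and each preprocessed frame equals its own input, as expected from the pixelwise-minimum construction.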
It is understood that the region where the interface marker appears in the original particle image is obtained from the foregoing step, and the second image preprocessing performed on the original particle image in the step S2 to obtain the preprocessed image for calculating the velocity field specifically includes the following steps:
covering the original particle image using the covering method M(l) = M(x, y) × l(x, y), respectively obtaining the first brightness distribution of the covered multi-frame images, wherein l(x, y) is the brightness distribution; the cover M and the image brightness distribution l are two matrices of the same size, and covering is implemented by multiplying corresponding positions of the two matrices to obtain a new matrix of the same size;
when y < l(x) - d, M(x, y) = 0;
when l(x) - d < y < l(x), M(x, y) = 0.5 × (y - (l(x) - d))/d;
when l(x) < y < l(x) + h, M(x, y) = 0.5;
when l(x) + h < y < l(x) + h + d, M(x, y) = 0.5 × (l(x) + h + d - y)/d;
when y > l(x) + h + d, M(x, y) = 0;
l(x) represents the boundary of the region where the interface markers appear, h represents the height of the region where the interface markers and tracer particles coexist, and d represents the width of a buffer region, the buffer region being the region where the cover value ranges between 0 and 0.5; x and y represent the plane position coordinates of pixel points, and h and d are determined by the camera resolution and camera tilt, typical parameters being h = 100 and d = 20;
deforming the covered second frame image using the transformation matrix of step S23, and subtracting, pixel by pixel, twice the brightness value of the covered and deformed second frame image from the brightness value of the first frame original image to obtain the preprocessing result of the first frame image; the second frame image is processed by the same steps.
For the case of double-frame exposure, if the first frame brightness distribution is l1 and the second frame brightness distribution is l2, the processed first frame brightness distribution is l1 - 2T_[-VM](M(l2)) and the processed second frame brightness distribution is l2 - 2T_[VM](M(l1));
for the case of multi-frame exposure, let the brightness distributions of three adjacent frames be l1, l2 and l3, the movement of l2 relative to l1 be VM1, and the movement of l3 relative to l2 be VM2; the processed second frame brightness distribution is then l2 - T_[VM1](M(l1)) - T_[-VM2](M(l3)).
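The piecewise cover M(x, y) defined above can be built as in this sketch; the vectorized construction and the handling of the boundary points (assigned the continuous limit value) are illustrative choices, not from the patent.

```python
import numpy as np

def build_mask(l, h, d, height, width):
    """Cover matrix M(x, y) from the piecewise definition above.
    `l` is the interface boundary l(x) per image column, h the marker/tracer
    coexistence height, d the buffer width (typical: h = 100, d = 20)."""
    y = np.arange(height)[:, None]      # row index as a column vector
    lx = np.asarray(l, dtype=float)[None, :]   # boundary per column
    M = np.zeros((height, width))
    ramp_in = (y > lx - d) & (y <= lx)          # buffer below the boundary
    core = (y > lx) & (y <= lx + h)             # coexistence region: 0.5
    ramp_out = (y > lx + h) & (y < lx + h + d)  # buffer above the region
    M = np.where(ramp_in, 0.5 * (y - (lx - d)) / d, M)
    M = np.where(core, 0.5, M)
    M = np.where(ramp_out, 0.5 * (lx + h + d - y) / d, M)
    return M
```

The cover is then applied by element-wise multiplication, `masked = build_mask(...) * image`, as in the definition M(l) = M(x, y) × l(x, y).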
It can be understood that the brightness values of some pixels may be negative after the second image preprocessing, which may affect the result of the cross-correlation. Preferably, the step S2 further includes the following steps:
changing the pixel value types in the image into unsigned integer types; or changing the brightness value of the pixel point with the negative value in the image to zero.
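Both remedies are one-liners in NumPy; the array values below are illustrative.

```python
import numpy as np

img = np.array([[-3.0, 5.0], [120.0, -1.0]])   # illustrative brightness values

# Option 1: set negative brightness values to zero
clipped = np.maximum(img, 0.0)

# Option 2: convert to an unsigned integer type (clip first, since a direct
# cast of negative floats to uint8 would wrap around rather than saturate)
as_uint = np.clip(img, 0, 255).astype(np.uint8)
```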
It can be understood that, as shown in fig. 4, the step S3 specifically includes the following steps:
step S31: reconstructing a preprocessed image for interface recognition by adopting a tomography particle image velocimetry space reconstruction algorithm, wherein the reconstruction range comprises an area where an interface appears;
step S32: dividing the reconstructed three-dimensional space into a plurality of layers along the direction of the illumination light source, and extracting a plurality of layers of images along the direction of the illumination light source to calculate a brightness average value;
step S33: carrying out binarization on the image corresponding to the average value, and identifying the position of the interface marker in the space;
step S34: and repeating the steps S32 and S33 for different frames to obtain the time sequence of the positions of the interface markers in the space, removing the interface markers with abnormal positions, and then performing sequence interpolation and smoothing to obtain the interface positions in the three-dimensional space.
In step S31, the preprocessed image for interface recognition is reconstructed using any one of, or a combination of, the MART (multiplicative algebraic reconstruction technique), LOS (line of sight) and MTE (motion tracking enhancement) algorithms.
In step S32, the spatial brightness distribution reconstructed in step S31 reflects the distribution of the interface markers in space, from which the position of the interface in three-dimensional space needs to be obtained. The reconstructed three-dimensional space can be divided into a number of layers, for example 100, along the direction perpendicular to the illumination light source (i.e. the z direction); each layer contains sporadically distributed interface markers at the corresponding interface positions, together with noise from the original images and from the spatial reconstruction algorithm. When the number of layers is chosen properly, the spacing between the interface markers in the averaged image can be effectively reduced, e.g. to within 10 voxels.
In step S33, the image corresponding to the average value is binarized to identify the position of the interface marker in space, so that the position of the interface marker in space within a certain range in the z direction can be obtained.
In step S34, the above operation is performed on different slice ranges and different frame images in the z direction, so as to obtain a time series of positions of the interface markers in space, then the markers with abnormal positions are removed from the time series, and the time series is interpolated and smoothed, so as to obtain the interface positions in three-dimensional space.
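Steps S32-S33 can be sketched as follows for a reconstructed volume of shape (nz, ny, nx); the slab count and the max-relative binarization threshold are illustrative choices, not the patent's.

```python
import numpy as np

def interface_from_volume(vol, n_layers=10, thresh_ratio=0.5):
    """Sketch of steps S32-S33: average the reconstructed brightness over
    slabs along z, binarize each averaged slice, and collect the marker
    positions (z_mid, y, x) found in each slab."""
    nz = vol.shape[0]
    bounds = np.linspace(0, nz, n_layers + 1).astype(int)
    markers = []
    for z0, z1 in zip(bounds[:-1], bounds[1:]):
        mean_img = vol[z0:z1].mean(axis=0)      # brightness average of the slab
        peak = mean_img.max()
        if peak <= 0:
            continue                            # empty slab: no markers
        mask = mean_img > thresh_ratio * peak   # binarization
        ys, xs = np.nonzero(mask)
        z_mid = 0.5 * (z0 + z1)
        markers.extend((z_mid, int(y), int(x)) for y, x in zip(ys, xs))
    return markers
```

Repeating this over frames gives the time series of marker positions of step S34, which is then cleaned of outliers, interpolated and smoothed.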
It is understood that, as shown in fig. 5, the step S3 may alternatively include the following steps:
step S31 a: identifying a location of an interface marker in a preprocessed image for interface identification;
step S32 a: reconstructing the position of the interface marker in three-dimensional space using data from different cameras;
step S33 a: and removing the interface marker with abnormal position, and obtaining the interface position in the three-dimensional space by using an interpolation algorithm, wherein the interpolation algorithm can adopt linear interpolation, Airy function interpolation and the like.
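For the linear-interpolation case of step S33a, the scattered marker positions can be interpolated onto a regular grid, e.g. with SciPy's `griddata`; the marker coordinates below are synthetic, not measured data.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic scattered interface-marker positions (x, y) and heights z
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])        # here the markers lie on z = x + y

# Interpolate the interface height onto a regular grid (linear case of S33a)
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
surface = griddata(pts, z, (gx, gy), method='linear')
```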
It is understood that the process of performing spatial reconstruction and cross-correlation calculation based on the second particle image to obtain the velocity field in step S4 is similar to the conventional tomographic particle image velocimetry method, and is therefore not repeated here. Step S4 improves on the conventional method in the region near the interface, where the conventional calculation is unsatisfactory: errors arise from the lack of particles near the interface, from ghost particles generated near the interface when residual interface markers left after preprocessing are spatially reconstructed, and so on. Step S4 reduces the adverse effect of these factors on the measurement result as follows.
as shown in fig. 6, in step S4, after the spatial reconstruction of the second particle image, the following processing is performed on the spatial luminance distribution:
step S41: covering the reconstructed spatial brightness distribution using the three-dimensional interface position, that is, setting the brightness of the region on the other side of the interface to zero;
step S42: enhancing the image quality in the vicinity of the interface: identifying the voxels occupied by tracer particles in the near-interface region on the side where the tracer particles are located, and reducing the brightness of the voxels in that region that contain no tracer particles;
step S43: and adding artificially synthesized virtual particles moving along with the interface in the area on the other side of the interface.
In said step S41, a three-dimensional interface position has been obtained in step S3, the other side of the interface referring to the side opposite to the side on which the velocity field is measured.
In step S42, the region near the interface is a layered region whose height on the measurement side is determined by the camera resolution and the camera tilt angle, typically 20 to 50 voxels. The voxels occupied by tracer particles in this region are first identified; they typically appear as bright spots less than 3 voxels in diameter, with brightness similar to that of tracer particles farther from the interface. The brightness of the voxels in the region that contain no tracer particles is then reduced. This method requires more computing resources but gives a better image-processing result. A simpler alternative is to high-pass filter the region, but simple high-pass filtering easily introduces noise and handles the linear bright streaks around reconstructed tracer particles poorly.
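One possible sketch of step S42, using a local-maximum test to flag particle voxels (the bright spots described above) and damping everything else; the threshold, neighborhood sizes and damping factor are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def attenuate_non_particles(vol, spot_thresh, keep_radius=2, factor=0.1):
    """Sketch of step S42: flag voxels that are local brightness maxima above
    `spot_thresh` as tracer-particle centers, keep a small neighborhood
    around each, and damp the brightness everywhere else in the region."""
    peaks = (vol == maximum_filter(vol, size=3)) & (vol > spot_thresh)
    # dilate the flagged voxels so whole particles (spots < 3 voxels) survive
    keep = maximum_filter(peaks.astype(np.uint8),
                          size=2 * keep_radius + 1).astype(bool)
    out = vol.copy()
    out[~keep] *= factor
    return out
```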
In step S43, tracer particles may be lacking in query windows that span the interface, which adversely affects the cross-correlation result, so virtual particles that move with the interface need to be added. Since the free interface itself moves and deforms, the virtual particles cannot simply stay fixed in space; ideally, their motion would faithfully reflect the motion of the fluid or solid on the other side of the interface. If the motion on the other side cannot be determined, the virtual particles can move with the interface: the phase velocities at different positions of the interface are obtained by cross-correlating the interface positions of two adjacent frames, and the velocity of each virtual particle is obtained by interpolating the interface phase velocity. The brightness, diameter and distribution density of the virtual particles are chosen according to the tracer particles; one choice is a brightness of 1/2 the average tracer brightness, a diameter equal to the average tracer diameter, and a distribution density slightly lower than that of the tracers. The thickness of the virtual-particle region is close to the size of the query window of the first cross-correlation pass of the velocity calculation.
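Seeding the virtual particles of step S43 as Gaussian blobs in a band on the far side of the interface might look like this sketch. The interface-following motion is omitted here, and all parameters (count, diameter, brightness, band thickness) are illustrative assumptions, with the text's suggestions noted in the docstring.

```python
import numpy as np

def add_virtual_particles(vol, interface_z, rng, n=200, diameter=3.0,
                          brightness=0.5, band=16):
    """Sketch of step S43: seed synthetic Gaussian particles in a band on the
    other side of the interface. `interface_z[y, x]` is the interface height;
    the text suggests brightness ~ 1/2 the tracer average, diameter equal to
    the tracer average, and band thickness ~ the first query-window size."""
    nz, ny, nx = vol.shape
    sigma = diameter / 4.0
    zz = np.arange(nz)[:, None, None]
    yy = np.arange(ny)[None, :, None]
    xx = np.arange(nx)[None, None, :]
    out = vol.copy()
    for _ in range(n):
        x0 = rng.uniform(0, nx - 1)
        y0 = rng.uniform(0, ny - 1)
        # place the particle within `band` voxels beyond the local interface
        z0 = interface_z[int(y0), int(x0)] + rng.uniform(0, band)
        out += brightness * np.exp(-((zz - z0) ** 2 + (yy - y0) ** 2
                                     + (xx - x0) ** 2) / (2 * sigma ** 2))
    return out
```

In an actual implementation the seeded positions would be advected between frames with the interpolated interface phase velocity, as described above.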
Another simpler way to add virtual particles is to retain part of the interface markers as virtual particles during the second image preprocessing, but this places higher demands on the quality of the original images. Specifically: the spatial distribution density of the interface markers must be high enough that a sufficient number of particles remain in the query windows near the interface during the cross-correlation velocity calculation; it must not be so high as to increase the frequency of ghost particles near the interface; and the brightness of the interface markers must not be so high that their influence on the cross-correlation near the interface exceeds that of the tracer particles. The image quality can, however, be improved by image preprocessing, such as high-pass filtering within the band-shaped covered area, or subtracting the deformed image of the adjacent frame from the image to be reconstructed, where the deformation is based on the displacement of the interface markers between the two frames. That is, the operation in step S2 of "subtracting twice the brightness value of each pixel in the brightness distribution of the covered and deformed second frame image from the brightness value of each pixel in the brightness distribution of the first frame image" is modified to "subtracting twice the brightness value of each pixel in the brightness distribution of the covered, deformed and Gaussian-blurred second frame image from the brightness value of each pixel in the brightness distribution of the first frame image", giving the preprocessing result of the first frame image.
The same applies to the second frame image.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A tomographic particle image velocimetry method for the region near a free interface, characterized in that
the method comprises the following steps:
step S1: arranging at least three cameras and an illumination light source, calibrating the positions of the cameras, arranging an interface marker and scattering tracer particles, and then shooting by using the cameras to obtain particle images;
step S2: preprocessing the particle image: distinguishing an interface marker and trace particles in a particle image, obtaining the distribution of the interface marker in the particle image and the movement relation of the interface marker between two adjacent frames of images, performing first image preprocessing on an original particle image to obtain a preprocessed image for identifying an interface, and performing second image preprocessing on the original particle image to obtain a preprocessed image for calculating a velocity field;
step S3: identifying an interface location based on the preprocessed image for interface identification;
step S4: spatial reconstruction and cross-correlation calculations are performed based on the preprocessed images used to calculate the velocity field to obtain the velocity field.
2. The method according to claim 1, wherein said method comprises,
in step S2, the distribution of the interface marker in the original particle image is obtained specifically by the following steps:
determining the approximate appearing range of the interface marker in the pictures of different cameras by comprehensively utilizing the radius, the brightness and the difference of the moving speed of the interface marker and the tracer particles in the pictures, and then roughly determining the appearing area of the interface marker in the pictures by using a corrosion method and a surrounding area brightness counting method;
performing cross-correlation calculation on two adjacent frames of pictures, wherein the movement range of a query window is limited near the movement range of an interface marker, if an interface marker exists in the area of the query window, the cross-correlation coefficient is close to 1, and if no interface marker exists, the cross-correlation coefficient is close to 0, so that the area where the interface marker exists in the picture can be roughly determined;
and (4) integrating the two bases, judging the range of the interface marker in the picture, obtaining the upper boundary corresponding to the range, and smoothing the position of the upper boundary by using the phase lock average.
3. The method according to claim 2, wherein said method comprises,
the step S2 of performing first image preprocessing on the original particle image to obtain a preprocessed image for interface recognition specifically includes the following steps:
step S21: covering the brightness distribution of the area outside the area where the interface marker exists in the picture, so that the brightness value of the brightness distribution is zero, and respectively obtaining the first brightness distribution of the covered multi-frame image;
step S22: respectively processing the covered multi-frame images by adopting picture morphological gray scale opening operation to respectively obtain second brightness distribution of the multi-frame images;
step S23: obtaining a transformation matrix of image deformation between two frames of images based on the movement relation of the interface marker between two adjacent frames of images;
step S24: for the picture in the double-frame exposure mode, the conversion matrix is used for converting the second frame image to obtain the brightness distribution condition which should appear at the moment of the first frame image, the brightness distribution is compared with the second brightness distribution of the first frame image, and the smaller brightness value of each pixel point is taken, so that the preprocessing result of the first frame image is obtained, and the preprocessing result of the second frame image is obtained in the same way.
4. The method according to claim 3, wherein said method comprises,
the second image preprocessing performed on the original particle image in step S2 to obtain a preprocessed image used for calculating the velocity field specifically includes the following steps:
covering the original particle image using the covering algorithm M(l) = M(x, y) × l(x, y), respectively obtaining the first brightness distribution of the covered multi-frame images, wherein l(x, y) is the brightness distribution,
when y < l(x) - d, M(x, y) = 0;
when l(x) - d < y < l(x), M(x, y) = 0.5 × (y - (l(x) - d))/d;
when l(x) < y < l(x) + h, M(x, y) = 0.5;
when l(x) + h < y < l(x) + h + d, M(x, y) = 0.5 × (l(x) + h + d - y)/d;
when y > l(x) + h + d, M(x, y) = 0;
the cover M and the image brightness distribution l are two matrices of the same size, l(x) represents the boundary of the region where the interface markers appear, h represents the height of the region where the interface markers and tracer particles coexist, d represents the width of a buffer region, x and y represent the plane position coordinates of pixel points, and h and d are determined by the camera resolution and camera tilt;
for the picture in the double-frame exposure mode, deforming the covered second frame image by using the transformation matrix in the step S23, and correspondingly subtracting twice the brightness value of each pixel point in the brightness distribution of the covered and deformed second frame image from the brightness value of each pixel point in the brightness distribution of the first frame original image to obtain the preprocessing result of the first frame image; the same procedure is used for processing the second frame image.
5. The method according to claim 4, wherein said method comprises,
for the case that the brightness value is negative after the second image preprocessing, the step S2 further includes the following steps:
changing the pixel value types in the image into unsigned integer types; or changing the brightness value of the pixel point with the negative value in the image to zero.
6. The method according to claim 5, wherein said method comprises,
the step S3 specifically includes the following steps:
step S31: reconstructing a preprocessed image for interface recognition by adopting a tomography particle image velocimetry space reconstruction algorithm, wherein the reconstruction range comprises an area where an interface appears;
step S32: dividing the reconstructed three-dimensional space into a plurality of layers along the direction of the illumination light source, and extracting a plurality of layers of images along the direction of the illumination light source to calculate a brightness average value;
step S33: carrying out binarization on the image corresponding to the average value, and identifying the position of the interface marker in the space;
step S34: and repeating the steps S32 and S33 for different frames to obtain the time sequence of the positions of the interface markers in the space, removing the interface markers with abnormal positions, and then performing sequence interpolation and smoothing to obtain the interface positions in the three-dimensional space.
7. The method according to claim 5, wherein said method comprises,
the step S3 includes the steps of:
step S31 a: identifying a location of an interface marker in a preprocessed image for interface identification;
step S32 a: reconstructing the position of the interface marker in three-dimensional space using data from different cameras;
step S33 a: and removing the interface marker with abnormal position, and obtaining the interface position in the three-dimensional space by using an interpolation algorithm.
8. The method according to claim 6 or 7, wherein said method comprises the steps of,
in step S4, after spatial reconstruction of the preprocessed image for calculating the velocity field, the spatial luminance distribution is processed as follows:
step S41: covering the reconstructed spatial brightness distribution using the three-dimensional interface position, that is, setting the brightness of the region on the other side of the interface to zero;
step S42: enhancing the image quality in the vicinity of the interface: identifying the voxels occupied by tracer particles in the near-interface region on the side where the tracer particles are located, and reducing the brightness of the voxels in that region that contain no tracer particles;
step S43: and arranging artificially synthesized virtual particles moving along with the interface in the area on the other side of the interface.
9. The method according to claim 8, wherein said method comprises,
the method comprises the steps of reserving part of interface markers as virtual particles in the second image preprocessing process, correspondingly subtracting the brightness value of each pixel point in the brightness distribution of the first frame original image by twice of the brightness value of each pixel point in the brightness distribution of the second frame image after covering, deformation and Gaussian blur in the second image preprocessing process to obtain the preprocessing result of the first frame image, and processing the second frame image by adopting the same steps.
10. The method according to claim 1, wherein said method comprises,
the manner of disposing the interface marker and the trace particle in step S1 includes:
for a deformable object, arranging point-like marks on the surface of the deformable object, wherein the size of the marks is larger than that of the tracer particles so as to ensure that the marks are distinguished in image processing, and the arrangement density of the marks is smaller than that of the scattered tracer particles;
particles different from the tracer particles are added as markers near the liquid surface on the liquid side of a gas-liquid interface or a liquid-liquid interface, the diameter of the markers being larger than that of the tracer particles so as to ensure that the markers are distinguished in image processing, and the density of the markers being smaller than the scattering density of the tracer particles.
CN201911306262.XA 2019-12-18 2019-12-18 Image velocimetry method for chromatographic particles near free interface Active CN111024980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911306262.XA CN111024980B (en) 2019-12-18 2019-12-18 Image velocimetry method for chromatographic particles near free interface

Publications (2)

Publication Number Publication Date
CN111024980A true CN111024980A (en) 2020-04-17
CN111024980B CN111024980B (en) 2021-04-02


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496499A (en) * 2021-04-26 2021-10-12 上海理工大学 Low-quality particle velocity field image correction method based on convolutional neural network
CN113702661A (en) * 2021-09-01 2021-11-26 北京大学 Method and device for processing fluid velocity
CN115326806A (en) * 2022-10-17 2022-11-11 湖南大学 Tracer clay development method based on digital image correlation

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009264772A (en) * 2008-04-22 2009-11-12 Nikon Corp Flow evaluation apparatus and flow evaluation method
CN107705318A (en) * 2017-08-22 2018-02-16 哈尔滨工程大学 A kind of turbulent boundary lamellar field speed-measuring method based on border tracer
CN108020168A (en) * 2017-11-23 2018-05-11 哈尔滨工程大学 Nearly free surface gas-liquid two-phase flow field three-dimension measuring system and measuring method based on particle image velocimetry
US20180321273A1 (en) * 2015-11-03 2018-11-08 Korea Aerospace Research Institute Particle image velocimetry and control method therefor
CN109459582A (en) * 2018-12-24 2019-03-12 北京理工大学 Laser particle image speed measurement data processing method based on moving boundary self-identifying
CN110231068A (en) * 2019-07-09 2019-09-13 北京大学 The method for identifying gas-liquid interface position
CN110261642A (en) * 2019-07-09 2019-09-20 北京大学 Three-dimensional particle image velocimetry method suitable for gas-liquid interface
CN110260945A (en) * 2019-07-09 2019-09-20 北京大学 Total-reflection type gas-liquid interface Method of flow visualization and gas-liquid interface location recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUNGHYUK IM et al.: "Tomo-PIV measurement of flow around an arbitrarily moving body with surface reconstruction", Exp Fluids *
YOUNG JIN JEON et al.: "Three-dimensional PIV measurement of flow around an arbitrarily moving body", Exp Fluids *

Also Published As

Publication number Publication date
CN111024980B (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN111024980B (en) Image velocimetry method for chromatographic particles near free interface
CN107607040B (en) Three-dimensional scanning measurement device and method suitable for strong reflection surface
Lin et al. Vehicle speed detection from a single motion blurred image
US8797417B2 (en) Image restoration method in computer vision system, including method and apparatus for identifying raindrops on a windshield
Haralick et al. Glossary of computer vision terms.
Nayar Shape from focus system for rough surfaces
US7352892B2 (en) System and method for shape reconstruction from optical images
US7430303B2 (en) Target detection method and system
EP0747870B1 (en) An object observing method and device with two or more cameras
US20090016642A1 (en) Method and system for high resolution, ultra fast 3-d imaging
US6868194B2 (en) Method for the extraction of image features caused by structure light using image reconstruction
Ghalib et al. Soil particle size distribution by mosaic imaging and watershed analysis
US12033280B2 (en) Method and apparatus for generating a 3D reconstruction of an object
JP2002509259A (en) Method and apparatus for three-dimensional inspection of electronic components
JPH0926312A (en) Method and apparatus for detection of three-dimensional shape
CN110260945B (en) Total reflection type gas-liquid interface flow display method and gas-liquid interface position identification method
Lin Vehicle speed detection and identification from a single motion blurred image
CN1598594A (en) Method of determining a three-dimensional velocity field in a volume
Haralick et al. Glossary of computer vision terms
US7136171B2 (en) Method for the extraction of image features caused by structure light using template information
CN116740332B (en) Method for positioning center and measuring angle of space target component on satellite based on region detection
CN114674244B (en) Coaxial normal incidence speckle deflection measurement method and device
Loktev et al. Image Blur Simulation for the Estimation of the Behavior of Real Objects by Monitoring Systems.
CN111473944B (en) PIV data correction method and device for observing complex wall surface in flow field
CN108959355A (en) A kind of ship classification method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant