CN106997587B - Method for measuring flow velocity of intravenous injection liquid drops based on machine vision - Google Patents

Method for measuring flow velocity of intravenous injection liquid drops based on machine vision

Info

Publication number
CN106997587B
Authority
CN
China
Prior art keywords
frame
pixel
value
template
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710178754.XA
Other languages
Chinese (zh)
Other versions
CN106997587A (en)
Inventor
李立
吴玉龙
张原
张梦颖
余翠
龙凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201710178754.XA priority Critical patent/CN106997587B/en
Publication of CN106997587A publication Critical patent/CN106997587A/en
Application granted granted Critical
Publication of CN106997587B publication Critical patent/CN106997587B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G06T7/0014 - Biomedical image inspection using an image reference approach
    • G06T7/0016 - Biomedical image inspection using an image reference approach involving temporal comparison
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P5/00 - Measuring speed of fluids, e.g. of air stream; Measuring speed of bodies relative to fluids, e.g. of ship, of aircraft
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing

Abstract

The invention relates to a method for measuring the flow velocity of intravenous injection liquid drops based on machine vision. First, a sample video is shot and converted into static images frame by frame; the dropper is then regarded as the moving target, and the foreground target is extracted and binarized. Next, the numbers of the frames containing a liquid drop are extracted: a threshold Th is set, and when the count Sum of pixels with value 255 in a frame satisfies Sum > Th, the frame is considered to contain a drop; otherwise, the frame is considered to be free of drops. The flow rate is calculated as V = fps/ΔN. The invention overcomes the defects of the prior art: complex measuring technology, inconvenient supervision, high time consumption and large cost. It provides support for measuring the flow velocity of intravenous injection drops more accurately, efficiently and conveniently and for reducing the workload of medical staff. The invention also handles video jitter, tolerating instability during shooting without the need to deliberately fix the shooting equipment, which is a breakthrough for porting the algorithm to mobile terminals while keeping the results accurate.

Description

Method for measuring flow velocity of intravenous injection liquid drops based on machine vision
Technical Field
The invention belongs to the field of image processing and intelligent identification, and particularly relates to a method for measuring the flow velocity of intravenous injection liquid drops based on machine vision.
Background
To date, intravenous infusion technology has a history of nearly 600 years, but it was only in the 20th century that it developed into a complete infusion system; it has since become one of the most effective, direct and common means of clinical treatment. Intravenous drip delivers large quantities of fluids and drugs into the body through an infusion line into a vein. It can be used for medicines that are not easily absorbed and for patients who are vomiting or unconscious. Its advantages are rapid absorption, accurate dosage and reliable action: the medicine enters tissue and body fluid directly, so it works quickly and is suitable for emergency treatment and for patients who cannot take medicine orally.
In clinical medicine, infusion takes considerable time and may be carried out both day and night. The infusion condition, in particular the venous drip rate, must be observed in time so that the drip rate can be controlled according to the type of medicine, and the medicine replaced or the needle withdrawn promptly when the infusion finishes. Measuring the drip rate therefore places a heavy burden on medical staff, and automatic monitoring of infusion has become an urgent requirement of clinical practice.
Intravenous drip speed measurement is therefore of great significance for clinical treatment and medical research. The traditional flow-rate measuring methods mainly include: (1) infusion detection by mechanical weighing; (2) infrared photoelectric infusion detection; (3) capacitance metering type infusion detection.
Traditional intravenous drip speed measurement mainly relies on manual measurement by medical personnel, which has many drawbacks: (1) detection is too slow and consumes a great deal of manpower; (2) the results are inaccurate, since medical personnel often rely on personal experience or simple timers, so the measured drip rate may not be accurate enough. The traditional manual method is therefore difficult to adapt to the development of clinical medical research, and a new detection method with a higher degree of automation has become necessary.
With the development of digital image processing technology, its applications in the medical field are increasingly extensive. Measuring the intravenous drip rate with digital image processing not only improves detection efficiency and the accuracy of the result, but is also low in cost and highly automated.
Disclosure of Invention
The technical problem of the invention is mainly solved by the following technical scheme:
a method for measuring intravenous drop flow rate based on machine vision, comprising:
step 1, shooting a sample video, and converting the video into static images frame by frame;
step 2, regarding the dropper as the moving target, extracting a foreground target and carrying out binarization processing;
step 3, extracting the numbers of the frames containing liquid drops: a threshold Th is set, and when the count Sum of pixels with value 255 in a frame satisfies Sum > Th, the frame is considered to contain a drop; otherwise the frame is considered to be free of drops, and the frame numbers containing drops are recorded. Different intravenous sets use different switch control types, so the flow rates differ; the same drop may therefore appear in more than one frame, or a drop may be missed. The number of every frame that meets the drop-frame threshold is recorded; these numbers are then traversed, consecutive numbers are regarded as the same drop, and the mean N̄ of each run of consecutive numbers is calculated;
step 4, calculating the flow rate from V = fps/ΔN̄. First, the mean frame numbers of the drops obtained in step 3 are N̄1, N̄2, N̄3, …; the adjacent differences between drop frames are ΔN1 = N̄2 − N̄1, ΔN2 = N̄3 − N̄2, ΔN3 = N̄4 − N̄3, …; the mean inter-drop difference ΔN̄ is the average of ΔN1, ΔN2, ΔN3, …; the flow rate is then
V = fps/ΔN̄
where fps is the frame rate of the video.
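Steps 3 and 4 reduce to grouping consecutive drop frames and converting the mean inter-drop interval into a rate. A minimal Python sketch of this computation follows (an assumption of this edit, not part of the original patent; the function name drop_rate and the list-based interface are hypothetical):

```python
import numpy as np

def drop_rate(sums, th, fps):
    """Estimate drops per second from per-frame counts of 255-valued pixels.

    sums: per-frame pixel counts (Sum in the patent's notation)
    th:   threshold Th; frames with Sum > th are taken to contain a drop
    fps:  frame rate of the video
    """
    # Frame numbers whose pixel count exceeds the threshold.
    hits = [i for i, s in enumerate(sums) if s > th]
    if len(hits) < 2:
        return 0.0

    # Consecutive frame numbers belong to the same drop: group them.
    groups, current = [], [hits[0]]
    for n in hits[1:]:
        if n == current[-1] + 1:
            current.append(n)
        else:
            groups.append(current)
            current = [n]
    groups.append(current)

    # Mean frame number of each drop, adjacent differences, and their mean.
    means = np.array([np.mean(g) for g in groups])
    if len(means) < 2:
        return 0.0
    mean_delta_n = np.diff(means).mean()

    # V = fps / mean inter-drop frame difference (drops per second).
    return fps / mean_delta_n
```

For example, for a 30 fps video in which drops are detected on average 45 frames apart, the function returns 30/45 ≈ 0.67 drops per second.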
The specific processing of step 2 in the above machine-vision method for measuring the flow rate of intravenous injection droplets is based on two speed-measurement methods, as follows:
the first speed measurement method, the template matching method, specifically includes:
step 2.1.1, firstly, extracting a liquid drop template, wherein the method adopts a Hough transform and circular extraction method. The drop shape in drip irrigation is approximately circular. After extracting the circle, selecting the coordinate of the center O as (X1, Y1) and the radius R of the circle. The droplet template is truncated by the center point of coordinates (X1-R, Y1-R), length and width 2R.
Step 2.1.2, the droplet extracted in the previous step is used as the template, and template matching is performed frame by frame on all video frames. The idea of the matching algorithm is that the search template T (m × n pixels) slides over the searched image S (W × H pixels); the region of the searched image covered by the template is called the sub-image Sij, where i, j are the coordinates of the upper-left corner of the sub-image on the searched image S. The search range is:
1≤i≤W-m
1≤j≤H-n
Template matching is completed by comparing the similarity of T and Sij.
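Steps 2.1.1 and 2.1.2 can be sketched with standard OpenCV primitives as follows (an assumption of this edit; the HoughCircles parameter values and the TM_CCOEFF_NORMED similarity measure are illustrative choices, not prescribed by the patent):

```python
import cv2
import numpy as np

def extract_drop_template(frame_gray):
    """Hough-transform circle extraction of the drop (step 2.1.1)."""
    blurred = cv2.medianBlur(frame_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=30, minRadius=3, maxRadius=30)
    if circles is None:
        return None
    x1, y1, r = np.round(circles[0, 0]).astype(int)
    # Square crop with top-left corner (X1 - R, Y1 - R) and side 2R
    # (assumes the detected circle lies fully inside the frame).
    return frame_gray[y1 - r:y1 + r, x1 - r:x1 + r]

def match_drop(frame_gray, template):
    """Slide the template over the searched image and return the best-matching region (step 2.1.2)."""
    result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)   # top-left corner (i, j) of the best sub-image
    h, w = template.shape
    return frame_gray[y:y + h, x:x + w], score
```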
Step 2.1.3, binarize the matched area. Because the frames containing a drop differ clearly from the frames without a drop, the choice of binarization method is flexible: local binarization, global binarization or adaptive binarization may all be used. The purpose of binarization is to make the subsequent pixel statistics easier.
Step 2.1.4, count the Sum of 255-valued pixels in the target area. The image is traversed from (0,0), left to right and top to bottom, and every traversed pixel whose value is 255 adds 1 to the accumulator; Sum is therefore the number of pixels whose value is 255.
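Steps 2.1.3 and 2.1.4 reduce to a binarization followed by a count of 255-valued pixels; a minimal sketch, assuming Otsu thresholding as one admissible global binarization and a hypothetical helper name count_white_pixels:

```python
import cv2
import numpy as np

def count_white_pixels(region_gray):
    """Binarize the matched region and count pixels whose value is 255 (steps 2.1.3-2.1.4)."""
    _, binary = cv2.threshold(region_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return int(np.count_nonzero(binary == 255))
```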
The second speed measurement method, the frame difference method, specifically includes:
and 2.2.1, extracting the foreground by using a GMM Gaussian mixture model. Firstly, setting the mean, variance and weight of each Gaussian to be 0, namely initializing parameters of each model matrix. T frames in the video are used to train the GMM model. For each pixel, a GMM model with the maximum number of models GMM _ MAX _ COMPONT gaussians is established. When the first pixel, its fixed initial mean, variance, and weight are set to 1, individually for it in the program.
In the non-first frame training process, when the pixel value comes from the back, the pixel value is compared with the mean value of the Gaussian, and if the difference between the value of the pixel point and the mean value of the model is within 3 times of the variance, the task belongs to the Gaussian. This time, the update is done with the following equation:
w ← (1 − α)·w + α
μ ← (1 − ρ)·μ + ρ·x
σ² ← (1 − ρ)·σ² + ρ·(x − μ)²
where ρ = α/w, α = 1/T, and x is the current pixel value.
When the difference between the pixel value and the mean is not within 3 times the variance, only the weight of that Gaussian is attenuated:
w ← (1 − α)·w
After the training frame number T is reached, the number of GMM components kept for each pixel is selected adaptively. The Gaussians are first sorted from large to small by the ratio of weight to variance, and the first B Gaussians are then selected such that
w1 + w2 + … + wB > Cf
where Cf is typically 0.3.
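The selection of the first B Gaussians can be sketched as follows (a hypothetical helper assuming the sort key is the ratio of weight to variance, as described above):

```python
import numpy as np

def select_background_components(weights, variances, cf=0.3):
    """Keep the first B Gaussians, ranked by weight/variance, whose weights sum past cf."""
    weights = np.asarray(weights, dtype=float)
    variances = np.asarray(variances, dtype=float)
    order = np.argsort(-(weights / variances))        # largest weight/variance ratio first
    cumulative = np.cumsum(weights[order])
    b = int(np.searchsorted(cumulative, cf) + 1)      # smallest B with w1 + ... + wB > cf
    return order[:b]
```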
In this way noise points arising during training are largely eliminated. In the testing stage, the value of each new pixel is compared with the means of the B Gaussians: if the difference from the mean of one of them is within 2 times its variance the pixel is considered background, otherwise it is considered foreground; it is sufficient for a single Gaussian component to satisfy the condition. The foreground is assigned 255 and the background 0, forming a foreground binary image. Because this binary image contains considerable noise, a morphological opening operation is applied to set the noise to 0, followed by a closing operation to reconstruct the edge information lost by the opening; the small unconnected noise points are thereby eliminated.
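A compact sketch of step 2.2.1 follows; it is an assumption of this edit in that it substitutes OpenCV's built-in MOG2 background subtractor for the patent's own GMM training loop and uses illustrative parameter values:

```python
import cv2

def foreground_masks(frames, history=100, var_threshold=16):
    """Approximate the GMM foreground extraction with OpenCV's MOG2 subtractor."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    varThreshold=var_threshold,
                                                    detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    masks = []
    for frame in frames:
        mask = subtractor.apply(frame)                           # foreground 255, background 0
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove isolated noise points
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # restore edges lost by the opening
        masks.append(mask)
    return masks
```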
Step 2.2.2, apply adaptive binarization to the foreground extracted in the previous step.
Step 2.2.3, project the binary image in the horizontal direction and in the vertical direction respectively. A threshold Th1 is selected and the horizontal and vertical projections are traversed: when a projection value N0 < Th1 and the 10 consecutive points after N0 are all greater than Th1, N0 is taken as a boundary point of the dropper, and once the boundary points are obtained the dropper target area is cropped. Similarly, a threshold Th2 is selected: when N1 < Th2, the 5 points before N1 are all greater than Th2 and the 5 points after N1 are all smaller than Th2, N1 is taken as the boundary point on the other side. The four boundary points are found in the same way, and the target area they delimit is cropped as the template.
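The projection-based cropping of step 2.2.3 can be sketched as below (a simplified assumption: only the 10-point run test is shown for all four boundaries, and the helper names are hypothetical):

```python
import numpy as np

def boundary_point(projection, th, run=10):
    """First index whose value is below th while the next `run` values all exceed th."""
    for n in range(len(projection) - run):
        if projection[n] < th and np.all(projection[n + 1:n + 1 + run] > th):
            return n
    return None

def crop_dropper(binary, th1):
    """Crop the dropper region from a binary image using its row and column projections."""
    rows = binary.sum(axis=1)                 # horizontal projection
    cols = binary.sum(axis=0)                 # vertical projection
    top = boundary_point(rows, th1)
    left = boundary_point(cols, th1)
    bottom_rev = boundary_point(rows[::-1], th1)
    right_rev = boundary_point(cols[::-1], th1)
    if None in (top, left, bottom_rev, right_rev):
        return binary                         # boundaries not found; keep the full image
    return binary[top:len(rows) - bottom_rev, left:len(cols) - right_rev]
```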
Step 2.2.4, perform template matching with this template, in the same way as the template matching of step 2.1.2.
Step 2.2.5, perform frame-difference processing on the matched areas. The difference is taken with a gap of two frames between the differenced frames, e.g. frames 1, 4, 7, 10, …, which improves the calculation efficiency.
Step 2.2.6, apply opening and closing operations to the difference image, and then count the sum of 255-valued pixels as in step 2.1.4.
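Steps 2.2.5 and 2.2.6 amount to differencing equal-sized matched regions three frames apart and cleaning the result morphologically before counting; a possible sketch with an illustrative step size, Otsu binarization of the difference image and a hypothetical function name:

```python
import cv2
import numpy as np

def frame_difference_sums(regions, step=3):
    """Difference equal-sized matched regions `step` frames apart and count changed pixels."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    sums = []
    for k in range(0, len(regions) - step, step):      # e.g. pairs (1,4), (4,7), (7,10), ...
        diff = cv2.absdiff(regions[k], regions[k + step])
        _, diff = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)
        diff = cv2.morphologyEx(diff, cv2.MORPH_CLOSE, kernel)
        sums.append(int(np.count_nonzero(diff == 255)))
    return sums
```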
In the above machine-vision method for measuring the flow rate of intravenous injection droplets, in view of the complex background and the diversity of intravenous injection environments, both methods handle the relevant interference factors when step 2 is executed, so either the template matching method or the frame difference method may be selected for the speed measurement in step 2.
Therefore, the invention has the following advantages: 1. Two different measuring methods are designed, each with its own strengths, so the user can select a suitable scheme for different conditions and environments. 2. A detailed algorithm model is provided, and the equipment used is an ordinary mobile product such as a mobile phone, which makes the method easy to understand and operate. 3. The invention overcomes the defects of the prior art: complex measuring technology, inconvenient supervision, high time consumption and large cost; it provides support for measuring the flow velocity of intravenous drops more accurately, efficiently and conveniently and for reducing the workload of medical staff. 4. The invention handles video jitter, tolerating instability during shooting without the need to deliberately fix the shooting equipment, which is a breakthrough for porting the algorithm to mobile terminals while keeping the results accurate.
Drawings
Fig. 1 is a flow chart of a venous drip flow rate calculation system.
Fig. 2 is a flow chart of the anti-shake (video jitter) processing algorithm.
FIG. 3 is a flow chart of GMM Gaussian mixture model extraction foreground.
Fig. 4 is a complete flow chart of the algorithm of the whole system.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
The invention mainly comprises the following steps
The first step is as follows: acquiring a sample video;
the second step is that: the dropper is considered as the moving target. Extracting a foreground target by using a GMM Gaussian mixture model;
the third step: carrying out binarization processing;
the fourth step: and respectively carrying out integral projection in the horizontal direction and the vertical direction on the binary image in the previous step. And setting a threshold Th, and traversing the projected image in the horizontal direction and the vertical direction respectively. When N is present0< Th, and N0The next 10 continuous pixel points are all larger than Th, then N is considered to be0Is a boundary point of the dropper. Intercepting a burette target area after the boundary point is obtained;
the fifth step: carrying out template matching frame by frame, wherein the matched area of all the frames is the position of the dropper;
preferably, in order to try out the measurement under different environments in different scenes, the invention adopts two different thinking methods for measurement counting.
One is a template matching method and the other is a frame difference method.
The template matching method comprises the following specific steps:
the first step is as follows: hough transform, find the circle. Extracting a liquid drop template;
the second step is that: carrying out template matching frame by frame and extracting a matching area;
the third step: carrying out binarization and morphological processing on the matching area;
the fourth step: counting the pixel values of the target area frame by frame;
the fifth step: given a threshold Th2, a frame greater than the threshold is considered to contain a droplet, and the frame number is recorded;
and a sixth step: and calculating the flow rate according to the difference value of the drop frame and the frame rate fps of the video. Suppose two consecutive video frames are N each1,N2If the frame number difference is Δ N ═ N1-N2. I.e. a drop falls after a time of deltan frames. The dropping speed V is fps/delta N.
For the second method, the frame difference method, shake during shooting has a large influence on the frame difference, so the invention performs anti-shake processing first when this method is used. The frame difference method is implemented in the following specific steps:
the first step is as follows: the method of the invention is utilized to carry out anti-shaking treatment;
the second step is that: and (5) performing interframe difference processing. Since the droplet has not yet fallen in the dropper, the droplet changes slowly. The frame extraction and frame separation method is adopted, and the detection efficiency is improved. In order to prevent frame leakage, the flow rate is different and the frame interval is selected differently according to the standards of dropper of different types;
the third step: and performing morphological processing including opening and closing operation on the image with the previous frame difference.
The fourth step: counting the pixel values of the target area frame by frame;
the fifth step: given a threshold Th 3, a frame greater than the threshold is considered to contain a drop, and the frame number is recorded;
and a sixth step: and calculating the flow rate according to the difference value of the drop frame and the frame rate fps of the video. Suppose two consecutive frames of drops are N1,N2If the frame number difference is Δ N ═ N1-N2. I.e. a drop falls after a time of deltan frames. The flow velocity V is fps/deltan.
Example:
the following are specific examples of the methods employed.
As shown in fig. 1, 2, 3, and 4, the speed measurement method of the present embodiment includes the following steps:
the first step is as follows: shooting a sample video, and converting the video into a static image frame by frame;
the second step is that: selecting a speed measuring method:
2.1 template matching method
2.1.1, firstly extract a liquid drop template; the method uses Hough transform circle extraction. The shape of the drop during dripping is approximately circular. After a circle is extracted, its centre O coordinates (X1, Y1) and radius R are selected, and the droplet template is cropped as the square with top-left corner (X1 − R, Y1 − R) and width and height 2R.
2.1.2, the droplet extracted in the previous step is used as the template and template matching is performed frame by frame on all video frames. The idea of the matching algorithm is that the search template T (m × n pixels) slides over the searched image S (W × H pixels); the region of the searched image covered by the template is called the sub-image Sij, where i, j are the coordinates of the upper-left corner of the sub-image on the searched image S. The search range is:
1≤i≤W-m
1≤j≤H-n
Template matching is completed by comparing the similarity of T and Sij.
2.1.3, binarize the matched area. Because the frames containing a drop differ clearly from the frames without a drop, the choice of binarization method is flexible: local binarization, global binarization or adaptive binarization may all be used. The purpose of binarization is to make the subsequent pixel statistics easier.
2.1.4, count the Sum of 255-valued pixels in the target area. The image is traversed from (0,0), left to right and top to bottom, and every traversed pixel whose value is 255 adds 1 to the accumulator; Sum is therefore the number of pixels whose value is 255.
2.2 frame difference method
2.2.1, extract the foreground with a GMM Gaussian mixture model. First the mean, variance and weight of every Gaussian are set to 0, i.e. the model parameter matrices are initialized. T frames of the video are used to train the GMM model. For each pixel, a GMM with at most GMM_MAX_COMPONT Gaussian components is established. For the first frame, a fixed initial mean and variance are set in the program for the first Gaussian of each pixel, and its weight is set to 1.
During training on the subsequent (non-first) frames, each incoming pixel value is compared with the means of the existing Gaussians; if the difference between the pixel value and a model mean is within 3 times the variance of that Gaussian, the pixel is considered to belong to that Gaussian. In this case the model is updated with the following equations:
w ← (1 − α)·w + α
μ ← (1 − ρ)·μ + ρ·x
σ² ← (1 − ρ)·σ² + ρ·(x − μ)²
where ρ = α/w, α = 1/T, and x is the current pixel value.
When the difference between the pixel value and the mean is not within 3 times the variance, only the weight of that Gaussian is attenuated:
w ← (1 − α)·w
After the training frame number T is reached, the number of GMM components kept for each pixel is selected adaptively. The Gaussians are first sorted from large to small by the ratio of weight to variance, and the first B Gaussians are then selected such that
w1 + w2 + … + wB > Cf
where Cf is typically 0.3.
In this way noise points arising during training are largely eliminated. In the testing stage, the value of each new pixel is compared with the means of the B Gaussians: if the difference from the mean of one of them is within 2 times its variance the pixel is considered background, otherwise it is considered foreground; it is sufficient for a single Gaussian component to satisfy the condition. The foreground is assigned 255 and the background 0, forming a foreground binary image. Because this binary image contains considerable noise, a morphological opening operation is applied to set the noise to 0, followed by a closing operation to reconstruct the edge information lost by the opening; the small unconnected noise points are thereby eliminated.
2.2.2, apply adaptive binarization to the foreground extracted in the previous step.
2.2.3, project the binary image in the horizontal direction and in the vertical direction respectively. A threshold Th1 is selected and the horizontal and vertical projections are traversed: when a projection value N0 < Th1 and the 10 consecutive points after N0 are all greater than Th1, N0 is taken as a boundary point of the dropper, and once the boundary points are obtained the dropper target area is cropped. Similarly, a threshold Th2 is selected: when N1 < Th2, the 5 points before N1 are all greater than Th2 and the 5 points after N1 are all smaller than Th2, N1 is taken as the boundary point on the other side. The four boundary points are found in the same way, and the target area they delimit is cropped as the template.
2.2.4, perform template matching with this template, in the same way as the template matching of 2.1.2.
2.2.5, perform frame-difference processing on the matched areas. The difference is taken with a gap of two frames between the differenced frames, e.g. frames 1, 4, 7, 10, …, which improves the calculation efficiency.
2.2.6, apply opening and closing operations to the difference image, and then count the sum of 255-valued pixels as in 2.1.4.
The third step: extract the numbers of the frames containing a liquid drop. A threshold Th is set; when the count Sum of pixels with value 255 in a frame satisfies Sum > Th, the frame is considered to contain a drop, otherwise it is considered to be free of drops, and the frame number containing the drop is recorded. Different intravenous sets use different switch types and therefore have different flow rates, so the same drop may appear in more than one frame or a drop may be missed. The invention records the number of every frame that meets the drop-frame threshold; these numbers are then traversed, consecutive numbers are regarded as the same drop, and the mean N̄ of each run of consecutive numbers is calculated.
The fourth step: calculate the flow rate from V = fps/ΔN̄. First, the mean frame numbers of the drops obtained above are N̄1, N̄2, N̄3, …; the adjacent differences between drop frames are ΔN1 = N̄2 − N̄1, ΔN2 = N̄3 − N̄2, ΔN3 = N̄4 − N̄3, …; the mean inter-drop difference ΔN̄ is the average of ΔN1, ΔN2, ΔN3, …. This repeated averaging effectively solves the problem of missed frames. The flow rate is
V = fps/ΔN̄
where fps is the frame rate of the video.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (2)

1. A method for measuring intravenous drop flow rate based on machine vision, comprising:
step 1, shooting a sample video, and converting the video into static images frame by frame;
step 2, regarding the dropper as a moving target; extracting a foreground target and carrying out binarization processing;
step 3, extracting the numbers of the frames containing liquid drops: setting a threshold Th, and when the count Sum of pixels with value 255 in a certain frame satisfies Sum > Th, determining that the frame is a liquid-drop-containing frame; otherwise, determining that the frame contains no droplet; recording the frame numbers containing droplets; different intravenous injections have different flow rates due to different switch control types, so that the same drop may appear in more than one frame or a drop may be missed; therefore, recording the number of every frame that meets the drop-frame threshold; then traversing the numbers, regarding consecutive numbers as the same drop, and calculating the mean N̄ of each run of consecutive numbers;
step 4, calculating the flow rate according to V = fps/ΔN̄; firstly, the mean frame numbers of the drops calculated in the above step are N̄1, N̄2, N̄3, …; the adjacent differences between drop frames are ΔN1 = N̄2 − N̄1, ΔN2 = N̄3 − N̄2, ΔN3 = N̄4 − N̄3, …; the mean inter-drop difference ΔN̄ is the average of ΔN1, ΔN2, ΔN3, …; the method of repeatedly calculating the mean effectively solves the problem of missed frames; the flow rate is
V = fps/ΔN̄
fps is the frame rate of the video;
the specific processing step of the step 2 is based on two speed measuring methods, and comprises the following steps:
the first speed measurement method, the template matching method, specifically includes:
step 2.1.1, firstly, extracting a liquid drop template by a Hough transform circle extraction method; the shape of the liquid drop during dripping is approximately circular; after extracting a circle, the centre O coordinates (X1, Y1) and the circle radius R are selected; the liquid drop template is cropped as the square with top-left corner (X1 − R, Y1 − R) and width and height 2R;
step 2.1.2, taking the liquid drop extracted in the previous step as a template and performing template matching on all video frames frame by frame; the idea of the matching algorithm is that a search template T of (m × n) pixels slides over the searched image S of (W × H) pixels, and the region of the searched image covered by the template is called the sub-image Sij, wherein i, j are the coordinates of the upper-left corner of the sub-image on the searched image S; the search range is:
1≤i≤W-m
1≤j≤H-n
the template matching process is completed by comparing the similarity of T and Sij;
step 2.1.3, binarizing the matched area; because the frames containing liquid drops differ greatly from the frames without liquid drops, the binarization method is local binarization or global binarization; the purpose of binarization is to make the subsequent pixel statistics easier;
step 2.1.4, counting the Sum of 255-valued pixels of the target area; the image is traversed from (0,0), left to right and top to bottom, accumulating the count: every traversed pixel whose value is 255 adds 1, and so on; Sum is therefore the number of pixels whose value is 255;
the second speed measurement method, the frame difference method, specifically includes:
step 2.2.1, extracting the foreground by using a GMM Gaussian mixture model; firstly, the mean, variance and weight of every Gaussian are set to 0, namely the model parameter matrices are initialized; T frames of the video are used to train the GMM model; for each pixel, a GMM model with at most GMM_MAX_COMPONT Gaussian components is established; for the first frame, a fixed initial mean value and a fixed initial variance are set in the program for the first Gaussian of each pixel, and its weight is set to 1;
during training on the subsequent non-first frames, each incoming pixel value is compared with the means of the existing Gaussians; if the difference between the value of the pixel point and a model mean is within 3 times the variance of that Gaussian, the pixel value is considered to belong to that Gaussian; in this case the model is updated with the following equations:
w ← (1 − α)·w + α
μ ← (1 − ρ)·μ + ρ·x
σ² ← (1 − ρ)·σ² + ρ·(x − μ)²
wherein ρ = α/w, α = 1/T, and x is the current pixel value;
when the difference between the value of the pixel point and the mean is not within 3 times the variance, only the weight of that Gaussian is attenuated:
w ← (1 − α)·w
after the training frame number T is reached, the number of GMM components of each pixel is selected adaptively; the Gaussians are first sorted from large to small by the ratio of weight to variance, and the first B Gaussians are then selected to satisfy
w1 + w2 + … + wB > Cf
wherein Cf is 0.3
thus, noise points arising during training can be well eliminated; in the testing stage, the value of each new pixel point is compared with the means of the B Gaussians: if the difference from the mean of one of them is within 2 times its variance the pixel is considered background, otherwise it is considered foreground, and it is sufficient for a single Gaussian component to satisfy the condition; the foreground is assigned 255 and the background 0, which forms a foreground binary image; since the foreground binary image contains much noise, a morphological opening operation is used to set the noise to 0, and a closing operation is then used to reconstruct the edge information lost by the opening; the small unconnected noise points are eliminated;
step 2.2.2, carrying out self-adaptive binarization processing on the foreground extracted in the previous step;
2.2.3, respectively projecting the binary image in the horizontal direction and the vertical direction; selecting a threshold Th1 and traversing the horizontal and vertical projections: when a projection value N0 < Th1 and the 10 consecutive points after N0 are all greater than Th1, N0 is taken as a boundary point of the dropper, and after the boundary points are obtained the dropper target area is cropped; similarly, selecting a threshold Th2: when N1 < Th2, the 5 points before N1 are all greater than Th2 and the 5 points after N1 are all smaller than Th2, N1 is taken as the boundary point on the other side; the four boundary points are found in the same way, and the target area they delimit is cropped as the template;
step 2.2.4, the template matching is carried out by using the template, which is consistent with the template matching method implemented in the step 2.1.2;
step 2.2.5, performing frame difference processing on the matched areas; the difference is taken with a gap of two frames between the differenced frames, namely frames 1, 4, 7, 10, …, which improves the calculation efficiency;
and 2.2.6, performing opening and closing operation processing on the difference image, and then performing statistics on pixel value sums according to the method 2.1.4.
2. The method for measuring the flow rate of intravenous injection liquid drops based on machine vision according to claim 1, characterized in that, in view of the complex background and the diversity of the intravenous injection environment, both methods handle the relevant interference factors when step 2 is executed, so that when step 2 is executed either the template matching method or the frame difference method may be selected for speed measurement.
CN201710178754.XA 2017-03-23 2017-03-23 Method for measuring flow velocity of intravenous injection liquid drops based on machine vision Expired - Fee Related CN106997587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710178754.XA CN106997587B (en) 2017-03-23 2017-03-23 Method for measuring flow velocity of intravenous injection liquid drops based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710178754.XA CN106997587B (en) 2017-03-23 2017-03-23 Method for measuring flow velocity of intravenous injection liquid drops based on machine vision

Publications (2)

Publication Number Publication Date
CN106997587A CN106997587A (en) 2017-08-01
CN106997587B true CN106997587B (en) 2020-06-23

Family

ID=59431870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710178754.XA Expired - Fee Related CN106997587B (en) 2017-03-23 2017-03-23 Method for measuring flow velocity of intravenous injection liquid drops based on machine vision

Country Status (1)

Country Link
CN (1) CN106997587B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112986105A (en) * 2021-02-07 2021-06-18 睿科集团(厦门)股份有限公司 Liquid drop counting and speed measuring method based on machine vision

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179995A (en) * 2010-07-15 2013-06-26 陶锴 Iv monitoring by video and image processing
CN105498042A (en) * 2016-01-08 2016-04-20 山东师范大学 Video-based non-light-shielding type transfusion automatic alarm method and device thereof
CN105664297A (en) * 2016-03-14 2016-06-15 英华达(南京)科技有限公司 Infusion monitoring method, system and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3149909A4 (en) * 2014-05-30 2018-03-07 Placemeter Inc. System and method for activity monitoring using video data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179995A (en) * 2010-07-15 2013-06-26 陶锴 Iv monitoring by video and image processing
CN105498042A (en) * 2016-01-08 2016-04-20 山东师范大学 Video-based non-light-shielding type transfusion automatic alarm method and device thereof
CN105664297A (en) * 2016-03-14 2016-06-15 英华达(南京)科技有限公司 Infusion monitoring method, system and device

Also Published As

Publication number Publication date
CN106997587A (en) 2017-08-01

Similar Documents

Publication Publication Date Title
CN106096577B (en) A kind of target tracking method in camera distribution map
CN105405154B (en) Target object tracking based on color-structure feature
CN111539273B (en) Traffic video background modeling method and system
CN108846365B (en) Detection method and device for fighting behavior in video, storage medium and processor
WO2020073860A1 (en) Video cropping method and device
CN109522854A (en) A kind of pedestrian traffic statistical method based on deep learning and multiple target tracking
CN105894542A (en) Online target tracking method and apparatus
US20130243343A1 (en) Method and device for people group detection
CN111553274A (en) High-altitude parabolic detection method and device based on trajectory analysis
CN103325115B (en) A kind of method of monitoring people counting based on overhead camera head
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
WO2015153691A2 (en) Computer-implemented methods, computer-readable media, and systems for tracking a plurality of spermatozoa
CN107133607B (en) Demographics&#39; method and system based on video monitoring
CN102175693A (en) Machine vision detection method of visual foreign matters in medical medicament
CN102307274A (en) Motion detection method based on edge detection and frame difference
CN109145696B (en) Old people falling detection method and system based on deep learning
CN106558224B (en) A kind of traffic intelligent monitoring and managing method based on computer vision
CN102855466B (en) A kind of demographic method based on Computer Vision
CN107408119A (en) Image retrieving apparatus, system and method
CN108447076A (en) Multi-object tracking method based on depth enhancing study
CN106997587B (en) Method for measuring flow velocity of intravenous injection liquid drops based on machine vision
CN104598907A (en) Stroke width figure based method for extracting Chinese character data from image
CN104778676A (en) Depth ranging-based moving target detection method and system
CN108629327A (en) A kind of demographic method and device based on image procossing
CN110889347B (en) Density traffic flow counting method and system based on space-time counting characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200623

Termination date: 20210323