CN115063375A - Image recognition method for automatically analyzing ovulation test paper detection result - Google Patents

Publication number
CN115063375A
Authority
CN
China
Prior art keywords
line
matrix
test paper
value
matrix block
Prior art date
Legal status
Granted
Application number
CN202210731263.4A
Other languages
Chinese (zh)
Other versions
CN115063375B (en)
Inventor
洪志令
陈柏伶
Current Assignee
Xiamen Zhong Ling Yi Yong Technology Co ltd
Original Assignee
Xiamen Zhong Ling Yi Yong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Zhong Ling Yi Yong Technology Co ltd filed Critical Xiamen Zhong Ling Yi Yong Technology Co ltd
Publication of CN115063375A
Application granted granted Critical
Publication of CN115063375B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/20 — Image preprocessing
    • G06V10/28 — Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/56 — Extraction of image or video features relating to colour
    • G06V10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 — Arrangements using classification, e.g. of video objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image recognition method for automatically analyzing the detection result of an ovulation test paper. First, the picture is compressed and converted to a grayscale image, its RGB and HSV matrices are computed, and a 0-1 matrix of the picture is derived from threshold intervals on the HSV matrix. Next, the range of the test paper in the picture is captured: the outline is extracted with a Sobel edge-detection operator and the OTSU algorithm, the upper and lower boundaries of the test paper are captured by Hough line transform, and the right half of the ovulation test paper is analyzed together with its features and the Hough results to obtain the test paper interval. Finally, feature judgment on the 0-1 matrix screens out the intervals of the T line and the C line, and the mean R values of the RGB space within those intervals are compared to obtain the detection classification result of the test paper.

Description

Image recognition method for automatically analyzing ovulation test paper detection result
Technical Field
The invention belongs to the technical field of computer image recognition, and relates to an image recognition method for automatically analyzing an ovulation test paper detection result.
Background
The ovulation test paper predicts whether ovulation will occur by detecting the peak level of luteinizing hormone (LH): the LH peak in a woman's urine occurs within 24-48 hours before ovulation, and the examination result on the test paper is then positive. If two distinct color bands appear on the test paper, LH has peaked, meaning ovulation will occur within 24-48 hours. If only one red band appears on the strip, no ovulation is indicated. If no color band appears on the control line — that is, not even one band is present — the test itself has a problem: most likely the detection method was incorrect or the reagent has expired, and the test should be repeated with a new strip.
An ordinary user can judge negative versus positive by visual observation. Visual judgment, however, is highly subjective, and users differ in their understanding of the ovulation test paper, so finer states such as strong positive and weak positive cannot be distinguished reliably; hence it cannot be accurately judged whether the user is in the ovulation period.
Therefore, to reduce misidentification caused by subjective factors and insufficient user knowledge, automatic image recognition is introduced into this scenario: picture features are extracted to capture the test paper area, and the detection result is obtained by analyzing the test paper features together with the features extracted from the picture.
Disclosure of Invention
The invention discloses an image recognition method for automatically analyzing the detection result of an ovulation test paper. First, the picture is compressed and converted to a grayscale image, its RGB and HSV matrices are computed, and a 0-1 matrix of the picture is derived from threshold intervals on the HSV matrix. Next, the range of the test paper in the picture is captured: the outline is extracted with a Sobel edge-detection operator and the OTSU algorithm, the upper and lower boundaries of the test paper are captured by Hough line transform, and the right half of the ovulation test paper is analyzed together with its features and the Hough results to obtain the test paper interval. Finally, feature judgment on the 0-1 matrix screens out the intervals of the T line and the C line, and the mean R values of the RGB space within those intervals are compared to obtain the detection classification result of the test paper. Compared with the prior art, the method needs none of the large labeled datasets that deep learning requires: digital image processing exploits the color-distribution characteristics of the test paper; Sobel operator computation, OTSU edge extraction and Hough line transform capture the linear features of the picture; the test paper interval is obtained by combining the spatial feature distribution of the picture's 0-1 matrix; mean R values are compared according to the color difference between the T line and the C line; and the precise state corresponding to the test paper detection result is finally identified.
The method comprises the following steps:
(1) Compress the test paper picture and transform it to HSV space
(2) Capture the test paper target area through edge extraction, binarization, Hough transform and related operations
(3) Locate and capture the areas of the T line and the C line in the test result of the test paper
(4) Classify the state of the detection result
Step (1) compresses the test paper picture and transforms it to HSV space, specifically as follows. The original picture is first compressed to a width of 500 pixels while keeping the same aspect ratio as the original picture. The RGB value of each pixel point of the compressed picture is then obtained; every RGB value corresponds to a value in HSV (hue, saturation, value) space, so the HSV value of each pixel point can be computed by formula, giving the HSV matrix of the picture. The conversion from RGB space to HSV space is as follows:
With R, G, B scaled to [0, 1], let max = max(R, G, B) and min = min(R, G, B). Then:

H = 60° × (G − B) / (max − min)          if max = R
H = 60° × (B − R) / (max − min) + 120°   if max = G
H = 60° × (R − G) / (max − min) + 240°   if max = B
(add 360° if H < 0; H = 0 when max = min)

S = (max − min) / max   (S = 0 when max = 0)

V = max(R, G, B)

The HSV space is solved according to these formulas.
The test paper target area in step (2) is captured through edge extraction, binarization and Hough transform, specifically as follows. The color image is converted to a grayscale image, which is mean-filtered with a 3 × 3 convolution kernel to smooth it. A Sobel operator is then applied to extract the edges of the picture, and the OTSU adaptive threshold is used to delineate the outline. Straight lines in the image are then detected by Hough transform, with an angular resolution of π/180 rad. From the captured set of lines, lines that are too short are filtered out first: a line is discarded when its length is less than 0.4 times the image width (200 pixels) or its incline angle exceeds 10°, yielding a line set. The line set is then traversed and the lines are paired two by two: if the slopes of two lines differ by no more than 10% and the gap between them is more than 5 and less than 80 pixels, the two lines form a candidate line pair. The 0-1 matrix of the image is computed: a pixel's entry is set to 1 when it simultaneously satisfies
(0 ≤ H ≤ 60 or 300 ≤ H ≤ 360), S ≥ 0.09 and V ≥ 0.3,
and to 0 otherwise. Interference blocks are then eliminated from the 0-1 matrix: rows in which fewer than 5% of the entries are 1 are filtered out horizontally, and columns in which fewer than 20% are 1 are filtered out vertically. By the design of the test paper, a red shaded area lies on its right side. Using this feature, each candidate line pair is checked for a continuous matrix block between the two lines whose height exceeds 5 pixels, whose width is at least 0.3 times the image width (150 pixels), and in which the entries of the 0-1 matrix equal to 1 account for more than 80% of the total within that range. If no such block exists, the line pair is judged not to meet the condition and is filtered out. If one exists, the upper and lower boundaries of the continuous matrix block are obtained and compared with the candidate line pair. If they coincide, the candidate line pair is judged directly to be the upper and lower boundaries of the test paper; if not, the upper and lower boundaries of the red shaded area are taken as those of the test paper. The range of the test paper is obtained by these rules.
Step (3) locates and captures the areas of the T line and the C line in the test result of the test paper, specifically as follows. Columns that do not conform to the T-line and C-line features are filtered out of the test paper area first: a column is judged non-conforming if the entries equal to 1 in that column of the test paper's 0-1 matrix account for less than 20% of the column total. After the non-conforming columns are filtered out, the 0-1 matrix of the test paper area splits into a set of small 0-1 matrix blocks, within which the T and C lines are screened further. Among the remaining small blocks, the aspect ratio — the transverse length of the block divided by its longitudinal length — is checked, and the blocks whose ratio lies between 1:10 and 1:2 are retained. Similar matrix blocks are then matched within the remaining set by judging the similarity between different blocks: if the ratio of the counts of entries equal to 1 in two 0-1 matrix blocks is greater than 0.4 and less than 2.5, the ratio of their row counts is greater than 0.6 and less than 1.666, and the ratio of their column counts is greater than 0.33 and less than 3, the blocks are deemed similar matrix blocks; the block closer to the red shaded area is the T line and the one on the other side is the C line. If several matched similar matrix blocks exist, they are filtered further.
The height of a matrix block is required to be less than the height of the red shaded area and greater than 0.8 times that height, and the entries equal to 1 must account for more than 50% of the 0-1 matrix block; blocks that fail these conditions are filtered out. This judgment pinpoints the unique pair of similar matrix blocks. If no similar matrix blocks are detected at this point, the remaining blocks are filtered instead: a block's height must lie within the interval of the red shaded area, and the lateral distance from the block to the red shaded area is computed and the block is filtered out when that distance exceeds 4 times the height of the red shaded area. Each remaining block is then either a T line or a C line: if its lateral distance to the red shaded area is more than 0.8 and less than 1.5 times the height of the red shaded area, the block is a T line; if more than 1.5 times, it is a C line. If only a C line exists and no T line, the test result of the test paper is negative; if only a T line exists and no C line, the test result is invalid.
Step (4) classifies the state of the detection result, specifically as follows. The R values of the T line and the C line are computed separately: the R value of the T line is the mean of the R values of the pixels in its matrix block, and the R value of the C line is obtained in the same way. The two are then compared: if the R value of the T line is smaller than that of the C line, the result is judged strong positive; otherwise, if the R value of the T line is less than 1.35 times that of the C line, it is judged positive; if less than 1.9 times, weak positive; and otherwise negative. The detection classification result of the test paper is obtained by these rules.
Drawings
FIG. 1 is a user interface for photographing the results of the ovulation test strip test;
FIG. 2 is an image processed by a Sobel edge detection operator;
FIG. 3 is a profile image extracted after an OTSU algorithm;
FIG. 4 is an image synthesized from the original image and the detected straight lines after Hough line transform and conditional constraint filtering;
FIG. 5 is an effect diagram after 0-1 matrix acquisition is performed on an image HSV space;
FIG. 6 shows the fixed red area on the right side of the ovulation strip.
Detailed Description
The method of the present invention is described in detail below with reference to the accompanying drawings and examples.
Test paper detection automatically identifies the luteinizing hormone (LH) level displayed by the ovulation test paper to judge whether the user is currently ovulating. This patent analyzes the test paper image by image recognition to obtain a precise recognition result; the recognized test paper states include negative, weak positive and strong positive. The method can automatically identify the test paper against different backgrounds. The test paper must be photographed at a constrained angle: as shown in FIG. 1, it must be placed inside the dashed viewfinder frame when the photo is taken.
Because photographs are affected by varying lighting, backgrounds and phone camera quality, a method for automatically detecting the state of the ovulation test paper is provided. Digital image recognition is applied to the test paper photo, the test paper area in the picture is captured, the T line and the C line are located from the position and color factors of that area, and the state of the test paper is then confirmed from the color distribution of the T line and the C line.
Firstly, carrying out picture compression and HSV space transformation on the test paper picture.
The original picture is first compressed and processed into a picture of 500 pixels in width while maintaining the same aspect ratio as the original picture.
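For concreteness, the proportional compression can be sketched in Python with NumPy. The nearest-neighbour sampling and the function name are illustrative assumptions, not the patent's implementation; a library call such as OpenCV's resize would normally be used:

```python
import numpy as np

def compress_to_width(img, target_w=500):
    """Resize an H x W (or H x W x C) image to target_w pixels wide,
    keeping the aspect ratio, with nearest-neighbour sampling
    (a minimal stand-in for a library resize such as cv2.resize)."""
    h, w = img.shape[:2]
    target_h = max(1, round(h * target_w / w))
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(target_w) * w / target_w).astype(int)
    return img[rows][:, cols]
```

A 1000 × 300 picture, for example, comes out 500 pixels wide and 150 pixels tall, preserving the 10:3 ratio.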
The RGB value of each pixel point of the compressed picture is then obtained, and its HSV value is computed by the following formulas, giving the HSV matrix of the picture. The conversion from RGB space to HSV space is as follows:
With R, G, B scaled to [0, 1], let max = max(R, G, B) and min = min(R, G, B). Then:

H = 60° × (G − B) / (max − min)          if max = R
H = 60° × (B − R) / (max − min) + 120°   if max = G
H = 60° × (R − G) / (max − min) + 240°   if max = B
(add 360° if H < 0; H = 0 when max = min)

S = (max − min) / max   (S = 0 when max = 0)

V = max(R, G, B)

The HSV space is solved according to these formulas.
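A minimal per-pixel implementation of the standard RGB-to-HSV conversion, with H in degrees and S, V in [0, 1]; this is an illustrative sketch, since the patent does not prescribe code:

```python
def rgb_to_hsv(r, g, b):
    """Convert one pixel from RGB (each channel in [0, 1]) to HSV with
    H in degrees [0, 360) and S, V in [0, 1]."""
    v = max(r, g, b)
    c = v - min(r, g, b)            # chroma: max - min
    s = 0.0 if v == 0 else c / v    # S = 0 for black pixels
    if c == 0:                      # grey pixel: hue undefined, use 0
        h = 0.0
    elif v == r:
        h = 60.0 * (g - b) / c
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / c
    else:
        h = 240.0 + 60.0 * (r - g) / c
    return h % 360.0, s, v          # fold negative hues into [0, 360)
```

Applying this to every pixel of the compressed picture yields the HSV matrix used in the later steps.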
Second, the test paper target area is captured through edge extraction, binarization, Hough transform and related operations.
First, the color image is converted to a grayscale image using the formula Gray = (R + G + B)/3, and the grayscale image is mean-filtered with a 3 × 3 convolution kernel to smooth it.
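The grayscale conversion and the 3 × 3 mean filter can be sketched as follows; `sliding_window_view` and the edge-replicated border are implementation assumptions, not specified by the text:

```python
import numpy as np

def to_gray(rgb):
    """Gray = (R + G + B) / 3, averaging the channel axis."""
    return rgb.astype(float).mean(axis=2)

def mean_filter_3x3(gray):
    """3 x 3 mean filter; borders are handled by edge replication."""
    p = np.pad(gray.astype(float), 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    return win.mean(axis=(-2, -1))  # average each 3x3 neighbourhood
```

The filter leaves flat regions unchanged and attenuates isolated noise pixels before edge extraction.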
A Sobel operator is then applied to extract the edges of the picture. Sobel combines Gaussian smoothing with differential derivation to approximate the gradient of the image intensity function; where the gradient magnitude is large, the image content changes significantly. The Sobel operator differentiates in two directions. Let the image be I. The horizontal gradient Gx is computed as

          | -1  0  +1 |
   Gx  =  | -2  0  +2 |  *  I
          | -1  0  +1 |

and the vertical gradient Gy as

          | -1  -2  -1 |
   Gy  =  |  0   0   0 |  *  I
          | +1  +2  +1 |

After the convolution, the gradient magnitude at each pixel of the original image is

   G = sqrt(Gx² + Gy²).
The image after the Sobel operator is shown in fig. 2.
Outline delineation is performed by using an OTSU adaptive threshold method, and an outline image extracted after an OTSU algorithm is shown in FIG. 3.
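Both steps — the Sobel gradient magnitude and Otsu's between-class-variance threshold — can be sketched with NumPy; the border handling and the vectorization details are assumptions, not from the patent:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # [[-1,-2,-1],[0,0,0],[1,2,1]]

def sobel_magnitude(gray):
    """Per-pixel gradient magnitude G = sqrt(Gx^2 + Gy^2),
    with edge-replicated borders."""
    p = np.pad(gray.astype(float), 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    gx = (win * SOBEL_X).sum(axis=(-2, -1))
    gy = (win * SOBEL_Y).sum(axis=(-2, -1))
    return np.hypot(gx, gy)

def otsu_threshold(gray):
    """Otsu's method: choose the threshold t that maximises the
    between-class variance w_b * w_f * (m_b - m_f)^2."""
    hist = np.bincount(gray.ravel().astype(np.uint8),
                       minlength=256).astype(float)
    total = hist.sum()
    bins = np.arange(256)
    sum_all = (bins * hist).sum()
    w_b = np.cumsum(hist)               # background weight up to t
    w_f = total - w_b                   # foreground weight above t
    sum_b = np.cumsum(bins * hist)
    valid = (w_b > 0) & (w_f > 0)
    m_b = np.where(valid, sum_b / np.maximum(w_b, 1), 0)
    m_f = np.where(valid, (sum_all - sum_b) / np.maximum(w_f, 1), 0)
    var = np.where(valid, w_b * w_f * (m_b - m_f) ** 2, -1)
    return int(var.argmax())
```

Thresholding the Sobel magnitude at the Otsu value yields the binary outline image of FIG. 3.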
Straight lines in the image are then captured by Hough transform line detection, with an angular resolution of π/180 rad. The image after the Hough line transform is shown in FIG. 4. From the captured set of lines, lines that are too short are filtered out first: a line is discarded when its length is less than 0.4 times the image width (200 pixels) or its incline angle exceeds 10°. This yields a line set.
The line set is traversed further and the lines are paired two by two: if the slopes of two lines differ by no more than 10% and the gap between them is more than 5 and less than 80 pixels, the two lines form a candidate line pair.
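The length/angle filter and the pairing rule can be sketched as follows, assuming line segments given as (x1, y1, x2, y2) tuples (the form a probabilistic Hough routine such as OpenCV's HoughLinesP returns) and a 500-pixel-wide image; measuring the gap between midpoint y-coordinates is an interpretive assumption for near-horizontal lines:

```python
import math
from itertools import combinations

IMG_W = 500  # compressed image width from step (1)

def keep_line(seg):
    """Keep a segment only if it is long enough (>= 0.4 x image width)
    and nearly horizontal (incline <= 10 degrees)."""
    x1, y1, x2, y2 = seg
    length = math.hypot(x2 - x1, y2 - y1)
    angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
    angle = min(angle, 180 - angle)  # fold direction into [0, 90]
    return length >= 0.4 * IMG_W and angle <= 10

def candidate_pairs(segs):
    """Pair segments whose slopes differ by at most 10% and whose
    vertical gap (midpoint to midpoint) is in (5, 80) pixels."""
    def slope(s):
        x1, y1, x2, y2 = s
        return (y2 - y1) / (x2 - x1) if x2 != x1 else float('inf')
    def mid_y(s):
        return (s[1] + s[3]) / 2
    pairs = []
    for a, b in combinations(segs, 2):
        sa, sb = slope(a), slope(b)
        base = max(abs(sa), abs(sb), 1e-6)  # guard divide-by-zero for flat lines
        if abs(sa - sb) / base > 0.10:
            continue
        gap = abs(mid_y(a) - mid_y(b))
        if 5 < gap < 80:
            pairs.append((a, b))
    return pairs
```

Two long horizontal segments 30 pixels apart survive the filter and become one candidate pair; a 50-pixel segment is dropped by the length rule.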
The 0-1 matrix of the image is computed: a pixel's entry in the 0-1 matrix is set to 1 when it simultaneously satisfies

(0 ≤ H ≤ 60 or 300 ≤ H ≤ 360), S ≥ 0.09 and V ≥ 0.3,

and to 0 otherwise, yielding the 0-1 matrix of the image. Interference blocks are then eliminated from the obtained matrix: rows in which fewer than 5% of the entries are 1 are filtered out horizontally, and columns in which fewer than 20% are 1 are filtered out vertically. The printed effect of the final 0-1 matrix is shown in FIG. 5.
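The 0-1 matrix thresholds (as given in claim step (3)) and the interference filtering can be sketched as follows; applying the row filter before the column filter is one possible ordering, not specified by the text:

```python
import numpy as np

def binary_mask(h, s, v):
    """0-1 matrix: 1 where the pixel looks red-ish.
    Thresholds from the text: H in [0, 60] or [300, 360] degrees,
    S >= 0.09, V >= 0.3 (S and V in [0, 1])."""
    red_hue = ((h >= 0) & (h <= 60)) | ((h >= 300) & (h <= 360))
    return (red_hue & (s >= 0.09) & (v >= 0.3)).astype(np.uint8)

def filter_noise(mask):
    """Zero out rows where fewer than 5% of entries are 1, then
    columns where fewer than 20% are 1 (order is an assumption)."""
    out = mask.copy()
    out[out.mean(axis=1) < 0.05, :] = 0
    out[:, out.mean(axis=0) < 0.20] = 0
    return out
```

The surviving 1-entries form the red blocks analyzed against the candidate line pairs.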
By the design of the test paper, a red shaded area is present on its right side, as shown in FIG. 6. Using this feature, we traverse each candidate line pair and check whether the matrix between the two lines contains a continuous matrix block whose height exceeds 5 pixels, whose width is at least 0.3 times the image width (150 pixels), and in which the entries of the 0-1 matrix equal to 1 account for more than 80% of the total within that range. If the condition is not met, the line pair is judged non-conforming and is filtered out.
If it is met, the upper and lower boundaries of the continuous matrix block are obtained and compared with the candidate line pair. If they coincide, the candidate line pair is judged directly to be the upper and lower boundaries of the test paper; if not, the upper and lower boundaries of the red shaded area are taken as those of the test paper. The range of the test paper is obtained by these rules.
And thirdly, positioning and capturing the areas of the T line and the C line in the test result of the test paper.
The areas of the T line and the C line are judged further within the range of the test paper. Columns that do not conform to the T-line and C-line features are filtered out first: a column is judged non-conforming if the entries equal to 1 in that column of the test paper's 0-1 matrix account for less than 20% of the column total.
After the non-conforming columns are filtered out, the 0-1 matrix of the test paper area splits into a set of small 0-1 matrix blocks. We further screen the ranges of the T and C lines within this set.
Among the remaining small matrix blocks, the aspect ratio — the transverse length of the block divided by its longitudinal length — is checked, and the matrix blocks whose ratio lies between 1:10 and 1:2 are retained.
Similar matrix blocks are then matched within the remaining set by judging the similarity between different blocks: if the ratio of the counts of entries equal to 1 in two 0-1 matrix blocks is greater than 0.4 and less than 2.5, the ratio of their row counts is greater than 0.6 and less than 1.666, and the ratio of their column counts is greater than 0.33 and less than 3, the blocks are deemed similar matrix blocks; the block closer to the red shaded area is the T line and the one on the other side is the C line.
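The aspect-ratio screening and the similarity test can be sketched on blocks summarized as height, width and count of 1-entries (a representation assumed here for illustration):

```python
def aspect_ok(block):
    """Keep blocks whose width/height ratio lies between 1:10 and 1:2
    (T and C lines are narrow bands)."""
    r = block['w'] / block['h']
    return 1 / 10 < r < 1 / 2

def similar(a, b):
    """Similarity test from the text: the counts of 1-entries, the row
    counts and the column counts of the two blocks must all be of the
    same order (ratios within the stated ranges)."""
    return (0.4 < a['ones'] / b['ones'] < 2.5
            and 0.6 < a['h'] / b['h'] < 1.666
            and 0.33 < a['w'] / b['w'] < 3)
```

Two narrow blocks of comparable size pass both tests and become the T/C candidate pair; a wide block fails the aspect screen.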
If several matched similar matrix blocks exist, they are filtered further. The height of a matrix block is required to be less than the height of the red shaded area and greater than 0.8 times that height, and the entries equal to 1 must account for more than 50% of the 0-1 matrix block; blocks that fail these conditions are filtered out. This judgment pinpoints the unique pair of similar matrix blocks.
If no similar matrix blocks are detected at this point, the remaining blocks are filtered instead. A block's height must lie within the interval of the red shaded area; the lateral distance from the block to the red shaded area is computed, and the block is filtered out when that distance exceeds 4 times the height of the red shaded area. Each remaining block is then either a T line or a C line: if its lateral distance to the red shaded area is more than 0.8 and less than 1.5 times the height of the red shaded area, the block is a T line; if more than 1.5 times, it is a C line. If only a C line exists and no T line, the test result of the test paper is negative; if only a T line exists and no C line, the test result is invalid.
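The lateral-distance rule for this no-similar-pair fallback can be sketched as follows; treating the 1.5× boundary as belonging to the T line, and distances below 0.8× as unclassified, are interpretive assumptions, as are the parameter names:

```python
def assign_line(block_x, shade_x, shade_h):
    """Classify a remaining block by its lateral distance to the red
    shaded area, measured in multiples of the shaded area's height:
    > 4x -> discard; 0.8x..1.5x -> T line; 1.5x..4x -> C line."""
    d = abs(shade_x - block_x) / shade_h
    if d > 4:
        return None          # too far from the shaded area: filtered
    if 0.8 < d <= 1.5:
        return 'T'
    if d > 1.5:
        return 'C'
    return None              # closer than 0.8x: not T or C per the rule
```

With a shaded-area height of 20 pixels, a block 30 pixels away is a T line and one 50 pixels away is a C line.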
And fourthly, carrying out state classification on the detection result.
The above judgment yields the matrix blocks corresponding to the T line and the C line. The R values of the two lines are then computed separately: the R value of the T line is the mean of the R values of the pixels in its matrix block, and the R value of the C line is obtained in the same way.
The R values of the T line and the C line are compared: if the R value of the T line is smaller than that of the C line, the result is judged strong positive; otherwise, if the R value of the T line is less than 1.35 times that of the C line, it is judged positive; if less than 1.9 times, weak positive; and otherwise negative. The detection classification result of the test paper is obtained by these rules.
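The final classification can be sketched as follows; reading the text's threshold chain as an ordered cascade — strong positive, positive, weak positive, negative — is an interpretation of the translated wording, since a darker red band has a lower mean R value:

```python
def classify(t_r, c_r):
    """State classification from the mean R values of the T and C
    lines (darker red = lower R):
      R(T) <  R(C)         -> strong positive
      R(T) < 1.35 x R(C)   -> positive
      R(T) < 1.9  x R(C)   -> weak positive
      otherwise            -> negative"""
    if t_r < c_r:
        return 'strong positive'
    if t_r < 1.35 * c_r:
        return 'positive'
    if t_r < 1.9 * c_r:
        return 'weak positive'
    return 'negative'
```

For example, with a C-line mean R of 150, a T-line mean R of 100 is strong positive, 180 is positive, 260 is weak positive, and 300 is negative.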
In summary, the invention provides an image recognition method for automatically analyzing the detection result of the ovulation test paper. Based on digital image processing, the method compresses the image, extracts edges with the Sobel operator, delineates the outline with OTSU adaptation, detects straight lines with the Hough transform, and judges the range of the test paper from the thresholds of the test paper area. Once the range is obtained, the 0-1 matrix distribution is derived from the HSV matrix of that range, the distribution intervals of the T line and the C line are analyzed and captured under the threshold conditions, and finally threshold judgment on the 0-1 matrix distributions of the T line and the C line together with the test paper features yields the detection classification result.
Although specific embodiments of the invention have been disclosed for illustrative purposes, with accompanying drawings, to promote an understanding of the principles of the invention and its implementation, those skilled in the art will recognize that alterations, changes and modifications are possible without departing from the spirit and scope of the invention as defined in the appended claims. The invention should therefore not be limited to the disclosure of the preferred embodiments and the accompanying drawings; the presently disclosed embodiments are to be considered in all respects illustrative and not restrictive of the scope of the appended claims.

Claims (2)

1. An image recognition method for automatically analyzing an ovulation test paper detection result is characterized by comprising the following steps:
(1) acquiring a picture, compressing the picture, and processing the picture into a picture with the width of 500 pixels under the condition of keeping the same aspect ratio as that of the original picture;
(2) and obtaining the RGB space of each pixel point of the picture, and obtaining the HSV space of the picture according to the RGB space conversion. Thereby obtaining the RGB value and HSV value of each pixel point in the picture, and respectively storing the RGB value and HSV value as two-dimensional matrix arrays;
(3) constructing a 0-1 matrix of the picture according to the HSV value of each pixel point of the picture; scanning each pixel point in the image, and marking as a target pixel point as 1 when the H value is 0-60 or 300-360, the S value is not less than 0.09, and the V value is not less than 0.3; the non-conforming label is 0, resulting in a 0-1 matrix;
(4) eliminating interference blocks according to the obtained 0-1 matrix, wherein the number of transverse filtering 1 is less than 5% of rows, and the number of longitudinal filtering 1 is less than 20% of columns;
(5) performing edge extraction on the picture by using a Sobel operator;
(6) using an OTSU adaptive algorithm to outline, and outlining the test paper;
(7) straight line detection is performed by using Hough transform, and straight lines in an image are captured. The angle precision of radian measurement in Hough line transformation adopts pi/180. In the captured straight line set, firstly filtering out the straight line with too short straight line length, and when the length of the straight line is less than 0.4 times of the image width (200 pixel points) or the inclination angle of the straight line is more than 10 degrees, filtering out the straight line, thereby obtaining a straight line set;
(8) searching for the red shadow area on the right side of the test paper: traversing the region between each candidate straight line pair to check whether it contains a continuous matrix block with a height of more than 5 pixel points and a width of at least 0.3 times the image width (150 pixel points), in which the entries equal to 1 in the corresponding 0-1 matrix account for more than 80% of the total;
(9) positioning and capturing the image interval of the test paper: acquiring the upper and lower boundaries of the continuous matrix block and judging whether it overlaps the candidate straight line pair; if it overlaps, the candidate straight line pair is directly taken as the upper and lower boundaries of the test paper; if not, the upper and lower boundaries of the red shadow area are taken as the upper and lower boundaries of the test paper; the range of the test paper is obtained based on these rules;
(10) screening for the ranges of the T line and the C line within the test paper interval according to their characteristics: if the number of 1s in a column of the 0-1 matrix corresponding to the test paper is less than 20% of the column total, that column does not match the characteristics of the T line or the C line; after filtering out the columns that do not meet this condition, the 0-1 matrix corresponding to the test paper area is divided into a set of small 0-1 matrix blocks; the aspect ratio of each remaining small matrix block, defined as the transverse length of the block divided by its longitudinal length, is then checked, and the matrix blocks whose aspect ratio lies between 1:10 and 1:2 are retained;
(11) matching similar matrix blocks by judging the similarity between different matrix blocks: if the ratio of the counts of 1s in two 0-1 matrix blocks is greater than 0.4 and less than 2.5, the ratio of their row counts is greater than 0.6 and less than 1.666, and the ratio of their column counts is greater than 0.33 and less than 3, the two blocks are deemed similar matrix blocks; the matrix block closer to the red shadow area is the T line, and the other is the C line;
(12) if multiple matched similar matrix blocks exist, further filtering is performed: the height of a matrix block must be smaller than the height of the red shadow area and larger than 0.8 times that height, and the proportion of 1s in the 0-1 matrix block must exceed 50% of the block; blocks in which the proportion of 1s does not exceed 50% are filtered out. The unique pair of similar matrix blocks can be accurately located through this judgment. If no similar matrix block is detected at this point, the remaining matrix blocks are filtered: the height of a block must lie within the interval of the red shadow area, and the lateral distance from the block to the red shadow area is calculated; the block is filtered out if this distance is greater than 4 times the height of the red shadow area. Each remaining matrix block is a T line or a C line: if its lateral distance to the red shadow area is more than 0.8 times and less than 1.5 times the height of the red shadow area, it is the T line; if more than 1.5 times, it is the C line. If only a C line exists and no T line, the test result of the test paper is negative; if only a T line exists and no C line, the test result of the test paper is invalid;
(13) computing the R values of the T line and the C line: the R values in the matrix block corresponding to the T line are averaged to obtain the R value of the T line, and the R value corresponding to the C line is obtained in the same way. The two R values are then compared: if the R value of the C line is greater than the R value of the T line, the result is judged strong positive; otherwise, if the R value of the T line is less than 1.35 times the R value of the C line, positive; if less than 1.9 times, weak positive; and if 1.9 times or more, negative; the detection classification result of the test paper is obtained according to these rules.
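As an illustration only (not part of the claims), the HSV thresholding of steps (2)-(3) can be sketched in Python; the function name binarize_strip and the use of the standard-library colorsys module are assumptions, while the H, S, and V thresholds are those stated in the claim:

```python
import colorsys

import numpy as np


def binarize_strip(rgb):
    """Build the 0-1 'target pixel' mask described in claim steps (2)-(3).

    rgb: H x W x 3 array of floats in [0, 1].
    A pixel is marked 1 when its hue is in [0, 60] or [300, 360] degrees,
    its saturation is >= 0.09, and its value is >= 0.3 (claim thresholds).
    """
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*rgb[i, j])
            deg = hh * 360.0  # colorsys returns hue in [0, 1)
            if (deg <= 60 or deg >= 300) and ss >= 0.09 and vv >= 0.3:
                mask[i, j] = 1
    return mask
```

A pure red pixel (1, 0, 0) satisfies all three conditions; a blue pixel fails the hue test, a dark red pixel fails the value test, and a white pixel fails the saturation test.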
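The Sobel edge extraction named in step (5) can likewise be sketched with plain NumPy; this is a generic 3x3 Sobel gradient magnitude rather than the patent's own implementation, and the function name is hypothetical:

```python
import numpy as np


def sobel_magnitude(gray):
    """Gradient magnitude with the standard 3x3 Sobel kernels (step (5)).

    gray: 2-D float array; returns an array of the same shape with
    border pixels left at zero.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = gray.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```

On a vertical step edge the response peaks at the boundary columns and vanishes in flat regions, which is what makes the test-paper borders stand out before contour extraction.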
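The similar-block test of step (11) reduces to three ratio checks; a minimal sketch follows, with a hypothetical function name and the ratio bounds taken verbatim from the claim:

```python
import numpy as np


def is_similar(a, b):
    """Step (11): two 0-1 matrix blocks are 'similar' when
    - the ratio of their counts of 1s is in (0.4, 2.5),
    - the ratio of their row counts is in (0.6, 1.666), and
    - the ratio of their column counts is in (0.33, 3).
    """
    ones = a.sum() / max(b.sum(), 1)  # guard against an all-zero block
    rows = a.shape[0] / b.shape[0]
    cols = a.shape[1] / b.shape[1]
    return (0.4 < ones < 2.5) and (0.6 < rows < 1.666) and (0.33 < cols < 3)
```

Two tall, narrow blocks of nearly equal size pass all three checks; a block three times shorter fails the count-of-1s bound.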
2. The method of claim 1, wherein the method comprises positioning and identifying the test paper interval, and identifying the state of the test paper detection result;
positioning and identifying the test paper interval: after the edge features are extracted by the Sobel operator and the OTSU algorithm as described in steps 7 to 9, straight lines are extracted by the Hough line transform; straight lines that are too short or inclined at too large an angle are filtered out of the captured set; the region between each candidate straight line pair is traversed to check whether it contains a continuous matrix block with a height of more than 5 pixel points and a width of at least 0.3 times the image width (150 pixel points), in which the entries equal to 1 in the corresponding 0-1 matrix account for more than 80% of the total; if such a block is matched, the interval is determined to be the red shadow interval; the upper and lower boundaries of the continuous matrix block are acquired and checked for overlap with the candidate straight line pair; if they overlap, the candidate straight line pair is directly taken as the upper and lower boundaries of the test paper; if not, the upper and lower boundaries of the red shadow area are taken as the upper and lower boundaries of the test paper;
identifying the state of the test paper detection result: as in step 13, the R values in the matrix block corresponding to the T line are averaged to obtain the R value of the T line, and the R value corresponding to the C line is obtained in the same way; the two R values are then compared: if the R value of the C line is greater than the R value of the T line, the result is judged strong positive; otherwise, if the R value of the T line is less than 1.35 times the R value of the C line, positive; if less than 1.9 times, weak positive; and if 1.9 times or more, negative.
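The R-value comparison restated above can be expressed as a small decision function; the function name, the return labels, and the reading that a lower mean R value corresponds to a darker line are assumptions, while the 1.35 and 1.9 thresholds come from the claim:

```python
def classify(r_t, r_c):
    """Classify a strip from the mean R values of its T and C lines (step 13).

    A lower R value is taken to mean a darker (stronger) red line.
    - T line darker than C line (r_t < r_c)  -> strong positive
    - r_t < 1.35 * r_c                       -> positive
    - r_t < 1.9  * r_c                       -> weak positive
    - otherwise                              -> negative
    """
    if r_t < r_c:  # equivalently: C line's R value exceeds T line's
        return "strong positive"
    if r_t < 1.35 * r_c:
        return "positive"
    if r_t < 1.9 * r_c:
        return "weak positive"
    return "negative"
```

For example, under these assumptions a T-line mean of 130 against a C-line mean of 100 falls below the 1.35 bound and reads as positive, while a T-line mean of 200 exceeds the 1.9 bound and reads as negative.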
CN202210731263.4A 2022-02-18 2022-06-24 Image recognition method for automatically analyzing ovulation test paper detection result Active CN115063375B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210148904 2022-02-18
CN2022101489043 2022-02-18

Publications (2)

Publication Number Publication Date
CN115063375A true CN115063375A (en) 2022-09-16
CN115063375B CN115063375B (en) 2024-06-04

Family

ID=83202100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210731263.4A Active CN115063375B (en) 2022-02-18 2022-06-24 Image recognition method for automatically analyzing ovulation test paper detection result

Country Status (1)

Country Link
CN (1) CN115063375B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274353A1 (en) * 2010-05-07 2011-11-10 Hailong Yu Screen area detection method and screen area detection system
KR20120111153A (en) * 2011-03-31 2012-10-10 하이테콤시스템(주) Pre- processing method and apparatus for license plate recognition
CN104392212A (en) * 2014-11-14 2015-03-04 北京工业大学 Method for detecting road information and identifying forward vehicles based on vision
WO2016150134A1 (en) * 2015-03-21 2016-09-29 杨轶轩 Test paper reading method, and pregnancy test and ovulation test method therefor
CN107492094A (en) * 2017-07-21 2017-12-19 长安大学 A kind of unmanned plane visible detection method of high voltage line insulator
CN109740595A (en) * 2018-12-27 2019-05-10 武汉理工大学 A kind of oblique moving vehicles detection and tracking system and method based on machine vision
JP2019192022A (en) * 2018-04-26 2019-10-31 キヤノン株式会社 Image processing apparatus, image processing method, and program
US20190340446A1 (en) * 2016-08-01 2019-11-07 Peking University Shenzhen Graduate School Shadow removing method for color image and application
CN110599552A (en) * 2019-08-30 2019-12-20 杭州电子科技大学 pH test paper detection method based on computer vision
CN110852357A (en) * 2019-10-24 2020-02-28 开望(杭州)科技有限公司 Ovulation test paper category detection method
CN112288828A (en) * 2020-09-07 2021-01-29 广州盛成妈妈网络科技股份有限公司 Picture identification method for automatic ovulation test paper
WO2021027364A1 (en) * 2019-08-13 2021-02-18 平安科技(深圳)有限公司 Finger vein recognition-based identity authentication method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FU Changfei et al., "Moving Target Recognition Based on HSV Color Space", Control and Information Technology, no. 02, 21 January 2020 (2020-01-21), pages 70-74 *
QIU Dong et al., "Fast Lane Line Detection Method Based on Improved Probabilistic Hough Transform", Computer Technology and Development, vol. 30, no. 05, 18 December 2019 (2019-12-18), pages 43-48 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292381A (en) * 2023-11-24 2023-12-26 杭州速腾电路科技有限公司 Method for reading serial number of printed circuit board
CN117292381B (en) * 2023-11-24 2024-02-27 杭州速腾电路科技有限公司 Method for reading serial number of printed circuit board

Also Published As

Publication number Publication date
CN115063375B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
WO2018018788A1 (en) Image recognition-based meter reading apparatus and method thereof
CN102132323B (en) System and method for automatic image straightening
CN110120042B (en) Crop image pest and disease damage area extraction method based on SLIC super-pixel and automatic threshold segmentation
CN109035273B (en) Image signal fast segmentation method of immunochromatography test paper card
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN108181316B (en) Bamboo strip defect detection method based on machine vision
CN112149543B (en) Building dust recognition system and method based on computer vision
AU2020103260A4 (en) Rice blast grading system and method
CN110175556B (en) Remote sensing image cloud detection method based on Sobel operator
CN110648330B (en) Defect detection method for camera glass
CN115797352B (en) Tongue picture image processing system for traditional Chinese medicine health-care physique detection
CN108898132A (en) A kind of terahertz image dangerous material recognition methods based on Shape context description
CN112861654B (en) Machine vision-based famous tea picking point position information acquisition method
CN110866932A (en) Multi-channel tongue edge detection device and method and storage medium
CN116721391B (en) Method for detecting separation effect of raw oil based on computer vision
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN114596551A (en) Vehicle-mounted forward-looking image crack detection method
CN113962976A (en) Quality evaluation method for pathological slide digital image
CN111768455A (en) Image-based wood region and dominant color extraction method
CN115063375B (en) Image recognition method for automatically analyzing ovulation test paper detection result
CN115731493A (en) Rainfall micro physical characteristic parameter extraction and analysis method based on video image recognition
CN115588208A (en) Full-line table structure identification method based on digital image processing technology
CN116071337A (en) Endoscopic image quality evaluation method based on super-pixel segmentation
CN115049689A (en) Table tennis identification method based on contour detection technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant