CN109272001B - Structure training method and device of urine test recognition classifier and computer equipment - Google Patents

Structure training method and device of urine test recognition classifier and computer equipment

Info

Publication number
CN109272001B
CN109272001B CN201811141847.6A
Authority
CN
China
Prior art keywords
urine test
component
preset
training
test paper
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811141847.6A
Other languages
Chinese (zh)
Other versions
CN109272001A (en
Inventor
聂靖
叶亚金
常玉棋
蔡贤明
潘晓春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Feidian Health Management Co ltd
Original Assignee
Shenzhen Feidian Health Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Feidian Health Management Co ltd filed Critical Shenzhen Feidian Health Management Co ltd
Priority to CN201811141847.6A priority Critical patent/CN109272001B/en
Publication of CN109272001A publication Critical patent/CN109272001A/en
Application granted granted Critical
Publication of CN109272001B publication Critical patent/CN109272001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a construction training method, a device and computer equipment of a urine test recognition classifier, wherein the method comprises the following steps: obtaining an interested area of an image to be identified in each urine test paper; extracting a feature vector of the region of interest; storing the characteristic vector of each urine test paper and the classification corresponding to the urine test paper as a sample; constructing a urine test identification classifier according to a first preset number of samples; training the urinalysis recognition classifier through a second preset number of samples, adjusting attribute nodes in the urinalysis recognition classifier according to a training result, and repeatedly executing the training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold value. According to the invention, through a deep learning idea, a urine test paper classifier is established, and the classifier is trained to obtain a mapping relation between urine test paper and classification, so that the identification precision is improved.

Description

Structure training method and device of urine test recognition classifier and computer equipment
Technical Field
The invention relates to the technical field of deep learning, in particular to a construction training method and device of a urine test recognition classifier and computer equipment.
Background
With the rapid development of technologies such as the mobile internet, computer vision and image processing, the computing power of computers has grown steadily, and urinalysis performed with computer technology has attracted increasing attention: it is not only highly accurate, but also more automated, faster, more convenient and lower in cost.
In conventional routine urine testing, the person being tested goes to a hospital for urine detection, and a doctor obtains the result by comparison on a professional instrument. The operation, maintenance and calibration of such instruments are complex and costly, and they place high demands on light-source stability and on ambient temperature and humidity. Travelling to the hospital also makes the detection process tedious and time-consuming.
In the prior art, in the field of mobile urine testing, the colour blocks of the test strip are mainly located by placing the strip in a slot carrying a positioning mark and using the position mark fixed on the slot to locate the colour blocks.
Disclosure of Invention
In view of the foregoing problems, an object of the embodiments of the present invention is to provide a method, an apparatus and a computer device for training a structure of a urine test recognition classifier, so as to solve the deficiencies of the prior art.
According to an embodiment of the present invention, there is provided a structure training method for a urine test recognition classifier, including:
obtaining an interested area of an image to be identified in each urine test paper;
extracting a feature vector of the region of interest;
storing the characteristic vector of each urine test paper and the classification corresponding to the urine test paper as a sample;
constructing a urine test identification classifier according to a first preset number of samples;
training the urinalysis recognition classifier through a second preset number of samples, adjusting attribute nodes in the urinalysis recognition classifier according to a training result, and repeatedly executing the training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold value.
In the above construction training method of the urine test recognition classifier, the urine test recognition classifier is a random forest classifier.
In the above method for training a structure of a urine test recognition classifier, the "structure a urine test recognition classifier based on a first predetermined number of samples" includes:
and constructing a plurality of decision tree classifiers according to the first preset number of samples, and combining the trained decision trees into a random forest classifier.
In the above method for constructing and training a urine test recognition classifier, the construction process of each decision tree classifier includes:
calculating the information gain of each component in the feature vector according to a first preset number of samples;
selecting the component with the largest information gain as a root attribute node, and dividing the first preset number of samples into different subsets according to the test result of the root attribute node;
and calculating the information gain of all the components except the component corresponding to the root attribute node in the samples of each subset, taking the component with the maximum information gain as the child attribute node of the subset, and recursively dividing the subset and generating the child attribute nodes until all the samples in the divided subset point to the same classification.
In the structure training method of the urine test recognition classifier, the urine test paper comprises two positioning blocks and a predetermined number of reaction blocks, the two positioning blocks are respectively positioned at two ends of the urine test paper, and the predetermined number of reaction blocks are respectively equidistantly distributed between the two positioning blocks according to a preset distribution interval;
the step of obtaining the region of interest of the image to be identified in each urine test strip includes:
extracting the outlines of the positioning block and the reaction block in the image to be identified in each urine test paper;
identifying a contour of the locating block from the respective contours;
and taking the contour of the positioning block as a reference, and acquiring a preset area of each reaction block between the two positioning blocks as an interested area.
In the above method for training a structure of a urine test recognition classifier, the "recognizing the contour of the positioning block from the respective contours" includes:
identifying color values of each contour and an aspect ratio of the contour;
acquiring a preset color value and a preset length-width ratio of a positioning block according to a prestored characteristic description file of the test paper;
comparing each identified color value with the predetermined color value and comparing each identified aspect ratio with the predetermined aspect ratio;
identifying a contour of which the color value is the same as the predetermined color value and an aspect ratio of the contour is the same as the predetermined aspect ratio as the contour of the locating block.
In the above method for training the structure of the urine test recognition classifier, the step of "acquiring a predetermined region of each reaction block between two positioning blocks as a region of interest based on the contour of the positioning block" includes:
calculating the center points of two sides closest to each other between the outlines of the two positioning blocks, and calculating the distance between the two center points;
obtaining reaction block areas with preset quantity according to the preset distribution spacing and the sizes of the reaction blocks;
in each reaction block area, a preset area is extracted as an interested area by taking the mass point of the reaction block area as the center.
In the above method for training the structure of the urine test recognition classifier, the extracting the feature vector of the region of interest includes:
converting the color gamut space of the image of the region of interest into an HSV color gamut matrix and an LAB color gamut matrix;
and splitting channels of the HSV color gamut matrix and the LAB color gamut matrix to obtain an H component, an S component, a V component, an L component, an A component and a B component, and forming the H component, the S component, the V component, the L component, the A component and the B component into a feature vector.
According to another embodiment of the present invention, there is provided a structure training device for a urine test recognition classifier, including:
the acquisition module is used for acquiring an interested area of an image to be identified in each urine test paper;
the extraction module is used for extracting the characteristic vector of the region of interest;
the storage module is used for storing the characteristic vector of each urine test paper and the classification corresponding to the urine test paper into a sample;
a construction module for constructing a urine test recognition classifier based on a first predetermined number of samples;
and the training module is used for training the urinalysis recognition classifier through a second preset number of samples, adjusting attribute nodes in the urinalysis recognition classifier according to a training result, and repeatedly executing the training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold value.
According to yet another embodiment of the present invention, a computer device is provided, which includes a memory for storing a computer program and a processor for executing the computer program to make the computer device execute the above-mentioned construction training method of the urine test recognition classifier.
According to still another embodiment of the present invention, there is provided a computer-readable storage medium storing the computer program used in the computer apparatus described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the construction training method and device for the urine test recognition classifier and the computer equipment, machine learning is applied to the urine test field, the urine test recognition classifier is established through the deep learning idea, the urine test recognition classifier is trained to obtain the mapping relation between the characteristic vector of the urine test paper interesting region and classification, the recognition precision is improved, the matching recognition efficiency is improved, the time cost is effectively saved, and the time is strived for early problem finding of a patient.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a training method for a urine test recognition classifier structure according to an embodiment of the present invention.
Fig. 2 shows a schematic structural diagram of a urine test strip provided by an embodiment of the present invention.
Fig. 3 is a schematic flowchart of classifier construction according to an embodiment of the present invention.
Fig. 4 shows a flow chart of urine test identification by the urine test paper identification method according to the embodiment of the present invention.
Fig. 5 is a schematic flow chart illustrating urine test identification by another urine test paper identification method according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram illustrating a configuration training device of a urine test recognition classifier according to an embodiment of the present invention.
Description of the reference symbols:
600-a structure training device of a urine test recognition classifier; 610-an obtaining module; 620-an extraction module; 630-a storage module; 640-a construction module; 650-training module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is a flowchart illustrating a method for training a structure of a urinalysis recognition classifier according to a first embodiment of the present invention.
The construction training method of the urine test recognition classifier comprises the following steps:
in step S110, a region of interest of an image to be identified in each of the urine test strips in the lake region.
According to the examination items, urine test strips can be divided into 11-item strips, 14-item strips, early pregnancy test strips, ovulation test strips and the like.
Further, the urine test paper comprises two positioning blocks and a predetermined number of reaction blocks, the two positioning blocks are respectively located at two ends of the urine test paper, and the predetermined number of reaction blocks are respectively equidistantly distributed between the two positioning blocks according to a preset distribution interval.
Fig. 2 is a schematic structural diagram of a urine test strip according to an embodiment of the present invention, with the transverse direction defined as the width direction. The strip shown is a 14-item urine test strip: two positioning blocks A are arranged at its two ends, and 15 reaction blocks B are equidistantly distributed between the two positioning blocks at a preset spacing a, each reaction block being 2a wide and representing a different item to be tested.
The reaction blocks of the urine test strips for different urine test items differ in material composition, so different colour values appear when urine reacts with them. Even when strips for different items show the same colour value, that colour value represents different detection results, because the items to be tested and the material composition of the reaction blocks differ.
Before detection, i.e. before being immersed in urine, the 15 reaction blocks shown in fig. 2 may be colourless or show the same colour as the white backing plate, since they have not yet reacted with the components of the urine; after the strip has been sufficiently soaked in urine and the reaction blocks B have reacted with it, the 15 reaction blocks may show completely different colours, or partially identical/similar colours, depending on the examination items.
In order to facilitate the identification of the subsequent positioning blocks and the determination of the direction of the urine test paper, the two positioning blocks a may also be set to have different widths in the horizontal direction in fig. 2, and the two positioning blocks a may also be set to have different colors from the reaction block. The colors of the two positioning blocks A can be the same or different.
Further, the "acquiring a region of interest of an image to be identified" includes:
and extracting all contours in the image to be identified, wherein all contours comprise the contour of the positioning block and the contour of the reaction block.
In this embodiment, the image to be recognized may be first segmented, and all the contours in the segmented image are extracted, and it should be noted that all the extracted contours may include contours of the positioning block, contours of the reaction block, and contours of other noisy images.
In some other embodiments, all contours in the image to be recognized may also be found based on the findContours function in the OpenCV platform.
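A minimal sketch of this contour-extraction step using OpenCV's findContours is shown below; it is illustrative only, and the file name and the fixed binarisation threshold of 128 (a value the description mentions later) are assumptions.

```python
import cv2

# Illustrative sketch: extract all candidate contours from a urine test strip image.
image = cv2.imread("strip.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize so that findContours can operate on a single-channel binary image.
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# RETR_EXTERNAL keeps only outer contours; candidates include the positioning
# blocks, the reaction blocks and possibly noise regions.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate contours")
```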
And identifying the contour corresponding to the positioning block in all the contours.
Further, "identifying the contour corresponding to the positioning block among all the contours" specifically includes:
color values and aspect ratios of all contours are identified.
Specifically, in all the contours, the color value corresponding to each contour and the length and width of the contour are respectively identified, and the aspect ratio is calculated according to the length and width.
In this embodiment, the color value may be represented as a BGR value, and may be used for processing in an OpenCV platform. In some other embodiments, the color values may also be represented as RGB values.
And acquiring a preset color value and a preset length-width ratio of the positioning block according to the prestored characteristic description file of the test paper.
Specifically, each urine test strip has a feature description file, and the feature description file includes: the color of the urine test paper positioning block, the length-width ratio of each positioning block, the number of the reaction blocks, the distribution distance between every two adjacent reaction blocks, the width ratio of the reaction blocks to the distribution distance, the type (such as Glu, Cre and the like) of the reaction blocks and the like. For different types of urine test paper, the number of the reaction blocks is different, and the detection items of the reaction blocks are different.
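For illustration only, such a feature description file could be represented by a descriptor like the sketch below; every field name and value is a hypothetical assumption, not the patent's actual file layout.

```python
# Hypothetical feature description for a 14-item strip; all field names and
# values are illustrative assumptions, not the patent's actual file format.
STRIP_DESCRIPTOR = {
    "strip_type": "14-item",
    "locator_color_bgr": [30, 30, 200],    # predetermined colour value of the positioning blocks
    "locator_aspect_ratios": [2.0, 3.0],   # one per positioning block (their widths differ)
    "reaction_block_count": 15,
    "distribution_spacing": 1.0,           # preset spacing a between adjacent reaction blocks
    "block_width_over_spacing": 2.0,       # each reaction block is 2a wide
    "reaction_block_items": ["Glu", "Cre", "..."],
}
```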
Comparing all the identified color values with the preset color values respectively and comparing all the identified aspect ratios with the preset aspect ratios respectively;
and identifying the contour with the same color value and the same aspect ratio as the preset color value as the contour corresponding to the positioning block.
And comparing the color value of each contour with a preset color value, comparing the aspect ratio with a preset aspect ratio, and identifying a contour as a contour corresponding to the positioning block if the color value of the contour is the same as the preset color value and the aspect ratio is the same as the preset aspect ratio.
Because there may be noise pollution in the image, when no contour can be found whose colour value equals the predetermined colour value and/or whose aspect ratio equals the predetermined aspect ratio, a colour threshold and an aspect-ratio threshold may be set according to practical experience, and a contour S may be identified as the contour corresponding to the positioning block in the following three cases:
in the first case: the color values of the contour S and the predetermined color values are different but the difference between the color values of the contour S and the predetermined color values is less than or equal to the color threshold, and the aspect ratio of the contour S and the predetermined aspect ratio are different but the difference between the aspect ratio of the contour S and the predetermined aspect ratio is less than or equal to the aspect ratio threshold;
in the second case: the color value of the contour S is different from the predetermined color value but the difference between the color value of the contour S and the predetermined color value is less than or equal to the color threshold, and the aspect ratio of the contour S is the same as the predetermined aspect ratio;
in the third case: the color values of the contour S are the same as the predetermined color values, and the aspect ratio of the contour S is different from the predetermined aspect ratio but the difference between the aspect ratio of the contour S and the predetermined aspect ratio is less than or equal to the aspect ratio threshold.
As shown in fig. 2, since there are two positioning blocks and the aspect ratios are different, the color value of each contour can be compared with the predetermined color value of the first positioning block and the aspect ratio can be compared with the predetermined aspect ratio of the first positioning block, and then the contour corresponding to the first positioning block can be identified according to the comparison result; and comparing the color value of each contour with the preset color value of a second positioning block, comparing the aspect ratio with the preset aspect ratio of the second positioning block, and identifying the contour corresponding to the second positioning block according to the comparison result.
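A minimal sketch of this tolerance-based matching is given below. It assumes the contour's colour value is taken as the mean BGR colour inside the contour and its aspect ratio from the bounding rectangle; the reference values would come from the feature description file, and the threshold values are assumptions.

```python
import cv2
import numpy as np

def matches_locator(contour, image, ref_color_bgr, ref_aspect,
                    color_thresh=20.0, aspect_thresh=0.2):
    """Sketch of the comparison described above; thresholds are assumed values."""
    x, y, w, h = cv2.boundingRect(contour)
    aspect = w / float(h)

    # Mean BGR colour inside the contour, used as the contour's colour value.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    mean_bgr = np.array(cv2.mean(image, mask=mask)[:3])

    color_diff = np.linalg.norm(mean_bgr - np.array(ref_color_bgr))
    aspect_diff = abs(aspect - ref_aspect)

    # Accept exact matches, or near matches within the thresholds
    # (covering the three tolerance cases described in the text).
    return color_diff <= color_thresh and aspect_diff <= aspect_thresh
```

Running this test once against the first positioning block's reference values and once against the second block's values reproduces the two comparisons described above.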
And taking the corresponding contour of the positioning block as a reference, and acquiring preset areas of all reaction blocks between the two positioning blocks as an interested area.
Further, the step of acquiring the predetermined regions of all the reaction blocks between the two positioning blocks as the regions of interest based on the corresponding contour of the positioning blocks includes:
calculating the center points of two sides closest to each other between the outlines corresponding to the two positioning blocks, and calculating the distance between the two center points; obtaining reaction block areas with preset quantity according to the preset distribution spacing and the sizes of the reaction blocks; in each reaction block area, a preset area is extracted as an interested area by taking the mass point of the reaction block area as the center.
After the contours corresponding to the two positioning blocks are identified, two sides closest to each other between the contours corresponding to the two positioning blocks are identified, as shown in fig. 2, the side close to the reaction block B of the positioning block on the left side of the urine test strip and the side close to the reaction block B of the positioning block on the right side of the urine test strip are identified, and the distance between the center points of the two sides closest to each other is 46 a.
After two sides with the two positioning blocks closest to each other are identified, coordinates of center points of the two sides relative to the image to be identified are calculated respectively, and the distance between the two center points is calculated according to the coordinates of the center points of the two sides. The coordinates may be pixel coordinates or euclidean distance coordinates.
As shown in fig. 2, with the preset distribution spacing a as the step length, a reaction block of width 2a is obtained after moving one step length a, and the regions of all the reaction blocks are extracted in this way. In each reaction block region, the mass point (which may also be the centre point) of the reaction block is calculated, and a preset area centred on this mass point is extracted as the region of interest.
the preset area can be a circular area, and can also be a uniform graphic area such as a square area, a triangular area and the like.
As shown in fig. 2, in each reaction block region, a circular region with a radius of 10 pixels is extracted as a region of interest, with a particle as a center.
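The following sketch walks the strip from the left positioning block to the right one and cuts out a circular region of interest per reaction block. It assumes the bounding boxes of the two positioning blocks are already known; for 15 blocks of width 2a separated by gaps of a, the span between the nearest sides is 46a, which is how the step length is recovered. Function and parameter names are assumptions.

```python
import cv2
import numpy as np

def extract_rois(image, left_rect, right_rect, n_blocks=15, roi_radius=10):
    """Sketch only: left_rect / right_rect are (x, y, w, h) boxes of the two
    positioning blocks."""
    lx, ly, lw, lh = left_rect
    rx, ry, rw, rh = right_rect
    # Centre points of the two nearest sides (right edge of the left block,
    # left edge of the right block).
    p_left = np.array([lx + lw, ly + lh / 2.0])
    p_right = np.array([rx, ry + rh / 2.0])

    span = np.linalg.norm(p_right - p_left)     # distance between the two centre points
    step = span / (3 * n_blocks + 1)            # spacing a: span = (n+1)*a + n*2a = (3n+1)*a
    direction = (p_right - p_left) / span

    rois = []
    pos = p_left + direction * step             # start of the first reaction block
    for _ in range(n_blocks):
        centre = pos + direction * step         # mass point of the 2a-wide block
        centre_px = (int(centre[0]), int(centre[1]))
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.circle(mask, centre_px, roi_radius, 255, -1)   # circular ROI, radius 10 px
        rois.append(cv2.bitwise_and(image, image, mask=mask))
        pos = pos + direction * (3 * step)      # jump over the block (2a) plus the next gap (a)
    return rois
```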
In step S120, a feature vector of the region of interest is extracted.
Further, the "extracting the feature vector of the region of interest" includes:
converting the color gamut space of the image of the region of interest into an HSV color gamut matrix and an LAB color gamut matrix; and splitting channels of the HSV color gamut matrix and the LAB color gamut matrix to obtain an H component, an S component, a V component, an L component, an A component and a B component, and forming a feature vector by the H component, the S component, the V component, the L component, the A component and the B component.
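A minimal sketch of this conversion and channel split follows. Taking the per-channel mean over the region of interest is an assumption made here to obtain a fixed-length vector; the patent only states that the H, S, V, L, A and B components form the feature vector.

```python
import cv2
import numpy as np

def roi_feature_vector(roi_bgr, mask=None):
    """Sketch: a 6-dimensional feature vector from a region-of-interest image."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2LAB)

    h, s, v = cv2.split(hsv)
    l, a, b = cv2.split(lab)

    # Per-channel mean inside the ROI (assumption); mask restricts to the circular ROI.
    return np.array([cv2.mean(c, mask=mask)[0] for c in (h, s, v, l, a, b)],
                    dtype=np.float32)
```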
In step S130, the feature vector of each urine test strip and the corresponding classification of the urine test strip are stored as a sample.
In step S140, a urinalysis recognition classifier is constructed from a first predetermined number of samples.
In this embodiment, the urine test recognition classifier may be a random forest classifier. A random forest classifier has the advantages of high training efficiency, easy feature selection, robustness to the data, high prediction accuracy and strong generalisation capability. In some other embodiments, the urinalysis recognition classifier may also be a support vector machine, an Adaboost classifier, or the like.
Further, the "constructing a urinalysis recognition classifier based on a first predetermined number of samples" includes:
and constructing a plurality of decision tree classifiers according to the first preset number of samples, and combining the trained decision trees into a random forest classifier.
Further, as shown in fig. 3, constructing the decision tree classifier includes:
in step S210, an information gain of each component in the feature vector is calculated based on a first predetermined number of samples.
Each sample comprises a feature vector corresponding to the region of interest of the urine test paper and a classification corresponding to the urine test paper, and the classification can be represented by a corresponding classification label, such as class 1, class 2, and the like.
The information gain can be calculated in the following way:
Gain(A) = I(s_1, s_2, \ldots, s_m) - E(A)

I(s_1, s_2, \ldots, s_m) = -\sum_{i=1}^{m} p_i \log_2 p_i

E(A) = \sum_{j=1}^{v} \frac{s_{1j} + s_{2j} + \cdots + s_{mj}}{s} \, I(s_{1j}, s_{2j}, \ldots, s_{mj})

where s is the first predetermined number of samples; there are m classes C_i; p_i is the probability that an arbitrary sample belongs to class C_i, with p_i = s_i / s, i ∈ {1, 2, …, m}; a component A of the feature vector takes v different values, by which the first predetermined number of samples is divided into v subsets; s_{ij} is the set of samples in subset s_j that belong to class C_i, and p_{ij} is the probability that an arbitrary sample in subset s_j belongs to class C_i.
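A minimal sketch of this gain computation, assuming the feature-vector components have been discretised into a finite set of values, might look as follows (function names are assumptions):

```python
import numpy as np

def entropy(labels):
    """I(s1, ..., sm): entropy of the class labels in a sample set."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(component_values, labels):
    """Gain(A) = I(s1, ..., sm) - E(A) for one feature-vector component A.

    Sketch only: assumes the component takes v discrete values; continuous
    colour components would first have to be discretised."""
    total = len(labels)
    expected = 0.0
    for value in np.unique(component_values):
        subset = labels[component_values == value]
        expected += len(subset) / total * entropy(subset)   # E(A)
    return entropy(labels) - expected
```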
In step S220, the component with the largest information gain is selected as the root attribute node, and the first predetermined number of samples are divided into different subsets according to the test result of the root attribute node.
And comparing the information gains of all the components, selecting the component corresponding to the maximum information gain as a root attribute node, and under the root attribute node, dividing the first preset number of samples into different subsets according to the test result of the root attribute node, wherein each subset contains the sample of the test result corresponding to the subset.
In step S230, in the samples of each subset, the information gains of all the components remaining except the component corresponding to the root attribute node are calculated, and the component with the largest information gain is taken as the child attribute node of the subset.
In the samples of one subset, the information gains of all the components except the component corresponding to the attribute node are calculated, and the component with the largest information gain is selected as the sub-attribute node of the subset. The child property nodes are constructed recursively in the manner described above in the samples of each of the remaining subsets.
In step S240, it is determined whether all samples in the divided subset point to the same class.
In all the subsets divided above, it is determined whether all the samples corresponding to the subset point to a class, if all the samples corresponding to the subset point to a class, the process proceeds to step S250, and if at least one of the samples corresponding to the subset does not point to a class, the process returns to step S230, and continues to construct sub-attribute nodes according to the samples of the at least one subset.
In step S250, a decision tree is formed by all the attribute nodes and the test results corresponding to the attribute nodes.
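The recursive construction of steps S210 to S250 can be sketched as below. This is an ID3-style illustration under the same discretisation assumption as the gain sketch above; the dictionary-based node representation and all names are assumptions, not the patented implementation.

```python
import numpy as np

def _entropy(y):
    _, c = np.unique(y, return_counts=True)
    p = c / c.sum()
    return float(-(p * np.log2(p)).sum())

def build_tree(X, y, used=frozenset()):
    """Recursive sketch of the decision-tree construction described above.
    X: (n_samples, n_components) array of discretised component values; y: class labels."""
    # Stop when every sample in the subset points to the same classification.
    if len(np.unique(y)) == 1:
        return {"class": y[0]}

    candidates = [j for j in range(X.shape[1]) if j not in used]
    if not candidates:                          # no components left: fall back to majority class
        vals, cnts = np.unique(y, return_counts=True)
        return {"class": vals[np.argmax(cnts)]}

    # Choose the remaining component with the largest information gain.
    def gain(j):
        e = 0.0
        for v in np.unique(X[:, j]):
            sub = y[X[:, j] == v]
            e += len(sub) / len(y) * _entropy(sub)
        return _entropy(y) - e

    best = max(candidates, key=gain)
    node = {"attribute": best, "children": {}}

    # Divide the samples by the test result of the chosen attribute node and recurse.
    for v in np.unique(X[:, best]):
        idx = X[:, best] == v
        node["children"][v] = build_tree(X[idx], y[idx], used | {best})
    return node
```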
In step S150, the urinalysis recognition classifier is trained by a second predetermined number of samples, and the attribute nodes of the urinalysis recognition classifier are adjusted according to the training result, and the training process is repeatedly performed until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold.
Wherein the second predetermined number of samples is a different sample than the first predetermined number of samples.
After the plurality of decision tree classifiers are constructed, the plurality of decision tree classifiers are respectively trained through a second preset number of pre-stored samples, and attribute nodes in the decision tree classifiers are continuously adjusted according to the classification accuracy of training results.
And repeating the training process, judging whether the classification accuracy of the training result reaches a preset threshold value after each training is finished, stopping the training if the classification accuracy of the training result reaches the preset threshold value, and finishing the training of the multiple decision tree classifiers to perform subsequent classification operation.
Further, whether the classification accuracy rates of the training results of the preset times all reach a preset threshold value can be judged, and if the classification accuracy rates of the training results of the preset times all reach the preset threshold value, the training process is ended; and if the classification accuracy of at least one training result in the training results of the preset times does not reach a preset threshold value, continuing to train the decision tree classifier.
Further, whether the classification accuracy rates of the training results of the plurality of decision tree classifiers all reach a preset threshold value can be judged, and if the classification accuracy rates of the training results of the plurality of decision tree classifiers all reach the preset threshold value, the training process is ended; and if the classification accuracy of at least one training result in the training results of the decision tree classifiers does not reach the preset threshold value, continuing to train the decision tree classifiers.
And after the training of the decision tree classifiers is finished, combining the trained decision trees into a random forest classifier.
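As a rough stand-in for this train-evaluate-adjust loop, the sketch below uses scikit-learn's RandomForestClassifier instead of the hand-built trees described above; the first sample set builds the forest, the second checks the accuracy threshold. The tree count, the 0.95 threshold and the adjustment step are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.95   # assumed preset threshold

def train_until_threshold(X_build, y_build, X_train, y_train, max_rounds=20):
    n_trees = 10
    clf = None
    for _ in range(max_rounds):
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        clf.fit(X_build, y_build)                             # first predetermined sample set
        acc = accuracy_score(y_train, clf.predict(X_train))   # second predetermined sample set
        if acc >= ACCURACY_THRESHOLD:
            break
        n_trees += 10   # crude stand-in for adjusting the attribute nodes between rounds
    return clf
```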
Fig. 4 is a schematic flow chart of a method for identifying a urine test strip according to an embodiment of the present invention. The urine test identification method carries out the classification of the urine test paper by the urine test identification classifier which is constructed and trained.
The method comprises the following steps:
in step S310, a region of interest of an image to be identified in the urine test strip is acquired.
In step S320, a feature vector of the region of interest is extracted.
In step S330, the feature vector is sent to a pre-trained urinalysis recognition classifier to obtain a recognition classification value corresponding to the feature vector.
After the feature vector of the region of interest is obtained, it is fed as input into the pre-trained urine test recognition classifier, and classification yields a floating-point recognition classification value.
In step S340, the feature gear corresponding to the identification classification value is determined according to the pre-stored correspondence between the classification value and the feature gear.
In this embodiment, the correspondence relationship may be described by a table.
Classification value | Characteristic gear
M1                   | 1
M2                   | 2
……                   | ……
If the obtained identification classification value after classification is M1, the characteristic gear corresponding to the classification value M1 is 1 gear; if the classified identification classification value is M2, the characteristic gear corresponding to the classification value M2 is 2-gear, and so on.
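A tiny lookup sketch of this correspondence is shown below; M1 and M2 stand for whatever concrete values the classifier outputs, and mapping the floating-point output onto one of the stored classification values is an assumption of this example.

```python
# Illustrative lookup only: classification values and gears follow the table above.
CLASS_VALUE_TO_GEAR = {"M1": 1, "M2": 2}

def gear_for(classification_value):
    """Return the characteristic gear for a recognised classification value."""
    return CLASS_VALUE_TO_GEAR.get(classification_value)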
Fig. 5 is a schematic flow chart illustrating another urine test strip identification method according to an embodiment of the present invention. The urine test identification method carries out the classification of the urine test paper by the urine test identification classifier which is constructed and trained.
The method comprises the following steps:
in step S410, the image to be recognized is preprocessed.
Specifically, median filtering is performed on the image to be identified, with a filter aperture preferably of size (3, 3); this effectively removes impulse noise while preserving sharp edges in the image.
After the median filtering, mean filtering is applied to the median-filtered image, again with a filter aperture preferably of size (3, 3), to further smooth the image and filter out noise.
And carrying out white balance processing on the image after the average value filtering. As shown in fig. 2, the urine test paper further includes a white balance block C, and the white balance block is used as a calibration standard and a gray scale world algorithm is used to perform white balance processing, so as to effectively eliminate the influence of the illumination environment on the color appearance. If no white balance block is arranged in the urine test paper, a blank area beside the positioning block A can be used as the white balance block.
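A minimal sketch of this preprocessing chain follows. The white-balance step computes a grey-world-style per-channel gain from the white balance block, whose bounding box is assumed to be known; all names and the clipping to 8-bit range are assumptions.

```python
import cv2
import numpy as np

def preprocess(image_bgr, white_patch_rect):
    """Sketch: median filter, mean filter, then white balance using block C."""
    smoothed = cv2.medianBlur(image_bgr, 3)     # median filter, aperture (3, 3)
    smoothed = cv2.blur(smoothed, (3, 3))       # mean filter, aperture (3, 3)

    # Per-channel gains so the white balance block's channels become equal.
    x, y, w, h = white_patch_rect
    patch = smoothed[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)
    channel_means = patch.mean(axis=0)
    gains = channel_means.mean() / channel_means

    balanced = np.clip(smoothed.astype(np.float32) * gains, 0, 255).astype(np.uint8)
    return balanced
```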
In step S420, the outlines of the positioning block and the reaction block in the pre-processed image to be recognized are extracted.
Further, all contours can be extracted by:
and performing binarization processing on the image to be recognized, wherein the binarization threshold value can be 128.
The binary image is then processed morphologically: erosion and dilation operations are performed, which remove noise, separate independent image elements and connect adjacent image elements.
Based on the image after the erosion and dilation operations, all contours are found. Wherein, all the profiles comprise a positioning block profile, a reaction block profile and the like.
Further, the preprocessed image to be recognised may be converted into the HSV colour gamut space, the channels split, and the S-channel component extracted, which represents saturation. A colour can be seen as the result of mixing a certain spectral colour with white: the greater the proportion of the spectral colour, the closer the colour is to that spectral colour and the higher its saturation. The higher the saturation, the deeper and more vivid the colour; a pure spectral colour, whose white-light component is 0, has the highest saturation.
All contours are extracted based on the extracted S-channel components.
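The two contour-extraction routes described above (binarisation with morphology, or the S channel of the HSV image) can be sketched as follows; the kernel size and iteration counts are assumptions.

```python
import cv2

def all_contours(preprocessed_bgr):
    """Sketch of the binarisation/morphology route: threshold 128, erode, dilate."""
    gray = cv2.cvtColor(preprocessed_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.erode(binary, kernel, iterations=1)    # remove isolated noise
    binary = cv2.dilate(binary, kernel, iterations=1)   # reconnect nearby elements

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

def s_channel(preprocessed_bgr):
    """Alternative route mentioned above: work on the saturation (S) channel."""
    hsv = cv2.cvtColor(preprocessed_bgr, cv2.COLOR_BGR2HSV)
    return cv2.split(hsv)[1]
```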
In step S430, the contour of the positioning block is identified among the respective contours.
Further, the "identifying the contour of the positioning block among the respective contours" includes:
identifying color values of all the contours and aspect ratios of the contours;
acquiring a preset color value and a preset length-width ratio of a positioning block according to a prestored characteristic description file of the test paper;
comparing all the identified color values with the preset color values respectively and comparing all the identified aspect ratios with the preset aspect ratios respectively;
and identifying the contour with the color value being the same as the preset color value and the aspect ratio of the contour being the same as the preset aspect ratio as the contour corresponding to the positioning block.
In step S440, a predetermined region of the reaction block between the two positioning blocks is acquired as a region of interest based on the contour corresponding to the positioning block.
Further, the step of acquiring a predetermined region of the reaction block between the two positioning blocks as a region of interest based on the corresponding contour of the positioning block includes:
calculating the center points of two sides closest to each other between the outlines corresponding to the two positioning blocks, and calculating the distance between the two center points;
obtaining reaction block areas with preset quantity according to the preset distribution spacing and the sizes of the reaction blocks;
in each reaction block area, a preset area is extracted as an interested area by taking the mass point of the reaction block area as the center.
Further, whether the urine test strip is in the forward orientation can be judged from the positions of the two positioning blocks, the orientation being defined by a preset rule (for example, the larger positioning block on the right means forward). If the strip is reversed, it needs to be flipped; if it is already forward, subsequent processing proceeds directly.
Whether the urine test paper has a rotation angle can be judged according to the coordinates of the mass centers (central points) of the two positioning blocks, and if the urine test paper has the rotation angle, the urine test paper can be rotated according to the rotation angle; if the urine test paper does not rotate, the subsequent treatment is continued.
Specifically, the rotation angle of the urine test paper can be calculated by utilizing a trigonometric function according to the two positioned centroid coordinates.
When the flip operation or the rotation operation is performed, the rotation or the flip can be performed through a flip () function in the OpenCV platform.
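A minimal sketch of this orientation correction is given below, assuming the centroids of the small and large positioning blocks are known. It follows the "large block on the right means forward" example above; using cv2.flip for the flip and a rotation matrix with warpAffine for the rotation is this sketch's choice, and the sign convention may need adjusting for a given coordinate setup.

```python
import cv2
import math

def correct_orientation(image, small_locator_centroid, large_locator_centroid):
    """Sketch: flip the strip if reversed, then remove any rotation angle."""
    h, w = image.shape[:2]
    (sx, sy), (lx, ly) = small_locator_centroid, large_locator_centroid

    if lx < sx:                                  # large block on the left: strip is reversed
        image = cv2.flip(image, 1)               # horizontal flip
        sx, lx = w - 1 - sx, w - 1 - lx          # mirror the centroid x coordinates

    # Rotation angle from the two centroid coordinates via a trigonometric function.
    angle = math.degrees(math.atan2(ly - sy, lx - sx))
    if abs(angle) > 1e-3:
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        image = cv2.warpAffine(image, m, (w, h))
    return image
```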
In step S450, the gamut space of the image of the region of interest is converted into an HSV gamut matrix and an LAB gamut matrix.
In step S460, the HSV color gamut matrix and the LAB color gamut matrix are channel-split to obtain an H component, an S component, a V component, an L component, an a component, and a B component, and the H component, the S component, the V component, the L component, the a component, and the B component are combined into a feature vector.
In step S470, the feature vector is sent to a pre-trained urinalysis recognition classifier to obtain a recognition classification value corresponding to the feature vector.
Specifically, when the classification is performed by the random forest classifier, a plurality of decision tree classifiers in the random forest classifier classify the input feature vector, each decision tree classifier votes for a classification result, and the classification result with the largest number of votes is used as the output of the random forest classifier.
For example, if there are 3 decision trees in the random forest, and the classification result of 2 subtrees is class A while the classification result of 1 subtree is class B, then the classification result of the random forest is class A.
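A tiny sketch of this vote is shown below; tree_predict is a hypothetical helper that evaluates one trained decision tree on the input feature vector.

```python
from collections import Counter

def forest_predict(trees, feature_vector):
    """Majority vote over the trees, as described above."""
    votes = [tree_predict(tree, feature_vector) for tree in trees]  # tree_predict: assumed helper
    return Counter(votes).most_common(1)[0][0]
```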
In step S480, the feature gear corresponding to the identification classification value is determined according to the correspondence between the classification value and the feature gear stored in advance.
Example 2
Fig. 6 is a schematic structural diagram illustrating a configuration training device of a urine test recognition classifier according to an embodiment of the present invention.
The construction training device 600 of the urine test recognition classifier comprises an acquisition module 610, an extraction module 620, a storage module 630, a construction module 640 and a training module 650.
The obtaining module 610 is configured to obtain an area of interest of an image to be identified in each urine test strip.
And an extracting module 620, configured to extract the feature vector of the region of interest.
The storage module 630 is configured to store the feature vector of each urine test strip and the classification corresponding to the urine test strip as a sample.
A construction module 640 for constructing a urinalysis recognition classifier based on the first predetermined number of samples.
The training module 650 is configured to train the urinalysis recognition classifier through a second predetermined number of samples, adjust an attribute node in the urinalysis recognition classifier according to a training result, and repeatedly perform a training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold.
The embodiment of the invention also provides computer equipment which can comprise a smart phone, a tablet computer and the like. The computer device comprises a memory and a processor, wherein the memory can be used for storing a computer program, and the processor can make the computer device execute the functions of each module in the construction training method of the urine test recognition classifier or the construction training device of the urine test recognition classifier by operating the computer program.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The embodiment of the invention also provides a computer storage medium for storing the computer program used in the computer equipment.
The invention provides a construction training method, a device and computer equipment of a urine test recognition classifier, wherein an image to be recognized of a urine test paper is subjected to image processing, and an interested region and a feature vector of the interested region are extracted; applying machine learning to the field of urine examination, establishing a urine examination recognition classifier through the thought of deep learning, and training the constructed urine examination recognition classifier through a predetermined number of urine examination samples to obtain the urine examination recognition classifier with high classification accuracy; the characteristic vectors of the region of interest are sent to a urine test identification classifier to obtain classification values, the characteristic gears corresponding to the urine test paper are obtained according to the mapping relation between the classification values and the characteristic gears, the identification result of the urine test paper can be obtained according to the characteristic gears, the identification precision is improved, the matching identification efficiency is improved, the time cost is effectively saved, and time is won for patients to find problems early.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (8)

1. A structure training method of a urine test recognition classifier is characterized by comprising the following steps:
obtaining an interested area of an image to be identified in each urine test paper;
extracting a feature vector of the region of interest;
storing the characteristic vector of each urine test paper and the classification corresponding to the urine test paper as a sample;
constructing a urine test identification classifier according to a first preset number of samples;
training the urinalysis recognition classifier through a second preset number of samples, adjusting attribute nodes in the urinalysis recognition classifier according to a training result, and repeatedly executing the training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold;
the urine test paper comprises two positioning blocks and a predetermined number of reaction blocks, the two positioning blocks are respectively positioned at two ends of the urine test paper, and the predetermined number of reaction blocks are respectively distributed between the two positioning blocks at equal intervals according to a preset distribution interval;
the step of obtaining the region of interest of the image to be identified in each urine test strip includes:
extracting the outlines of the positioning block and the reaction block in the image to be identified in each urine test paper;
identifying a contour of the locating block from the respective contours;
and taking the contour of the positioning block as a reference, acquiring a preset area of each reaction block between the two positioning blocks as an interested area, and comprising the following steps:
calculating the center points of two sides closest to each other between the outlines of the two positioning blocks, and calculating the distance between the two center points;
obtaining reaction block areas with preset quantity according to the preset distribution spacing and the sizes of the reaction blocks;
in each reaction block area, a preset area is extracted as an interested area by taking the mass point of the reaction block area as the center.
2. The structural training method of the urinalysis recognition classifier as claimed in claim 1, wherein the urinalysis recognition classifier is a random forest classifier.
3. The method for constructing a urine test recognition classifier according to claim 2, wherein the "constructing a urine test recognition classifier based on a first predetermined number of samples" includes:
and constructing a plurality of decision tree classifiers according to the first preset number of samples, and combining the trained decision trees into a random forest classifier.
4. The method for constructing and training a urinalysis recognition classifier according to claim 3, wherein the construction process of each decision tree classifier comprises:
calculating the information gain of each component in the feature vector according to a first preset number of samples;
selecting the component with the largest information gain as a root attribute node, and dividing the first preset number of samples into different subsets according to the test result of the root attribute node;
and calculating the information gain of all the components except the component corresponding to the root attribute node in the samples of each subset, taking the component with the maximum information gain as the child attribute node of the subset, and recursively dividing the subset and generating the child attribute nodes until all the samples in the divided subset point to the same classification.
5. The structural training method of the urinalysis recognition classifier as claimed in claim 1, wherein the extracting the feature vector of the region of interest comprises:
converting the color gamut space of the image of the region of interest into an HSV color gamut matrix and an LAB color gamut matrix;
and splitting channels of the HSV color gamut matrix and the LAB color gamut matrix to obtain an H component, an S component, a V component, an L component, an A component and a B component, and forming the H component, the S component, the V component, the L component, the A component and the B component into a feature vector.
6. A structure training device of a urine test recognition classifier is characterized by comprising:
the system comprises an acquisition module, a detection module and a display module, wherein the acquisition module is used for acquiring an interested area of an image to be identified in each urine test paper, the urine test paper comprises two positioning blocks and a preset number of reaction blocks, the two positioning blocks are respectively positioned at two ends of the urine test paper, and the preset number of reaction blocks are respectively equidistantly distributed between the two positioning blocks according to a preset distribution interval;
an extraction module, configured to extract a feature vector of the region of interest, including: extracting the outlines of the positioning blocks and the reaction blocks in the image to be identified in each urine test paper, identifying the outlines of the positioning blocks from the outlines, and acquiring a preset area of each reaction block between the two positioning blocks as an interested area by taking the outlines of the positioning blocks as a reference, wherein the method comprises the steps of calculating the central point of the side edge closest to the two outlines of the two positioning blocks, calculating the distance between the two central points, and obtaining a preset number of reaction block areas according to the preset distribution distance and the sizes of the reaction blocks; in each reaction block area, taking mass points of the reaction block area as a center, and extracting a preset area as an interested area;
the storage module is used for storing the characteristic vector of each urine test paper and the classification corresponding to the urine test paper into a sample;
a construction module for constructing a urine test recognition classifier based on a first predetermined number of samples;
and the training module is used for training the urinalysis recognition classifier through a second preset number of samples, adjusting attribute nodes in the urinalysis recognition classifier according to a training result, and repeatedly executing the training process until the classification accuracy of the urinalysis recognition classifier reaches a preset threshold value.
7. A computer device, characterized in that the computer device comprises a memory for storing a computer program and a processor for executing the computer program to make the computer device execute the construction training method of the urinalysis recognition classifier according to any one of claims 1 to 5.
8. A computer storage medium storing the computer program for use in the computer device of claim 7.
CN201811141847.6A 2018-09-28 2018-09-28 Structure training method and device of urine test recognition classifier and computer equipment Active CN109272001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811141847.6A CN109272001B (en) 2018-09-28 2018-09-28 Structure training method and device of urine test recognition classifier and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811141847.6A CN109272001B (en) 2018-09-28 2018-09-28 Structure training method and device of urine test recognition classifier and computer equipment

Publications (2)

Publication Number Publication Date
CN109272001A CN109272001A (en) 2019-01-25
CN109272001B (en) 2021-09-03

Family

ID=65198774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811141847.6A Active CN109272001B (en) 2018-09-28 2018-09-28 Structure training method and device of urine test recognition classifier and computer equipment

Country Status (1)

Country Link
CN (1) CN109272001B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919054B (en) * 2019-02-25 2023-04-07 电子科技大学 Machine vision-based reagent card automatic classification detection method
CN110456050B (en) * 2019-07-11 2022-07-19 台州云海医疗科技有限公司 Portable intelligent digital parasite in vitro diagnostic instrument
CN110455789A (en) * 2019-07-18 2019-11-15 深圳市象形字科技股份有限公司 It is a kind of to carry out amount of drinking water monitoring device and method using uroscopy instrument

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996287A (en) * 2014-05-26 2014-08-20 江苏大学 Vehicle forced lane changing decision-making method based on decision-making tree model
CN107206384A (en) * 2015-02-17 2017-09-26 西门子医疗保健诊断公司 For the bar coded sticker detection in the side view sample cell image of laboratory automation
CN105160317A (en) * 2015-08-31 2015-12-16 电子科技大学 Pedestrian gender identification method based on regional blocks
CN105388147A (en) * 2015-10-21 2016-03-09 深圳市宝凯仑生物科技有限公司 Detection method for body fluid based on special test paper
CN107194138A (en) * 2016-01-31 2017-09-22 青岛睿帮信息技术有限公司 A kind of fasting blood-glucose Forecasting Methodology based on physical examination data modeling
CN106202886A (en) * 2016-06-29 2016-12-07 中国铁路总公司 Track circuit red band fault locating method based on fuzzy rough sets and decision trees
CN106548213A (en) * 2016-11-30 2017-03-29 上海联影医疗科技有限公司 Blood vessel recognition method and device
CN106651199A (en) * 2016-12-29 2017-05-10 冶金自动化研究设计院 Steam pipe network scheduling rule system based on decision-making tree method
CN108280440A (en) * 2018-02-09 2018-07-13 三亚中科遥感研究所 A kind of fruit-bearing forest recognition methods and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Medical Test Paper Detection Technology Based on OpenCV; Nong Yuanfeng; China Excellent Master's Theses Full-text Database, Information Science and Technology Series; 2017-07-15 (No. 07); pp. I138-905 *

Also Published As

Publication number Publication date
CN109272001A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN107609549B (en) Text detection method for certificate image in natural scene
Mohammad et al. Optical character recognition implementation using pattern matching
CN109272001B (en) Structure training method and device of urine test recognition classifier and computer equipment
EP3101594A1 (en) Saliency information acquisition device and saliency information acquisition method
Tan et al. Novel initialization scheme for Fuzzy C-Means algorithm on color image segmentation
CN109241970B (en) Urine test method, mobile terminal and computer readable storage medium
CN109871829B (en) Detection model training method and device based on deep learning
CN110189383B (en) Traditional Chinese medicine tongue color and fur color quantitative analysis method based on machine learning
CN108229232B (en) Method and device for scanning two-dimensional codes in batch
CN109977899B (en) Training, reasoning and new variety adding method and system for article identification
CN111161281A (en) Face region identification method and device and storage medium
CN108154132A (en) A kind of identity card text extraction method, system and equipment and storage medium
CN110443235B (en) Intelligent paper test paper total score identification method and system
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN110008912B (en) Social platform matching method and system based on plant identification
Huang et al. Text detection and recognition in natural scene images
CN107133964B (en) Image matting method based on Kinect
CN112991536A (en) Automatic extraction and vectorization method for geographic surface elements of thematic map
Kartika et al. Butterfly image classification using color quantization method on hsv color space and local binary pattern
Kesiman et al. An analysis of ground truth binarized image variability of palm leaf manuscripts
CN114882204A (en) Automatic ship name recognition method
Hollaus et al. MultiSpectral image binarization using GMMs
Korpela et al. The performance of a local maxima method for detecting individual tree tops in aerial photographs
Hanif et al. Blind bleed-through removal in color ancient manuscripts
Bala et al. Image simulation for automatic license plate recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000, Building B, Building 8, Shenzhen International Innovation Valley, Dashi Road, Xili Community, Xili Street, Nanshan District, Shenzhen City, Guangdong Province, China 307

Patentee after: SHENZHEN FEIDIAN HEALTH MANAGEMENT CO.,LTD.

Country or region after: China

Address before: 518000 b2-302, Kexing Science Park, No. 15, Keyuan Road, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee before: SHENZHEN FEIDIAN HEALTH MANAGEMENT CO.,LTD.

Country or region before: China
