CN113989588A - Self-learning-based intelligent evaluation system and method for pentagonal drawing test


Info

Publication number
CN113989588A
Authority
CN
China
Prior art keywords
image
pentagonal
digital
pdt
digitized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111247830.0A
Other languages
Chinese (zh)
Inventor
李一可 (Li Yike)
郭家杰 (Guo Jiajie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Tongrui Medical Technology Co., Ltd.
Original Assignee
Foshan Tongrui Medical Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Tongrui Medical Technology Co., Ltd.
Priority to CN202111247830.0A
Publication of CN113989588A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Abstract

The invention discloses a self-learning-based intelligent evaluation method for the pentagon drawing test, comprising the following steps: a digital image acquisition module acquires a specified number of digitized pentagon drawings; a data preprocessing module assembles the acquired digitized images into an image training set, adjusts the digitized images in the training set, and marks the quadrilaterals and pentagons in them; an image analysis module trains a deep learning model on the image training set; a preprocessed PDT image is input into the deep learning model, which analyzes it to obtain the geometric figure types and coordinate information in the PDT image; and an evaluation and display module judges whether the geometric figures in the PDT image satisfy the preset PDT figure conditions according to the analysis result of the deep learning model and generates evaluation result information. The invention can score digitized images rapidly and accurately, and the processing result is traceable and easy to understand.

Description

Self-learning-based intelligent evaluation system and method for pentagonal drawing test
Technical Field
The invention relates to a graph analysis and evaluation method, in particular to a self-learning-based intelligent evaluation system and method for pentagonal drawing test.
Background
The pentagon drawing test (PDT) is one item of the Mini-Mental State Examination (MMSE) and is widely used in cognitive impairment screening of the elderly. The subject is asked to copy two intersecting pentagons whose intersection area forms a quadrilateral. PDT can rapidly and reliably assess a subject's visuospatial function, which often declines to varying degrees in neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease. Studies have shown that this test can also effectively distinguish various types of cognitive deficits and predict overall cognitive dysfunction. PDT is usually scored as "pass" or "fail": a correctly drawn figure scores 1, and otherwise 0.
PDT is currently scored by experts or trained technicians familiar with the test. While this task may be easy for a limited number of tests, it becomes very time-consuming and challenging with large datasets, such as extensive community cognitive screening or years of an individual's disease history. In addition, whether a drawing meets the scoring criteria is ultimately a subjective judgment that may be affected by various subtle uncertainties in the drawing, such as ambiguous shapes, slightly curved edges, and asymmetric figures. Thus, inter-rater and intra-rater consistency may be disturbed by many factors, including scoring criteria, experience, attention, and even simple mistakes.
With the development of artificial intelligence (AI), schemes have been proposed to overcome these limitations and achieve automatic scoring. Compared with subjective human judgment, an AI program offers higher efficiency and repeatability; in addition, it can reduce cost and promote electronic or remote application of the PDT and MMSE, which is consistent with current health policy during the COVID-19 pandemic and the trend toward digital health.
In the prior art, reference document 1 [Park I, Kim YJ, Lee U (2020) Automatic, qualitative scoring of the interlocking pentagon drawing test (PDT) based on U-Net and mobile sensor data. Sensors (Switzerland) 20, 1283] describes an automatic scoring tool for electronic PDT developed with deep learning. The described model is designed specifically for scoring PDT collected on a smartphone or tablet, and it uses a set of self-developed, more complex scoring criteria comprising four items: the number of corners, the distance between pentagons, the integrity of the figure contour, and the presence or absence of tremor. Implementing the model requires digital images as well as mobile device sensor data, including spatial coordinates, timestamps, and touch events. These requirements not only increase the amount of computation and the size of the model, but also greatly reduce its practicality. In particular, the scoring criteria are not universal, and the model cannot score accurately when any of the above data inputs is missing, which prevents its application to conventional paper PDT, the more common test modality, especially in developing countries or regions. Moreover, most deep learning algorithms, including the above model, are "black boxes": their decision basis cannot be traced or analyzed, so it is difficult for humans to understand the result and judge its reasonableness.
In addition, reference document 2 (Chinese patent application No. 202010390856.X, entitled "AD scale hand-drawn crossed-pentagon classification method based on convolutional deep neural network") uses median filtering, which removes discrete isolated-point noise from black-and-white pictures but cannot remove large background shadows or bright spots. Furthermore, that document labels the hand-drawn pictures directly with the evaluation result, so each label corresponds to only one evaluation standard and re-labeling is required whenever the standard changes (for example, from "pass/fail" to a numeric score). It also classifies pictures with a convolutional deep neural network without addressing picture acquisition, transmission or storage. Finally, with a data set of only a few hundred pictures, that method needs median filtering and image enhancement during training to reach an accuracy of 60-70%, and because the line thickness of hand-drawn crossed pentagons varies between images, its generalization ability is limited.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art by providing a self-learning-based intelligent evaluation system and method for the pentagon drawing test that can score a digitized image quickly and accurately and whose processing result is traceable and easy to understand.
In order to solve the technical problems, the invention adopts the following technical scheme.
A self-learning-based intelligent evaluation system for the pentagon drawing test comprises: a digital image acquisition module for acquiring a specified number of digitized pentagon drawings; a data preprocessing module for assembling the digitized images acquired by the digital image acquisition module into an image training set, adjusting the digitized images in the training set to highlight the quadrilaterals and pentagons in them, marking the quadrilaterals and pentagons, and recording their coordinate information in the digitized images; an image analysis module for training a deep learning model on the quadrilateral and pentagon coordinate information of all digitized images in the training set, the deep learning model being used to analyze a preprocessed PDT image and obtain the geometric figure types and coordinate information in it; and an evaluation and display module for judging, according to the analysis result of the deep learning model, whether the geometric figures in the PDT image satisfy the preset PDT figure conditions and generating evaluation result information.
A self-learning-based intelligent evaluation method for the pentagon drawing test is implemented on a system comprising a digital image acquisition module, a data preprocessing module, an image analysis module and an evaluation and display module, and comprises the following steps: step S1, the digital image acquisition module acquires a specified number of digitized pentagon drawings; step S2, the data preprocessing module assembles the acquired digitized images into an image training set and adjusts the digitized images in the training set to highlight the quadrilaterals and pentagons in them; step S3, the data preprocessing module marks the quadrilaterals and pentagons in the digitized images and records their coordinate information; step S4, the image analysis module trains a deep learning model on the quadrilateral and pentagon coordinate information of all digitized images in the training set; step S5, a preprocessed PDT image is input into the deep learning model, which analyzes it to obtain the geometric figure types and coordinate information in the PDT image; and step S6, the evaluation and display module judges, according to the analysis result of the deep learning model, whether the geometric figures in the PDT image satisfy the preset PDT figure conditions and generates evaluation result information.
Preferably, in step S1, the digitized images acquired by the digital image acquisition module include photographed, scanned or photocopied images.
Preferably, in step S2, the preprocessing of the digitized images by the data preprocessing module includes any one or more of black-and-white conversion, noise reduction, target area identification, cropping and resizing.
Preferably, step S2 includes the following noise-reduction steps for the digitized image: step S200, converting a color digitized image from a multi-channel color image into a single-channel grayscale image, and selecting an optimal threshold according to the histogram distribution of pixel grayscale; step S201, binarizing the grayscale image with this threshold: all pixels with intensity below the threshold are set to 0 and the rest to 1, which converts the grayscale image into a black-and-white image and completes the noise reduction.
Preferably, if the digitized image still contains many edge artifacts after the noise reduction of step S201, the digitized image is cropped to form a new image containing only the target geometry.
Preferably, cropping the digitized image is performed manually or automatically using a pre-trained algorithm that identifies the target area.
Preferably, in step S1, a part of the digitized pentagon drawings acquired by the digital image acquisition module also forms an image test set, and in step S4 the digitized images in the test set are preprocessed and then input into the deep learning model for verification.
Preferably, in step S6, the evaluation result information generated by the evaluation and display module includes, but is not limited to: "pass" and "fail" information; probabilities of identified quadrilaterals and pentagons; scoring of each digitized image.
In the self-learning-based intelligent evaluation system and method for the pentagon drawing test, the digital image acquisition module acquires a specified number of digitized pentagon drawings; the hardware it relies on includes, but is not limited to, drawing programs on a computer, mobile phone or tablet and their cameras, digital cameras, scanners, and the like. The data preprocessing module analyzes the pixel characteristics of the digitized image to extract the target area, reduce noise, remove color and crop, finally generating a standardized image containing the target drawing. The image analysis module contains a deep learning model such as a deep neural network which, through convolution operations, judges the position, probability and other information of each pentagon and quadrilateral in the image. The evaluation and display module obtains the final score by analyzing the type, number and positional relationship of the geometric figures and displays this information. Compared with traditional machine learning methods, the invention combines a state-of-the-art deep learning method with an interpretable logical analysis, achieving higher efficiency, higher accuracy and stronger generalization ability than conventional deep learning methods.
Drawings
FIG. 1 is a block diagram of a self-learning-based intelligent evaluation system for pentagonal drawing test according to the present invention;
FIG. 2 is a flow chart of the self-learning-based intelligent evaluation method for the pentagonal drawing test of the invention;
FIG. 3 is a schematic diagram of the pattern change in the image preprocessing step;
FIG. 4 is a diagram of the structure of the YOLO model;
FIG. 5 is a diagram illustrating a prediction result and a visualization of a deep learning model;
FIG. 6 is a diagram illustrating a ROC curve comparison of deep learning models.
Detailed Description
The invention is described in more detail below with reference to the figures and examples.
The invention discloses a self-learning-based intelligent evaluation system for pentagonal drawing test, which is shown in figure 1 and comprises the following components:
the digital image acquisition module 1 is used for acquiring a specified number of digital images of pentagonal drawing;
the data preprocessing module 2 is configured to combine the digitized images of the pentagon drawing acquired by the digital image acquisition module 1 into an image training set, adjust the digitized images in the image training set to highlight quadrangles and pentagons in the digitized images, mark the quadrangles and pentagons in the digitized images, and record coordinate information of the quadrangles and the pentagons in the digitized images;
the image analysis module 3 is used for training according to quadrilateral and pentagonal coordinate information of all digitized images in the image training set to obtain a deep learning model, and the deep learning model is used for analyzing the PDT image after preprocessing and obtaining the type of a geometric figure and coordinate information in the PDT image;
and the evaluation and display module 4 is used for judging whether the geometric figure in the PDT image meets the preset PDT figure condition according to the analysis result of the deep learning model and generating evaluation result information.
In the above system, the digital image acquisition module 1 may be configured to acquire a specified number of digitized pentagon drawings, and its functions may of course be extended to generating, extracting, transmitting and/or storing the digitized images. The hardware on which the invention relies includes, but is not limited to, drawing programs on computers, mobile phones or tablets and their cameras, digital cameras, scanners, and the like. The data preprocessing module 2 analyzes the pixel characteristics of the digitized images; it marks and divides the image data set used for training and testing the model, and its preprocessing function extracts the target area, reduces noise, removes color and crops, finally generating a standardized image containing the target drawing. The image analysis module 3 contains a deep learning model such as a deep neural network which is trained for object recognition and judges the position, probability and other information of each pentagon and quadrilateral in the image. The evaluation and display module 4 obtains the final score by analyzing the type, number and positional relationship of the geometric figures and displays this information. Compared with traditional machine learning methods, the invention combines a state-of-the-art deep learning method with an interpretable logical analysis, achieving higher efficiency, higher accuracy and stronger generalization ability than conventional deep learning methods.
The object recognition model used by the invention relies on a deep neural network and is commonly used for various computer vision tasks such as face recognition and automatic driving. Unlike a traditional image classification model, which only needs to identify the overall class of an image, an object recognition model is trained to identify both the type and the position of objects in the image.
On the basis, the invention discloses a self-learning-based intelligent evaluation method for pentagonal drawing test, which is realized based on the system, wherein the system comprises a digital image acquisition module 1, a data preprocessing module 2, an image analysis module 3 and an evaluation and display module 4, and the method comprises the following steps of, in combination with the following drawings shown in fig. 1 and 2:
step S1, the digital image acquisition module 1 acquires a specified number of digital images of the pentagon drawing;
step S2, the data preprocessing module 2 composes the digitized images of the pentagonal drawing acquired by the digital image acquisition module 1 into an image training set, and adjusts the digitized images in the image training set to highlight the quadrangles and pentagons in the digitized images;
step S3, the data preprocessing module 2 marks the quadrangle and the pentagon in the digitized image, and records coordinate information of the quadrangle and the pentagon in the digitized image;
step S4, the image analysis module 3 trains and obtains a deep learning model based on the coordinate information of the quadrangle and the pentagon of all the digitized images in the image training set;
step S5, inputting the PDT image after preprocessing into the deep learning model, and analyzing by the deep learning model to obtain the geometric figure type and coordinate information in the PDT image;
step S6, the evaluation and display module 4 determines whether the geometric figure in the PDT image satisfies the preset PDT image condition according to the analysis result of the deep learning model, and generates evaluation result information.
Compared with reference documents 1 and 2, the method and model of the invention have the following advantages:
first, the present invention requires only digital image data, and thus has wider applicability and can be used for both electronic and paper PDT. For example, the user may obtain the score by simply providing a digital map generated by electronic version PDT, or by taking a picture of a paper interface through a camera or scanner of the mobile device. Processing pure image data may also reduce the complexity of the entire AI framework, thereby reducing the requirements on hardware configuration and increasing computational efficiency.
Secondly, the invention establishes a set of complete and standardized image preprocessing flow, which comprises the division and marking of a data set during model training, and a series of automatic operations of target region detection, cutting, noise reduction, standardized image generation and the like. The image processing can reduce the interference of image noise such as background artifacts, uneven illumination and the like caused by original images with different qualities, and improve the model operation efficiency and the judgment accuracy.
Thirdly, the model of the invention adopts the most common scoring standards, namely 'pass' and 'fail', so that the result is easier to analyze and provides basis for the next decision; but at the same time this approach can also be used to develop more complex scoring models.
Fourthly, the model of the invention uses object recognition technology and can automatically locate and display all pentagons and quadrilaterals in the original image, thereby intuitively reflecting the basis of the model's judgment, making it easier for users to understand and for developers to debug or optimize the model.
Further, in step S1, the digital image acquired by the digital image acquisition module 1 includes a captured, scanned or copied image.
In step S1, the digital image acquisition module 1 may acquire digitized pentagon drawings from the cloud or a local database; for example, an implementer may obtain existing PDT drawing data from a medical institution or a public database, or collect new data. If the original drawing exists on paper, it is first digitized; available methods include, but are not limited to, photographing with a mobile phone camera, photographing with a digital camera, and scanning with a scanner. During this process, appropriate lighting conditions, shooting angle and range should be set as far as possible to reduce background noise in the picture. The image is saved in JPG format; images in other formats can be converted to JPG with any image processing software. If the original drawing is already a digital image, it only needs to be exported and saved in JPG format. In practice, the data used came from two independent health service institutions: 399 PDT drawings from institution A and 424 from institution B. The original drawings were on paper, digitized by photographing with a mobile phone camera and saved as JPG files, with a resolution of at least 1500 pixels in each dimension. These images were used to train and test the deep learning model.
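As an illustration of the digitization requirements just described, the following is a minimal sketch that converts an input photograph to JPG and flags images below the 1500-pixel-per-dimension resolution used in this work; it assumes the Pillow library, and the file names and quality setting are hypothetical.

```python
# Minimal sketch: normalize a photograph of a paper PDT to JPG and check resolution.
# Assumes Pillow; file names and the quality setting are illustrative, not part of the patent.
from PIL import Image

def digitize_to_jpg(src_path: str, dst_path: str, min_side: int = 1500) -> None:
    img = Image.open(src_path).convert("RGB")          # drop alpha/CMYK, keep three channels
    if min(img.size) < min_side:
        print(f"warning: {src_path} is {img.size}, below {min_side}px per dimension")
    img.save(dst_path, format="JPEG", quality=95)

digitize_to_jpg("pdt_photo.png", "pdt_photo.jpg")
```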
In step S2, the preprocessing of the digitized image by the data preprocessing module 2 preferably includes any one or more of black-and-white conversion, noise reduction, target area identification, cropping and resizing. Further, step S2 includes the following noise-reduction steps for the digitized image:
step S200, converting a color digitized image from a multi-channel color image into a single-channel grayscale image, and selecting an optimal threshold according to the histogram distribution of pixel grayscale;
step S201, binarizing the grayscale image with this threshold: all pixels with intensity below the threshold are set to 0 and the rest to 1, which converts the grayscale image into a black-and-white image and completes the noise reduction. If the digitized image still contains many edge artifacts after the noise reduction of step S201, it is cropped to form a new picture containing only the target geometry. Cropping can be done manually or automatically using a pre-trained algorithm that identifies the target area.
As an alternative, identification and cropping of the target area in the data preprocessing module may be performed manually, because the background noise of the original data is relatively high; this ensures the quality of the training data and the accuracy of the model. However, this step can also be automated by specifically training an object-recognition AI model to identify and extract the whole target drawing, so that the entire process runs automatically. That is, the aforementioned CNN-based model can be trained to perform automatic target-area recognition and extraction; the specific method is similar to the training for recognizing quadrilaterals and pentagons, except that the target geometry changes from the quadrilaterals and pentagons to the whole drawing.
The implementation of the image preprocessing is described in the following Example one and Example two:
example one
Referring to fig. 3, the data preprocessing in the invention can be divided into necessary steps and optional steps. The necessary steps are, when training the model, dividing the data set into a training set and a test set, labeling the images, and recording the coordinate information of all quadrilaterals and pentagons in each image; the coordinate information and the images are used to train the deep learning model in the next step. The optional step is preprocessing of the digitized image; the processing adopted in this embodiment includes one or more of black-and-white conversion, noise reduction, target area identification, cropping and resizing, so as to reduce image artifacts and standardize the picture size, thereby further improving the efficiency and performance of model training. Specifically, a color picture is first converted from a multi-channel color space (e.g., RGB: red, green, blue; or CMYK: cyan, magenta, yellow, black) into a single-channel grayscale image, and an optimal threshold is selected according to the histogram distribution of pixel grayscale. The grayscale image is binarized with this threshold: all pixels with intensity below the threshold are set to 0 and the rest to 1. This converts the grayscale image into a black-and-white image and achieves noise reduction. If the image edge artifacts are still numerous, the image can optionally be cropped appropriately to form a new image mainly containing the target drawing. The target area must be identified first, either manually or automatically with an algorithm trained to recognize the target area; the latter usually uses a CNN-based neural network such as YOLO (You Only Look Once), R-CNN (Region-based CNN) or Fast R-CNN, as introduced in the deep learning section of the next step. After the target area is identified, it is cropped with a rectangular window and resized to the final image size required by the deep learning network selected in the next step. If the background artifacts and size of the original black-and-white image already meet the requirements, cropping can be omitted.
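A minimal sketch of this optional preprocessing flow is given below, assuming OpenCV and NumPy. Otsu's method stands in here for "selecting an optimal threshold from the grayscale histogram" (the patent's own dynamic formula appears in Example two), and a naive bounding box around the dark pixels stands in for the manual or learned target-area cropping.

```python
# Sketch of grayscale conversion, histogram-based thresholding, binarization, crop and resize.
# Assumes OpenCV + NumPy; the concrete choices below are illustrative stand-ins.
import cv2
import numpy as np

def preprocess_pdt(path: str, out_size: int = 640) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                    # multi-channel -> single-channel gray
    thr, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = (gray >= thr).astype(np.uint8)                          # below threshold -> 0, rest -> 1
    ys, xs = np.nonzero(binary == 0)                                 # dark pixels = drawn strokes
    if xs.size:                                                      # crop a rectangle around the drawing
        binary = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(binary * 255, (out_size, out_size),
                      interpolation=cv2.INTER_NEAREST)               # standardized black-and-white image
```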
Example two
In this embodiment, the original data set is subjected to stratified 5-fold cross-validation: the data set is divided into 5 equal parts with the same ratio of PDT pass and fail samples in each subset. During training and validation of the model, 20% of the samples (n = 165) are reserved for testing each time and the remaining 80% (n = 658) are used for training; the process is repeated 5 times, each time with a different test fold, so that in the end every sample is tested exactly once.
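A sketch of this stratified 5-fold split is shown below, assuming scikit-learn; the file names and labels are placeholders, not the real data (the actual experiment used 823 expert-scored images, 399 + 424 from the two institutions).

```python
# Stratified 5-fold cross-validation sketch: equal pass/fail ratios in every fold.
# Assumes scikit-learn; paths and labels are placeholders.
from sklearn.model_selection import StratifiedKFold

image_paths = [f"pdt_{i:04d}.jpg" for i in range(823)]   # placeholder file names
labels = [i % 2 for i in range(823)]                      # placeholder pass(1)/fail(0) scores

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(image_paths, labels)):
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")   # roughly 658 / 165 per fold
```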
On this basis, the step S2 includes the following steps:
step S210, converting the digital image from an RGB channel image into a single-channel gray image;
step S211, for the grayscale image, determining the threshold value of each digitized image using the following formula:
h = η1·min(Ixy) + η2·max(Ixy) + (1 − η1 − η2)·mean(Ixy);
wherein: η1 and η2 are proportional weights, and Ixy (1 ≤ x ≤ Lx, 1 ≤ y ≤ Ly) is the pixel grayscale of the whole image;
step S212, after converting into a black-and-white image, manually cropping the target geometric figure with a rectangular window lx × ly, and expanding the cropped area into a square d × d image according to the following formula:
d = max(lx, ly).
preferably, in step S1, a part of the digitized images of the pentagonal drawing acquired by the digital image acquisition module 1 further form an image test set, and in step S4, the digitized images in the image test set are preprocessed and then input to the deep learning model for inspection. However, this is only a preferred implementation manner of the present invention, and the present invention preferably extracts a part of the digitized image of the pentagonal drawing to form an image test set, which can generate the image test set quickly and conveniently, but is not limited to this in practical application, and according to actual environmental conditions, the image test set can also be formed in other manners, for example, a small number of images can be collected from the same or related mechanisms to form the image test set.
In practical application, the image analysis step involves a deep learning model, and its implementation is described separately for a training phase and an application phase. In the training phase, the data preprocessed in the above steps are used to train an object detection deep learning model. Such models are generally based on CNN structures, including but not limited to R-CNN, Fast R-CNN, YOLO or SSD (Single Shot MultiBox Detector). The implementer can select one of these models and tune its hyperparameters to achieve satisfactory results. The performance of the trained deep learning model is verified on the test set and then applied in practice. In the application phase, any new PDT picture is input into the trained model, which analyzes it to obtain the type and location of the geometric figures it contains; this information is used for the judgment in the next step. If optional image optimization steps such as noise reduction and cropping were applied to the image data during training, it is recommended to process new pictures with the same methods during application.
For example, the deep learning network may be the YOLO model, whose neural network structure consists of main parts such as a BackBone, a PANet and an Output head, and which is divided into several sub-models (small, medium, large and extra large) according to complexity; see fig. 4. This implementation used the medium-sized YOLO (YOLOv5m) with approximately 21 million parameters and 2 output layers, and the model was trained for 100 epochs in a PyTorch environment. The resulting model was evaluated on the test set; the results are given in the next step.
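The description above does not name a specific code base; as one possible realization, the setup could be reproduced with the public ultralytics/yolov5 repository. The command, dataset file, paths and the two object classes (pentagon and quadrilateral) in the sketch below are therefore assumptions for illustration.

```python
# Sketch of training and applying a YOLOv5m detector for PDT shapes.
# Assumes the ultralytics/yolov5 repository; pdt.yaml, paths and class names are hypothetical.
#
# Training, run from a yolov5 checkout:
#   python train.py --img 640 --batch 16 --epochs 100 --data pdt.yaml --weights yolov5m.pt
#
# Inference on a preprocessed PDT image:
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
results = model("pdt_photo.jpg")
detections = results.pandas().xyxy[0]   # xmin, ymin, xmax, ymax, confidence, class, name per box
print(detections)
```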
In a preferred manner, in step S6, the evaluation result information generated by the evaluation and display module 4 includes, but is not limited to:
"pass" and "fail" information;
probabilities of identified quadrilaterals and pentagons;
scoring of each digitized image.
For the evaluation result generation process, an example is as follows: the drawing is scored according to the position information of the quadrilaterals and pentagons obtained from the deep learning model's analysis. The scoring can be a simple pass/fail judgment, or other more complex scoring standards can be used, such as a percentage score based on the number of recognized pentagons, their completeness, the distance between them, and so on; the algorithm can be adjusted accordingly to the chosen scoring criteria. The final result can be output as a file or displayed on a graphical interface of the target device, such as a mobile phone APP interface or computer image-browsing software, and the generated evaluation result information can also be uploaded to a cloud database.
As a preferred technique, the scoring method uses "pass" and "fail" criteria. Specifically, a PDT drawing is considered a "pass" only if there are two and only two pentagons and one quadrilateral in the figure, and the coordinates of the quadrilateral area are contained within the area where the two pentagons intersect; all samples that do not meet this standard are judged "fail". The final displayed result includes the score of each drawing and the range and probability of each identified quadrilateral and pentagon (figs. 5a-f, with the original image on the left and the displayed result on the right). The prediction ability of the model established in the invention was verified on the test sets of the stratified 5-fold cross-validation experiment: based on the final consensus score of 3 neurologists, the AI model achieved an average accuracy of 94%, sensitivity of 95%, precision of 93% and specificity of 93%, whereas a conventional image classification model (reference documents 1 and 2) reached only 62%, 62%, 65% and 63%, respectively, on the identical test data. FIG. 6 compares the ROC curves of the two deep learning models, where a larger area under the curve indicates better model performance: the conventional image classification model (left) has an average area of 0.725, while the model used in the invention (right) has an average area of 0.954. The results show that the method greatly improves model performance and has high practical value.
In an alternative embodiment of the invention, in addition to whether the drawing passes, the output of the evaluation and display module 4 displays the probability of each identified pentagon and quadrilateral. This probability essentially reflects the AI model's judgment of how standard each geometric figure is. The user may therefore set a probability "threshold" for the model: in the prototype it defaults to 0, meaning every shape identified by the AI is kept, even if it deviates from a standard regular pentagon or quadrilateral. Geometric figures whose probability falls below the chosen value (for example 0.5) are excluded from the subsequent decision criteria, which may further improve the performance of the model. In other words, this setting allows the user to increase the "rigor" of the scoring, i.e., to require the drawn figures to be closer to standard pentagons and quadrilaterals.
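A minimal sketch of this pass/fail rule, including the optional probability threshold, is given below. It operates on axis-aligned bounding boxes as returned by a detector, so box containment stands in for the coordinate-containment check; the class names and the 0.5 example threshold are assumptions.

```python
# Pass/fail rule: exactly two pentagons, exactly one quadrilateral, and the quadrilateral
# box contained in the intersection of the two pentagon boxes. Low-probability shapes
# can be excluded first by raising prob_threshold (default 0.0 keeps everything).
def score_pdt(detections, prob_threshold: float = 0.0) -> str:
    kept = [d for d in detections if d["confidence"] >= prob_threshold]
    pentagons = [d["box"] for d in kept if d["name"] == "pentagon"]
    quads = [d["box"] for d in kept if d["name"] == "quadrilateral"]
    if len(pentagons) != 2 or len(quads) != 1:
        return "fail"
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = pentagons
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)          # intersection of the two pentagon boxes
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    if ix0 >= ix1 or iy0 >= iy1:
        return "fail"                                 # pentagons do not overlap
    qx0, qy0, qx1, qy1 = quads[0]
    inside = ix0 <= qx0 and iy0 <= qy0 and qx1 <= ix1 and qy1 <= iy1
    return "pass" if inside else "fail"

# Example: a stricter scorer that drops shapes the model is unsure about.
# score_pdt(dets, prob_threshold=0.5)
```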
The invention discloses a self-learning-based intelligent evaluation system and method for the pentagon drawing test, providing an AI framework and a corresponding system that realize automatic scoring of PDT. The framework requires only digital image data to achieve rapid, accurate scoring, and the results are repeatable, traceable and understandable. It can be integrated with modern digital equipment for digitized drawing tests and can also score traditional paper tests. These characteristics can reliably assist clinicians in big-data analysis and decision-making, enabling accurate and efficient digital medicine.
Compared with reference document 2 (Chinese patent application No. 202010390856.X, entitled "AD scale hand-drawn crossed-pentagon classification method based on convolutional deep neural network"), the main differences of the invention are as follows: 1. For converting a color/grayscale image into a black-and-white image, reference document 2 does not describe a specific method for computing the binarization threshold, whereas the invention generates the binarized image with a dynamic threshold based on histogram analysis and can compute a suitable threshold for images with different brightness and contrast, so that the hand-drawn pattern stands out from the background. 2. For noise removal, the median filtering of reference document 2 removes discrete isolated-point noise from black-and-white pictures but cannot remove large background shadows or bright spots; the dynamic threshold method of the invention was designed for the diversity of picture sources, suppresses background shadows or bright spots to the greatest extent, and isolated-point noise does not affect the recognition of the hand-drawn lines. The invention performs no further filtering or denoising after binarization, which improves preprocessing efficiency. 3. For picture labeling, reference document 2 labels the hand-drawn pictures directly with the evaluation result, so each label corresponds to only one evaluation standard and re-labeling is required when the standard changes (for example, to "pass/fail"); in the invention, the quadrilateral and pentagon annotations do not need to change, and only the code implementing the scoring logic needs to be modified. 4. For analysis, reference document 2 classifies pictures with a convolutional deep neural network, whereas the invention recognizes the hand-drawn pattern and makes a logical judgment by combining object recognition with set-theoretic reasoning, which is closer to a physician's actual diagnostic practice and easier to understand and correct. 5. Reference document 2 is limited to picture-processing steps and does not cover picture acquisition, transmission or storage. 6. With the same data volume of a few hundred pictures, reference document 2 needs median filtering and image enhancement during training to reach 60-70% accuracy, whereas the invention reaches more than 90% accuracy without median filtering or image enhancement, showing that denoising filters and image enhancement are not necessary. 7. The line thickness of hand-drawn crossed pentagons varies between images, which limits the generalization ability of the method in reference document 2, whereas the invention has good fault tolerance and generalization ability.
Compared with the prior art, the beneficial effects of the invention comprise the following points:
first, the present invention can be applied to the analysis of different kinds of manual drawing results, including paper PDT and electronic PDT; it can be developed into a mobile phone APP, computer software or a background program, combined with various electronic devices such as smartphones, drawing tablets and digital cameras, and adapted to different medical or research environments;
secondly, the method includes a reliable image preprocessing flow that can reduce image noise and picture size as much as possible without sacrificing the effective information in the image, thereby improving model performance and computational efficiency. It also sets the threshold and crops automatically, so pictures in various formats acquired from different regions, devices and environments can be processed accurately and efficiently. Furthermore, binarization based on a dynamic threshold already acts as a filter to some extent: because the hand-drawn figure and the background can be roughly separated by histogram analysis, and the binarization threshold is computed from that analysis, part of the noise is classified as background and removed. At the same time, combining object recognition with set-theoretic analysis effectively reduces the interference of local pixel noise with the scoring result. Therefore, although the scheme does not design a filter for specific noise characteristics, it can achieve better fault tolerance, stability and prediction accuracy than a convolutional deep neural network that uses filtering preprocessing.
Thirdly, compared with traditional supervised transfer learning and image classification methods, the automatic scoring scheme, which combines deep learning for image object detection with mathematical logic, requires less data, generalizes better, produces traceable and understandable results, and can adopt more complex and refined scoring methods as needed.
In addition, the automatic scoring method can achieve expert-level performance through testing, so that clinicians can be reliably assisted in data analysis, and the accuracy and efficiency of decision making are improved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents or improvements made within the technical scope of the present invention should be included in the scope of the present invention.

Claims (10)

1. A self-learning-based intelligent evaluation system for the pentagon drawing test, characterized by comprising:
the digital image acquisition module (1) is used for acquiring a specified number of digital images of the pentagonal drawing;
the data preprocessing module (2) is used for assembling the digitized pentagon drawings acquired by the digital image acquisition module (1) into an image training set, adjusting the digitized images in the training set to highlight the quadrilaterals and pentagons in them, marking the quadrilaterals and pentagons in the digitized images, and recording coordinate information of the quadrilaterals and pentagons in the digitized images;
the image analysis module (3) is used for training according to quadrilateral and pentagonal coordinate information of all digitized images in the image training set to obtain a deep learning model, and the deep learning model is used for analyzing the PDT image after preprocessing and obtaining the type of a geometric figure and coordinate information in the PDT image;
and the evaluation and display module (4) is used for judging whether the geometric figure in the PDT image meets the preset PDT figure condition according to the analysis result of the deep learning model and generating evaluation result information.
2. A self-learning-based intelligent evaluation method for pentagonal drawing test is characterized in that the method is realized based on a system, the system comprises a digital image acquisition module (1), a data preprocessing module (2), an image analysis module (3) and an evaluation and display module (4), and the method comprises the following steps:
step S1, the digital image acquisition module (1) acquires a specified number of pentagon drawing digital images;
step S2, the data preprocessing module (2) combines the digital images of the pentagon drawing acquired by the digital image acquisition module (1) into an image training set, and adjusts the digital images in the image training set to highlight the quadrangle and the pentagon in the digital images;
step S3, the data preprocessing module (2) marks quadrangle and pentagon in the digitized image and records coordinate information of the quadrangle and the pentagon in the digitized image;
step S4, the image analysis module (3) trains and obtains a deep learning model based on the coordinate information of the quadrangle and the pentagon of all the digitized images in the image training set;
step S5, inputting the PDT image after preprocessing into the deep learning model, and analyzing by the deep learning model to obtain the geometric figure type and coordinate information in the PDT image;
and step S6, the evaluation and display module (4) judges whether the geometric figure in the PDT image meets the preset PDT figure condition according to the analysis result of the deep learning model, and meanwhile, evaluation result information is generated.
3. The self-learning based intelligent evaluation method for pentagonal plotting tests as claimed in claim 2, wherein in step S1, the digitized image obtained by the digital image obtaining module (1) comprises a photographed, scanned or copied image.
4. The self-learning-based intelligent evaluation method for the pentagon drawing test as claimed in claim 2, wherein in step S2, the preprocessing of the digitized image by the data preprocessing module (2) includes any one or more of black-and-white conversion, noise reduction, target area identification, cropping and resizing.
5. The self-learning based intelligent evaluation method for pentagonal drawing test as claimed in claim 2, wherein the step S2 comprises the step of denoising the digitized image:
step S200, converting the colorful digital image from a multi-color channel image into a single-channel gray image, and selecting an optimal threshold value according to the histogram distribution of pixel gray;
step S201, binarizing the grayscale map using the threshold: and setting all pixel points with the intensity lower than the threshold value as 0 and the rest as 1, and converting the gray-scale image into a black-white image in the step so as to finish the image noise reduction treatment.
6. The self-learning-based intelligent evaluation method for the pentagon drawing test as claimed in claim 5, wherein if the digitized image still contains many edge artifacts after the noise reduction of step S201, the digitized image is cropped to form a new picture containing only the target geometry.
7. The self-learning-based intelligent evaluation method for the pentagon drawing test as claimed in claim 6, wherein cropping the digitized image is performed manually or automatically using a pre-trained automatic target-area recognition algorithm.
8. The self-learning-based intelligent evaluation method for the pentagon drawing test as claimed in claim 2, wherein step S2 comprises the steps of:
step S210, converting the digital image from an RGB channel image into a single-channel gray image;
step S211, for the grayscale image, determining the threshold value of each digitized image using the following formula:
h = η1·min(Ixy) + η2·max(Ixy) + (1 − η1 − η2)·mean(Ixy);
wherein: η1 and η2 are proportional weights, and Ixy (1 ≤ x ≤ Lx, 1 ≤ y ≤ Ly) is the pixel grayscale of the whole image;
step S212, after converting into black-and-white images, manually cropping the target geometric figure with a rectangular window (lx × ly), and expanding the cropped region into a square (d × d) image according to the following formula:
d = max(lx, ly).
9. the self-learning-based intelligent evaluation method for pentagonal drawing test as claimed in claim 2, wherein in step S1, a part of digitized images of pentagonal drawing obtained by the digital image obtaining module (1) further forms an image test set, and in step S4, the digitized images in the image test set are preprocessed and inputted to the deep learning model for verification.
10. The self-learning-based intelligent evaluation method for the pentagon drawing test as claimed in claim 2, wherein in step S6, the evaluation result information generated by the evaluation and display module (4) includes, but is not limited to:
"pass" and "fail" information;
probabilities of identified quadrilaterals and pentagons;
scoring of each digitized image.
CN202111247830.0A 2021-10-26 2021-10-26 Self-learning-based intelligent evaluation system and method for pentagonal drawing test Pending CN113989588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111247830.0A CN113989588A (en) 2021-10-26 2021-10-26 Self-learning-based intelligent evaluation system and method for pentagonal drawing test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111247830.0A CN113989588A (en) 2021-10-26 2021-10-26 Self-learning-based intelligent evaluation system and method for pentagonal drawing test

Publications (1)

Publication Number Publication Date
CN113989588A (en) 2022-01-28

Family

ID=79741608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111247830.0A Pending CN113989588A (en) 2021-10-26 2021-10-26 Self-learning-based intelligent evaluation system and method for pentagonal drawing test

Country Status (1)

Country Link
CN (1) CN113989588A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782964A (en) * 2022-06-20 2022-07-22 阿里巴巴(中国)有限公司 Image processing method, storage medium, and electronic device


Similar Documents

Publication Publication Date Title
CN110033456B (en) Medical image processing method, device, equipment and system
US20190304098A1 (en) Segmenting ultrasound images
WO2020024127A1 (en) Bone age assessment and height prediction model, system thereof and prediction method therefor
WO2021139258A1 (en) Image recognition based cell recognition and counting method and apparatus, and computer device
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN110490892A (en) A kind of Thyroid ultrasound image tubercle automatic positioning recognition methods based on USFaster R-CNN
CN111507426B (en) Non-reference image quality grading evaluation method and device based on visual fusion characteristics
CN112380900A (en) Deep learning-based cervical fluid-based cell digital image classification method and system
WO2023137914A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN103488974A (en) Facial expression recognition method and system based on simulated biological vision neural network
WO2022198898A1 (en) Picture classification method and apparatus, and device
TW202004776A (en) Establishing method of bone age assessment and height prediction model, bone age assessment and height prediction system, and bone age assessment and height prediction method
CN102982542A (en) Fundus image vascular segmentation method based on phase congruency
CN108186051A (en) A kind of image processing method and processing system of the automatic measurement fetus Double Tops electrical path length from ultrasonoscopy
CN108876756A (en) The measure and device of image similarity
CN112862744A (en) Intelligent detection method for internal defects of capacitor based on ultrasonic image
CN115082776A (en) Electric energy meter automatic detection system and method based on image recognition
WO2022088856A1 (en) Fundus image recognition method and apparatus, and device
CN113989588A (en) Self-learning-based intelligent evaluation system and method for pentagonal drawing test
Huang et al. HEp-2 cell images classification based on textural and statistic features using self-organizing map
Badeka et al. Evaluation of LBP variants in retinal blood vessels segmentation using machine learning
CN112927215A (en) Automatic analysis method for digestive tract biopsy pathological section
CN110910497B (en) Method and system for realizing augmented reality map
CN115131355B (en) Intelligent method for detecting waterproof cloth abnormity by using electronic equipment data
CN116168328A (en) Thyroid nodule ultrasonic inspection system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication