CN117314949A - Personnel injury detection and identification method based on artificial intelligence - Google Patents

Personnel injury detection and identification method based on artificial intelligence Download PDF

Info

Publication number
CN117314949A
CN117314949A CN202311594840.0A CN202311594840A CN117314949A CN 117314949 A CN117314949 A CN 117314949A CN 202311594840 A CN202311594840 A CN 202311594840A CN 117314949 A CN117314949 A CN 117314949A
Authority
CN
China
Prior art keywords
segmentation threshold
initial segmentation
class
indicate
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311594840.0A
Other languages
Chinese (zh)
Other versions
CN117314949B (en
Inventor
田强
曹梦晨
王玲玲
于静
孙印亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yuanshuo Shangchi Health Technology Co ltd
Original Assignee
Shandong Yuanshuo Shangchi Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yuanshuo Shangchi Health Technology Co ltd filed Critical Shandong Yuanshuo Shangchi Health Technology Co ltd
Priority to CN202311594840.0A priority Critical patent/CN117314949B/en
Publication of CN117314949A publication Critical patent/CN117314949A/en
Application granted granted Critical
Publication of CN117314949B publication Critical patent/CN117314949B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a personal injury detection and identification method based on artificial intelligence. The method comprises the following steps: acquiring a plurality of injury images of a person; acquiring a plurality of initial segmentation thresholds of each injury image, acquiring inter-class characteristics of each initial segmentation threshold, acquiring intra-class characteristics of each initial segmentation threshold, and acquiring adjustment parameters of each initial segmentation threshold according to the inter-class characteristics and the intra-class characteristics of each initial segmentation threshold; and adjusting each initial segmentation threshold according to the adjustment parameters of each initial segmentation threshold to obtain a plurality of final segmentation thresholds of each injury image, obtaining a plurality of segmentation areas of each injury image according to the plurality of final segmentation thresholds, and carrying out injury detection and identification according to the plurality of segmentation areas of each injury image. Therefore, each injury area is accurately segmented, and accurate injury detection and identification are realized.

Description

Personnel injury detection and identification method based on artificial intelligence
Technical Field
The invention relates to the technical field of image processing, in particular to a personal injury detection and identification method based on artificial intelligence.
Background
The detection and identification of personal injury has important significance in the medical field. Accurate detection and timely identification of injury are important for rapid treatment decisions, recovery of injured persons and reasonable utilization of medical resources. The traditional injury detection method generally depends on experience and manual analysis of doctors, and has the problems of high subjectivity, long time consumption, easy error and the like. With the rapid development of artificial intelligence technology, computer vision, deep learning, machine learning and other technologies have been significantly advanced in the field of medical diagnosis. These techniques can process large-scale medical data, identify complex injury patterns, and provide accurate injury analysis results in a short time.
In order to realize the injury identification by using the neural network, the neural network needs to be trained, and the training of the neural network needs to acquire a data set. The injury image in the data set generally needs to be manually marked. Because the number of the injury images to be marked in the data set is large, people can only mark each injury region approximately, the injury region obtained by the marking mode is not accurate enough, and the neural network can learn some unnecessary characteristics during training and learning, so that the training efficiency of the neural network can be affected.
In order to achieve accurate labeling of the injured areas, each injured area needs to be accurately segmented. The injury region segmentation is typically achieved by a threshold segmentation method. The main factor affecting the accuracy of the segmentation of the injured area when the threshold segmentation is performed is the set segmentation threshold. There is more than one degree of injury in the image, and multiple segmentation thresholds are required to separate the different degrees of injury. The traditional method mostly adopts a maximum inter-class variance method to obtain the segmentation threshold value. The maximum inter-class variance method is accurate in obtaining a single threshold value, and the method is not accurate enough in obtaining multiple segmentation threshold values. Therefore, how to accurately obtain a plurality of segmentation thresholds and further realize accurate injury detection becomes a problem to be solved.
Disclosure of Invention
In order to solve the technical problems, the invention provides an artificial intelligence-based personal injury detection and identification method, which adopts the following technical scheme:
acquiring a plurality of injury images of a person;
obtaining a plurality of initial segmentation thresholds according to the severe change condition of data in a statistical histogram of each injury image, obtaining a plurality of local intervals according to the initial segmentation thresholds, obtaining inter-class characteristics of each initial segmentation threshold according to the data difference between two adjacent local intervals of each initial segmentation threshold, obtaining intra-class characteristics of each initial segmentation threshold according to the inter-class data similarity of two adjacent local intervals of each initial segmentation threshold, and obtaining adjustment parameters of each initial segmentation threshold according to the inter-class characteristics and the intra-class characteristics of each initial segmentation threshold;
and adjusting each initial segmentation threshold according to the adjustment parameters of each initial segmentation threshold to obtain a plurality of final segmentation thresholds of each injury image, segmenting each injury image by using the plurality of final segmentation thresholds to obtain a plurality of segmentation areas of each injury image, and detecting and identifying the injury according to the plurality of segmentation areas of each injury image.
Preferably, the obtaining a plurality of initial segmentation thresholds according to the severe change condition of the data in the statistical histogram of each injury image includes the specific steps of:
counting the gray values of all pixels of each wounded image to obtain a gray histogram of each wounded image, fitting the gray histogram of the wounded image to a curve, calculating the steepness of each data point on the curve of the gray histogram of the wounded image by using the existing steepness calculation method, and maximizing the steepnessPersonal dataGray value of point as initial segmentation threshold, +.>Representing a preset number.
Preferably, the obtaining a plurality of local intervals according to the initial segmentation threshold includes the specific steps of:
the method comprises the steps of obtaining a maximum gray value and a minimum gray value in each wounded image, taking a value area between the minimum gray value and the maximum gray value as an integral interval, and dividing the integral interval into a plurality of local intervals by utilizing an initial dividing threshold.
Preferably, the obtaining the inter-class feature of each initial segmentation threshold according to the data difference between two adjacent local intervals of each initial segmentation threshold includes the following specific steps:
obtaining pixels with gray values belonging to each local interval from each wounded image, and marking the pixels as pixels of each local interval; the method comprises the steps that two local sections adjacent to each initial segmentation threshold are obtained in all local sections and are marked as reference local sections of each initial segmentation threshold, wherein the reference local section on the left side of each initial segmentation threshold is marked as a first reference local section of each initial segmentation threshold, and the reference local section on the right side of each initial segmentation threshold is marked as a second reference local section of each initial segmentation threshold;
taking each initial segmentation threshold value as a segmentation threshold value of the pixel of the reference local interval, calculating and analyzing the gray value of the pixel of the reference local interval by using the existing inter-class variance calculation method to obtain the inter-class variance of each initial segmentation threshold value, and recording the inter-class variance as the reference inter-class variance of each initial segmentation threshold value;
obtaining the inter-class distance of each initial segmentation threshold;
the calculation method for obtaining the inter-class characteristics of each initial segmentation threshold according to the inter-class distance and the reference inter-class variance of each initial segmentation threshold comprises the following steps:
wherein,indicate->Inter-class distance of the initial segmentation threshold, +.>Indicate->Reference inter-class variance of the initial segmentation threshold, < ->Representing a linear normalization process,/->Indicate->Inter-class features for the initial segmentation threshold.
Preferably, the obtaining the inter-class distance of each initial segmentation threshold includes the specific steps of:
wherein,indicate->First reference local interval of initial segmentation threshold>Gray value->Indicate->First +.in the second reference local interval of the initial segmentation threshold>Gray value->Indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->The first ∈of the reference local interval of the initial segmentation threshold>The weight of the individual gray values is determined,indicate->The number of pixels in the second reference partial interval of the initial segmentation threshold,/>Indicate->Pixels in a first reference local interval of the initial segmentation thresholdCount (n)/(l)>Indicate->First +.in the second reference local interval of the initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->First reference local interval of initial segmentation threshold>Number of pixels corresponding to gray value, +.>Representing natural constant->Indicate->Inter-class distance of the initial segmentation threshold, +.>Representing absolute value symbols.
Preferably, the obtaining the intra-class feature of each initial segmentation threshold according to the intra-interval data similarity of two adjacent local intervals of each initial segmentation threshold includes the specific steps of:
acquiring intra-class distribution differences of each initial segmentation threshold;
the method for calculating the information confusion in each initial segmentation threshold comprises the following steps:
wherein,indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->First reference local interval of initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->First +.in the second reference local interval of the initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->Information confusion within class of the initial segmentation threshold,/->Representing a base 2 logarithmic function;
the method for calculating the intra-class features of each initial segmentation threshold according to intra-class distribution differences and intra-class information confusion of each initial segmentation threshold comprises the following steps:
wherein,indicate->Intra-class distribution differences of the initial segmentation threshold,/->Indicate->Information confusion within class of the initial segmentation threshold,/->Representing natural constant->Representing a linear normalization process,/->Indicate->Intra-class features for each initial segmentation threshold.
Preferably, the obtaining the intra-class distribution difference of each initial segmentation threshold includes the specific steps of:
wherein,indicate->First +.in the second reference local interval of the initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->First reference local interval of initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->The average pixel number corresponding to all gray values in the first reference local interval of the initial segmentation threshold,/>Indicate->The average number of pixels corresponding to all gray values in the second reference local interval of the initial segmentation threshold,indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->Intra-class distribution differences of the initial segmentation threshold,/->Representing absolute value symbols.
Preferably, the obtaining the adjustment parameter of each initial segmentation threshold according to the inter-class feature and the intra-class feature of each initial segmentation threshold includes the following specific steps:
wherein,indicate->Intra-class feature of the initial segmentation threshold, +.>Indicate->Inter-class features of the initial segmentation threshold, +.>Indicate->Adjustment parameters of the initial segmentation threshold, +.>Representing natural constants.
Preferably, the adjusting the initial segmentation threshold according to the adjustment parameter of each initial segmentation threshold to obtain a plurality of final segmentation thresholds of each injury image includes the following specific steps:
for any initial segmentation threshold, acquiring a maximum gray value in a second reference local section of the initial segmentation threshold as a first cut-off value, taking the initial segmentation threshold as a first segmentation threshold, acquiring an adjustment parameter of the first segmentation threshold, and combining the adjustment parameter of the first segmentation threshold with a preset adjustment parameter thresholdComparing, when the adjustment parameter of the first segmentation threshold is smaller than the preset adjustment parameter threshold +.>When the first segmentation threshold is used as a final segmentation threshold;
when the adjustment parameter of the first segmentation threshold is greater than or equal to the preset adjustment parameter thresholdAt the same time, the first division threshold value is combined with the preset first adjustment amount +.>And (2) as a second segmentation threshold, acquiring an adjustment parameter of the second segmentation threshold, and adjusting the adjustment parameter of the second segmentation threshold and a preset adjustment parameter threshold +.>Comparing, when the adjustment parameter of the second segmentation threshold is smaller than the preset adjustment parameter threshold +.>When the first segmentation threshold is used as a final segmentation threshold; and the like until a final segmentation threshold value or a segmentation threshold value which is more than or equal to a first cut-off value is obtained;
when the segmentation threshold value is greater than or equal to the first cut-off value, acquiring the minimum gray value in the first reference local interval of the initial segmentation threshold value, marking the minimum gray value as a second cut-off value, and presetting a second adjustment amountReplacing the preset first adjustment amount->And replacing the first cut-off value with the second cut-off value, and adjusting the initial segmentation threshold values to obtain final segmentation threshold values corresponding to each initial segmentation threshold value.
Preferably, the detecting and identifying of the injury is performed according to a plurality of divided areas of each injury image, including the following specific steps:
labeling the severity of the injury of each divided area in each injury image by people, forming a data set by all the injury images with labels, constructing an injury detection and identification network, and completing the training of the injury detection and identification network by using the data set; and inputting the newly acquired injury image into a trained injury identification network to realize injury detection and identification.
The invention has the following beneficial effects:
in order to realize accurate injury detection and identification, the label of the injury image of the injury detection and identification network needs to be ensured to be accurate, and in order to improve the label accuracy of the injury image, each injury region in the separated injury image needs to be ensured to be accurate. In order to divide an accurate injury region, the accuracy of the division threshold needs to be improved. Therefore, firstly, acquiring the injury image, and obtaining an initial segmentation threshold according to the variation characteristics of the statistical histogram of the gray values of pixels in the injury image, wherein the obtained initial segmentation threshold is not accurate enough; to adjust the initial segmentation threshold, the initial segmentation threshold is evaluated. When the segmentation effect of the initial segmentation threshold is good, the information difference between the injury areas segmented by the initial segmentation threshold is large, and the information similarity inside each injury area segmented by the initial segmentation threshold is high. Thus, the inter-class characteristics of the initial segmentation threshold are obtained by analyzing the data difference of the two reference local areas segmented by the initial segmentation threshold. And obtaining the intra-class characteristics of the initial segmentation threshold value by analyzing the data similarity in each reference local area segmented by the initial segmentation threshold value. And obtaining an adjustment parameter of each initial segmentation threshold according to the intra-class characteristic and the inter-class characteristic of each initial segmentation threshold, wherein the segmentation effect of the initial segmentation threshold can be reflected through the adjustment parameter. And adjusting each initial segmentation threshold according to the adjustment parameters of each initial segmentation threshold to obtain a final segmentation threshold. The final segmentation threshold has more accurate segmentation effect. And (3) carrying out segmentation processing on the injury image by utilizing the final segmentation threshold value to obtain a plurality of segmentation areas of each injury image, and labeling based on the plurality of segmentation areas, so that the accuracy of training data of the injury detection and identification network is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a personal injury detecting and identifying method based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the artificial intelligence-based personal injury detection and identification method according to the invention, which is provided by combining the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
An embodiment of a personal injury detection and identification method based on artificial intelligence:
the following specifically describes a specific scheme of the artificial intelligence-based personal injury detection and identification method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a personal injury detecting and identifying method based on artificial intelligence according to an embodiment of the invention is shown, where the method includes:
s001: and acquiring a wounded image of the person.
It should be noted that, in order to implement the injury identification by using the neural network, the neural network needs to be trained, and the training of the neural network needs to acquire the data set. The injury image in the data set generally needs to be manually marked. Because the number of the injury images to be marked in the data set is large, people can only mark each injury region approximately, the injury region obtained by the marking mode is not accurate enough, and each injury region needs to be accurately segmented in order to realize accurate injury region marking. In order to accurately divide each injury area, an injury image of a person needs to be acquired first.
Specifically, a plurality of wounded images of personnel disclosed in a hospital are acquired, the size of the wounded image with the largest size is recorded as a reference size, the size of each wounded image is adjusted by using the existing scale adjustment method, and the size of each wounded image is adjusted as the reference size. The gray image of the injury image is obtained by gray processing, and the gray image of the injury image is still recorded as the injury image for convenience in description.
S002: acquiring a plurality of initial segmentation thresholds of each injury image, calculating the intra-class characteristics of each initial segmentation threshold, calculating the inter-class characteristics of each initial segmentation threshold, and obtaining the adjustment parameters of each initial segmentation threshold according to the intra-class characteristics and the inter-class characteristics of each initial segmentation threshold.
In order to perform the segmentation process on each of the injury images, the segmentation threshold of each of the injury images is obtained according to the gray value information in each of the injury images. Normally, the segmentation threshold is generally located at the valley position of the gray level histogram of the wounded image, and the data at the valley position is changed more severely, so that the initial segmentation threshold can be obtained by analyzing the data change condition in the gray level histogram.
Specifically, the gray value of all pixels of each wounded image is counted to obtain the gray histogram of each wounded image, the gray histogram of the wounded image is fitted with a curve, the steepness of each data point on the curve of the gray histogram of the wounded image is calculated by using the existing steepness calculation method, and the steepness is maximizedGray values of data points as initial segmentation threshold, +.>Representing a preset number, the present embodiment is with +.>Taking 50 as an example, other embodiments may take other values, and the present embodiment is not particularly limited.
So far, the initial segmentation threshold is obtained by simply analyzing and obtaining the gray value distribution characteristics, so that the accuracy is lower. The initial segmentation threshold is then evaluated to adjust the segmentation effect of the initial segmentation threshold.
The threshold division divides pixels having similar gray values into one divided region, and pixels having large differences in gray values into different divided regions. Thus, based thereon, intra-class features and inter-class features for each segmentation threshold are obtained.
Further, a maximum gray value and a minimum gray value in each wounded image are obtained, a value area between the minimum gray value and the maximum gray value is used as an integral section, and the integral section is divided into a plurality of local sections by using an initial dividing threshold. And acquiring pixels with gray values belonging to each local interval from each injury image, and marking the pixels as pixels of each local interval. The method comprises the steps of obtaining two local intervals adjacent to each initial segmentation threshold value in all local intervals, marking the two local intervals adjacent to each initial segmentation threshold value as reference local intervals of each initial segmentation threshold value, marking the reference local interval on the left side of each initial segmentation threshold value as a first reference local interval of each initial segmentation threshold value, marking the reference local interval on the right side of each initial segmentation threshold value as a second reference local interval of each initial segmentation threshold value, marking each initial segmentation threshold value as a segmentation threshold value of a pixel of the reference local interval, and calculating and analyzing gray values of the pixels of the reference local interval by using an existing inter-class variance calculation method to obtain inter-class variances of the initial segmentation threshold values and marking the inter-class variances of the pixels of each initial segmentation threshold value as the reference inter-class variances of each initial segmentation threshold value.
The computing method of the inter-class characteristics of each initial segmentation threshold value comprises the following steps:
wherein,indicate->First reference local interval of initial segmentation threshold>Gray value->Indicate->First +.in the second reference local interval of the initial segmentation threshold>Gray value->Indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->The first ∈of the reference local interval of the initial segmentation threshold>The weight of the gray value, the larger this value is, the more +.>First two reference local intervals of initial segmentation threshold>The larger the difference in gray values of the individual pixels is, and thus based on +.>The segmentation effect of the initial segmentation threshold is good for each gray value, < >>Indicate->The number of pixels in the second reference partial interval of the initial segmentation threshold,/>Indicate->The number of pixels in the first reference partial interval of the initial segmentation threshold,/>Indicate->First +.in the second reference local interval of the initial segmentation threshold>The number of pixels corresponding to the number of gray values,indicate->First reference local interval of initial segmentation threshold>Number of pixels corresponding to gray value, +.>Representing natural constants. />It is reflected that the larger the difference of the gray values between the two reference partial sections is, the larger the difference of the number of pixels corresponding to the gray values is, and thus the larger the difference between the two reference partial sections is. />Indicate->The larger the value of the inter-class distance of the initial segmentation threshold value, which indicates that the larger the difference of the gray values of the pixels of the two reference segmentation intervals of the initial segmentation threshold value, the better the segmentation effect of the initial segmentation threshold value, the better the +.>Indicate->The reference inter-class variance of the initial segmentation threshold, the larger this value is, the more +.>The better the segmentation effect of the initial segmentation threshold is, < >>Representing a linear normalization process,/->Indicate->Inter-class features of the initial segmentation threshold by which the +.>Inter-class difference conditions of two reference local intervals segmented by the initial segmentation threshold values. A larger value of this value indicates a better segmentation effect, < >>Representing absolute value symbols.
Thus, the inter-class characteristics of each initial segmentation threshold are obtained, and the gray level difference condition of pixels in two local intervals segmented by each initial segmentation threshold can be reflected through the inter-class characteristics.
It should be noted that, the evaluation of the quality of a division threshold value is not only to analyze the difference of the gray values of the pixels of the same division threshold value in different local intervals, but also to analyze the similarity of the gray values of the pixels of the same local interval.
Further, the method for calculating the intra-class features of each initial segmentation threshold comprises the following steps:
wherein,indicate->First +.in the second reference local interval of the initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->First reference local interval of initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->The average pixel number corresponding to all gray values in the first reference local interval of the initial segmentation threshold,/>Indicate->The average number of pixels corresponding to all gray values in the second reference local interval of the initial segmentation threshold,indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Reflect->Distribution difference of gray value corresponding pixels in first reference local interval of initial segmentation threshold value, +.>Reflect->The difference in the distribution of gray values corresponding to pixels in the second reference local interval of the initial segmentation threshold, the larger these two values are, the more +.>The more non-uniform the distribution of each gray value corresponding to the pixels in each reference partial interval of the initial segmentation threshold, the more the peak-to-valley variation exists, and the class difference exists between the gray values due to the peak-to-valley variation, so that other thresholds are needed in each reference partial interval, the more the intra-class difference is, and the less the intra-class features describing the intra-class similarity should be>Represent the firstIntra-class distribution differences of the initial segmentation thresholds; />Indicate->First reference local interval of initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->First +.in the second reference local interval of the initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->Information entropy of all gray value-corresponding pixels in the second reference local interval of the initial segmentation threshold, which value reflects the +.>In the case where the pixels corresponding to all gray values in the second reference partial section of the initial segmentation threshold carry an amount of information,indicate->Information entropy of all gray value-corresponding pixels in the first reference local interval of the initial segmentation threshold, which value reflects the +.>And the pixels corresponding to all gray values in the first reference local interval of the initial segmentation threshold values carry information quantity. />Indicate->Within-class information confusion for individual initial segmentation thresholdsSex (S)/(S)>Representing natural constant->Representing a linear normalization process,/->Indicate->Within-class features of the initial segmentation threshold by which to react to +.>And (3) the similarity of the two reference local intervals segmented by the initial segmentation threshold values. The larger the value, the better the segmentation effect. />Representing absolute value symbols, ++>A logarithmic function with a base of 2 is shown.
Thus, the intra-class feature and the inter-class feature of each initial segmentation threshold are obtained, and the segmentation effect of the initial segmentation threshold can be reflected by the two values. In order to obtain a better segmentation threshold, each initial segmentation threshold needs to be adjusted, so that the two values are used to obtain the adjustment parameters of each initial segmentation threshold.
Further, the calculation method of the adjustment parameter of each initial segmentation threshold value is as follows:
wherein,indicate->Within-class features of an initial segmentation threshold, the larger this value is, the more +.>The higher the intra-class similarity of the two reference local intervals segmented by the initial segmentation threshold, the better the segmentation effect of the initial segmentation threshold,indicate->Inter-class features of an initial segmentation threshold, the larger this value is, the more +.>The larger the difference between the two reference local intervals divided by the initial dividing threshold is, the better the dividing effect of the initial dividing threshold is>Indicate->An adjustment parameter of an initial segmentation threshold, the larger the value is, the worse the segmentation effect of the initial segmentation threshold, the larger the adjustment is needed, the +.>Representing natural constants.
S003: and obtaining a final segmentation threshold according to the adjustment parameters of the initial segmentation threshold, and segmenting the injury image according to the final segmentation threshold to obtain a plurality of segmentation areas.
It should be noted that, the adjustment parameter of each initial segmentation threshold is obtained in the above process, and the final segmentation threshold is obtained by adjusting the initial segmentation threshold based on the adjustment parameter of each initial segmentation threshold.
Specifically, for any one initial segmentation threshold, the maximum gray value in the second reference local section of the initial segmentation threshold is obtained and recorded as a first cut-off value, and the initial segmentation threshold is taken as a first segmentation thresholdAcquiring an adjustment parameter of a first segmentation threshold, and combining the adjustment parameter of the first segmentation threshold with a preset adjustment parameter thresholdComparing, when the adjustment parameter of the first segmentation threshold is smaller than the preset adjustment parameter threshold +.>When the first segmentation threshold is used as a final segmentation threshold;
when the adjustment parameter of the first segmentation threshold is greater than or equal to the preset adjustment parameter thresholdAt the same time, the first division threshold value is combined with the preset first adjustment amount +.>And (2) as a second segmentation threshold, acquiring an adjustment parameter of the second segmentation threshold, and adjusting the adjustment parameter of the second segmentation threshold and a preset adjustment parameter threshold +.>Comparing, when the adjustment parameter of the second segmentation threshold is smaller than the preset adjustment parameter threshold +.>When the first segmentation threshold is used as a final segmentation threshold; and the like until a final segmentation threshold value or a segmentation threshold value greater than or equal to a first cut-off value is obtained.
When the segmentation threshold value is greater than or equal to the first cut-off value, acquiring the minimum gray value in the first reference local interval of the initial segmentation threshold value, marking the minimum gray value as a second cut-off value, and presetting a second adjustment amountReplacing the preset first adjustment amount->Replacing the first cut-off value with the second cut-off value, and adjusting the initial segmentation threshold to obtain each initial segmentation threshold pairA final segmentation threshold value is applied.
The embodiment is toTake 10>Get 1>Taking-1 as an example, other embodiments may take other values, and the example is not particularly limited.
Further, the final segmentation threshold values are utilized to segment each injury image to obtain a plurality of segmentation areas.
S004: and carrying out injury detection and identification according to a plurality of segmentation areas of each injury image.
Specifically, label labeling is performed on the severity of the injury of each divided area in each injury image, all the injury images with labels form a data set, an injury detection and identification network is constructed, the injury detection and identification network in the embodiment is a YoloV13 network, and the training of the injury detection and identification network is completed by using the data set. And inputting the newly acquired injury image into a trained injury identification network to realize injury detection and identification.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The artificial intelligence-based personal injury detection and identification method is characterized by comprising the following steps of:
acquiring a plurality of injury images of a person;
obtaining a plurality of initial segmentation thresholds according to the severe change condition of data in a statistical histogram of each injury image, obtaining a plurality of local intervals according to the initial segmentation thresholds, obtaining inter-class characteristics of each initial segmentation threshold according to the data difference between two adjacent local intervals of each initial segmentation threshold, obtaining intra-class characteristics of each initial segmentation threshold according to the inter-class data similarity of two adjacent local intervals of each initial segmentation threshold, and obtaining adjustment parameters of each initial segmentation threshold according to the inter-class characteristics and the intra-class characteristics of each initial segmentation threshold;
and adjusting each initial segmentation threshold according to the adjustment parameters of each initial segmentation threshold to obtain a plurality of final segmentation thresholds of each injury image, segmenting each injury image by using the plurality of final segmentation thresholds to obtain a plurality of segmentation areas of each injury image, and detecting and identifying the injury according to the plurality of segmentation areas of each injury image.
2. The artificial intelligence-based personal injury detection and identification method according to claim 1, wherein the obtaining of the initial segmentation threshold according to the severe variation of the data in the statistical histogram of each injury image comprises the following specific steps:
counting the gray values of all pixels of each wounded image to obtain a gray histogram of each wounded image, fitting the gray histogram of the wounded image to a curve, calculating the steepness of each data point on the curve of the gray histogram of the wounded image by using the existing steepness calculation method, and maximizing the steepnessGray values of data points as initial segmentation threshold, +.>Representing a preset number.
3. The artificial intelligence-based personal injury detection and identification method according to claim 1, wherein the obtaining a plurality of local intervals according to the initial segmentation threshold comprises the following specific steps:
the method comprises the steps of obtaining a maximum gray value and a minimum gray value in each wounded image, taking a value area between the minimum gray value and the maximum gray value as an integral interval, and dividing the integral interval into a plurality of local intervals by utilizing an initial dividing threshold.
4. The artificial intelligence-based personal injury detection and identification method according to claim 1, wherein the obtaining the inter-class feature of each initial segmentation threshold according to the data difference between two adjacent local intervals of each initial segmentation threshold comprises the following specific steps:
obtaining pixels with gray values belonging to each local interval from each wounded image, and marking the pixels as pixels of each local interval; the method comprises the steps that two local sections adjacent to each initial segmentation threshold are obtained in all local sections and are marked as reference local sections of each initial segmentation threshold, wherein the reference local section on the left side of each initial segmentation threshold is marked as a first reference local section of each initial segmentation threshold, and the reference local section on the right side of each initial segmentation threshold is marked as a second reference local section of each initial segmentation threshold;
taking each initial segmentation threshold value as a segmentation threshold value of the pixel of the reference local interval, calculating and analyzing the gray value of the pixel of the reference local interval by using the existing inter-class variance calculation method to obtain the inter-class variance of each initial segmentation threshold value, and recording the inter-class variance as the reference inter-class variance of each initial segmentation threshold value;
obtaining the inter-class distance of each initial segmentation threshold;
the calculation method for obtaining the inter-class characteristics of each initial segmentation threshold according to the inter-class distance and the reference inter-class variance of each initial segmentation threshold comprises the following steps:
wherein,indicate->Inter-class distance of the initial segmentation threshold, +.>Indicate->Reference inter-class variance of the initial segmentation threshold, < ->Representing a linear normalization process,/->Indicate->Inter-class features for the initial segmentation threshold.
5. The artificial intelligence based personal injury detection and identification method of claim 4, wherein the obtaining the inter-class distance of each initial segmentation threshold comprises the following specific steps:
wherein,indicate->First reference local interval of initial segmentation threshold>Gray value->Indicate->First +.in the second reference local interval of the initial segmentation threshold>Gray value->Indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->The first ∈of the reference local interval of the initial segmentation threshold>Weight of individual gray values, +.>Indicate->The number of pixels in the second reference partial interval of the initial segmentation threshold,/>Indicate->The number of pixels in the first reference partial interval of the initial segmentation threshold,/>Indicate->First +.in the second reference local interval of the initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->First reference local interval of initial segmentation threshold>Number of pixels corresponding to gray value, +.>Representing natural constant->Indicate->Inter-class of initial segmentation thresholdsDistance (L)>Representing absolute value symbols.
6. The artificial intelligence based personal injury detection and identification method according to claim 4, wherein the obtaining the intra-class feature of each initial segmentation threshold according to the intra-interval data similarity of two adjacent local intervals of each initial segmentation threshold comprises the following specific steps:
acquiring intra-class distribution differences of each initial segmentation threshold;
the method for calculating the information confusion in each initial segmentation threshold comprises the following steps:
wherein,indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Represent the firstThe second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->First reference local interval of initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->First +.in the second reference local interval of the initial segmentation threshold>Frequency of occurrence of pixels of individual gray values in the wounded image,/for each pixel of the gray values>Indicate->Information confusion within class of the initial segmentation threshold,/->Representing a base 2 logarithmic function;
the method for calculating the intra-class features of each initial segmentation threshold according to intra-class distribution differences and intra-class information confusion of each initial segmentation threshold comprises the following steps:
wherein,indicate->Intra-class distribution differences of the initial segmentation threshold,/->Indicate->Information confusion within class of the initial segmentation threshold,/->Representing natural constant->Representing a linear normalization process,/->Indicate->Intra-class features for each initial segmentation threshold.
7. The artificial intelligence based personal injury detection and identification method of claim 6 wherein the obtaining of intra-class distribution differences for each initial segmentation threshold comprises the specific steps of:
wherein,indicate->First +.in the second reference local interval of the initial segmentation threshold>Number of pixels corresponding to gray value, +.>Indicate->First reference local interval of initial segmentation threshold>The number of pixels corresponding to the number of gray values,indicate->The average pixel number corresponding to all gray values in the first reference local interval of the initial segmentation threshold,/>Indicate->Average pixel number corresponding to all gray values in the second reference local interval of the initial segmentation threshold,/>Indicate->The first reference local interval of each initial segmentation threshold contains gray value number,/number of gray values>Indicate->The second reference local interval of the initial segmentation threshold contains gray value number,/number>Indicate->Intra-class distribution differences of the initial segmentation threshold,/->Representing absolute value symbols.
8. The artificial intelligence-based personal injury detection and identification method according to claim 1, wherein the obtaining the adjustment parameter of each initial segmentation threshold according to the inter-class feature and the intra-class feature of each initial segmentation threshold comprises the following specific steps:
wherein,indicate->Intra-class feature of the initial segmentation threshold, +.>Indicate->Inter-class features of the initial segmentation threshold, +.>Indicate->Adjustment parameters of the initial segmentation threshold, +.>Representing natural constants.
9. The artificial intelligence based personal injury detection and identification method according to claim 4, wherein the step of adjusting each initial segmentation threshold according to the adjustment parameter of each initial segmentation threshold to obtain a plurality of final segmentation thresholds of each injury image comprises the following specific steps:
for any initial segmentation threshold, acquiring a maximum gray value in a second reference local section of the initial segmentation threshold as a first cut-off value, taking the initial segmentation threshold as a first segmentation threshold, acquiring an adjustment parameter of the first segmentation threshold, and combining the adjustment parameter of the first segmentation threshold with a preset adjustment parameter thresholdComparing, when the adjustment parameter of the first segmentation threshold is smaller than the preset adjustment parameter threshold +.>When the first segmentation threshold is used as a final segmentation threshold;
when the adjustment parameter of the first segmentation threshold is greater than or equal to the preset adjustment parameter thresholdAt the same time, the first division threshold value is combined with the preset first adjustment amount +.>And (2) as a second segmentation threshold, acquiring an adjustment parameter of the second segmentation threshold, and adjusting the adjustment parameter of the second segmentation threshold and a preset adjustment parameter threshold +.>Comparing, when the adjustment parameter of the second segmentation threshold is smaller than the preset adjustment parameter thresholdWhen the first segmentation threshold is used as a final segmentation threshold; and the like until a final segmentation threshold value or a segmentation threshold value which is more than or equal to a first cut-off value is obtained;
when the segmentation threshold value is larger than or equal to the first cut-off value, the minimum gray value in the first reference local interval of the initial segmentation threshold value is acquired and recorded as the second cut-off valueWill preset a second adjustment amountReplacing the preset first adjustment amount->And replacing the first cut-off value with the second cut-off value, and adjusting the initial segmentation threshold values to obtain final segmentation threshold values corresponding to each initial segmentation threshold value.
10. The artificial intelligence based personal injury detection and identification method as set forth in claim 1, wherein the injury detection and identification is performed according to a plurality of divided areas of each injury image, comprising the specific steps of:
labeling the severity of the injury of each divided area in each injury image by people, forming a data set by all the injury images with labels, constructing an injury detection and identification network, and completing the training of the injury detection and identification network by using the data set; and inputting the newly acquired injury image into a trained injury identification network to realize injury detection and identification.
CN202311594840.0A 2023-11-28 2023-11-28 Personnel injury detection and identification method based on artificial intelligence Active CN117314949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311594840.0A CN117314949B (en) 2023-11-28 2023-11-28 Personnel injury detection and identification method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311594840.0A CN117314949B (en) 2023-11-28 2023-11-28 Personnel injury detection and identification method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN117314949A true CN117314949A (en) 2023-12-29
CN117314949B CN117314949B (en) 2024-02-20

Family

ID=89288676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311594840.0A Active CN117314949B (en) 2023-11-28 2023-11-28 Personnel injury detection and identification method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN117314949B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080267498A1 (en) * 2007-04-30 2008-10-30 Mark Shaw Unsupervised color image segmentation by dynamic color gradient thresholding
CN107767388A (en) * 2017-11-01 2018-03-06 重庆邮电大学 A kind of image partition method of combination cloud model and level set
CN109035289A (en) * 2018-07-27 2018-12-18 重庆师范大学 Purple soil image segmentation extracting method based on Chebyshev inequality H threshold value
CN109658424A (en) * 2018-12-07 2019-04-19 中央民族大学 A kind of improved robust two dimension OTSU threshold image segmentation method
WO2021000524A1 (en) * 2019-07-03 2021-01-07 研祥智能科技股份有限公司 Hole protection cap detection method and apparatus, computer device and storage medium
CN112669959A (en) * 2020-12-17 2021-04-16 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Vitiligo state of illness automatic assessment method based on image
CN114240989A (en) * 2021-11-30 2022-03-25 中国工商银行股份有限公司 Image segmentation method and device, electronic equipment and computer storage medium
CN115170487A (en) * 2022-06-24 2022-10-11 浙江同济科技职业学院 Tidal bore steepness indirect calculation method based on image gray scale change rate
CN115578389A (en) * 2022-12-08 2023-01-06 青岛澳芯瑞能半导体科技有限公司 Defect detection method of groove MOS device
CN115797351A (en) * 2023-02-08 2023-03-14 山东第一医科大学(山东省医学科学院) Abnormity detection method for photovoltaic cell panel
CN116137036A (en) * 2023-04-19 2023-05-19 吉林省英华恒瑞生物科技有限公司 Gene detection data intelligent processing system based on machine learning
WO2023134792A2 (en) * 2022-12-15 2023-07-20 苏州迈创信息技术有限公司 Led lamp wick defect detection method
CN116542966A (en) * 2023-06-28 2023-08-04 贵州医科大学附属医院 Intelligent bone age analysis method for children endocrine abnormality detection

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080267498A1 (en) * 2007-04-30 2008-10-30 Mark Shaw Unsupervised color image segmentation by dynamic color gradient thresholding
CN107767388A (en) * 2017-11-01 2018-03-06 重庆邮电大学 A kind of image partition method of combination cloud model and level set
CN109035289A (en) * 2018-07-27 2018-12-18 重庆师范大学 Purple soil image segmentation extracting method based on Chebyshev inequality H threshold value
CN109658424A (en) * 2018-12-07 2019-04-19 中央民族大学 A kind of improved robust two dimension OTSU threshold image segmentation method
WO2021000524A1 (en) * 2019-07-03 2021-01-07 研祥智能科技股份有限公司 Hole protection cap detection method and apparatus, computer device and storage medium
CN112669959A (en) * 2020-12-17 2021-04-16 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Vitiligo state of illness automatic assessment method based on image
CN114240989A (en) * 2021-11-30 2022-03-25 中国工商银行股份有限公司 Image segmentation method and device, electronic equipment and computer storage medium
CN115170487A (en) * 2022-06-24 2022-10-11 浙江同济科技职业学院 Tidal bore steepness indirect calculation method based on image gray scale change rate
CN115578389A (en) * 2022-12-08 2023-01-06 青岛澳芯瑞能半导体科技有限公司 Defect detection method of groove MOS device
WO2023134792A2 (en) * 2022-12-15 2023-07-20 苏州迈创信息技术有限公司 Led lamp wick defect detection method
CN115797351A (en) * 2023-02-08 2023-03-14 山东第一医科大学(山东省医学科学院) Abnormity detection method for photovoltaic cell panel
CN116137036A (en) * 2023-04-19 2023-05-19 吉林省英华恒瑞生物科技有限公司 Gene detection data intelligent processing system based on machine learning
CN116542966A (en) * 2023-06-28 2023-08-04 贵州医科大学附属医院 Intelligent bone age analysis method for children endocrine abnormality detection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JINGDONG LIU: "Corrosion detection around rivets based on improved Otsu algorithm", 《2023 IEEE 3RD INTERNATIONAL CONFERENCE ON ELECTRONIC TECHNOLOGY, COMMUNICATION AND INFORMATION (ICETCI)》, pages 1270 - 1275 *
王健等: "基于二维激光雷达的自适应阈值聚类分割方法", 《中国激光》, vol. 48, no. 16, pages 176 - 183 *
石祥滨第: "一种双阈值红外行人分割方法", 《计算机工程》, vol. 38, no. 12, pages 5 - 8 *

Also Published As

Publication number Publication date
CN117314949B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
WO2021238455A1 (en) Data processing method and device, and computer-readable storage medium
CN115330800B (en) Automatic segmentation method of radiotherapy target area based on image processing
CN110969626B (en) Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network
CN109035269B (en) Cervical cell pathological section pathological cell segmentation method and system
CN104123561B (en) Fuzzy C-mean algorithm remote sensing image automatic classification method based on spatial attraction model
CN103996018B (en) Face identification method based on 4DLBP
CN116152505B (en) Bone target identification and segmentation method based on X-ray data
CN109766838B (en) Gait cycle detection method based on convolutional neural network
CN114723704A (en) Textile quality evaluation method based on image processing
CN111582111A (en) Cell component segmentation method based on semantic segmentation
CN112819747A (en) Method for automatically diagnosing benign and malignant nodules based on lung tomography image
CN115994907B (en) Intelligent processing system and method for comprehensive information of food detection mechanism
CN104217213A (en) Medical image multi-stage classification method based on symmetry theory
CN116912255B (en) Follicular region segmentation method for ovarian tissue analysis
CN113628197A (en) Weakly supervised full-section histopathology image classification method based on contrast learning
CN115115598B (en) Global Gabor filtering and local LBP feature-based laryngeal cancer cell image classification method
CN116402824A (en) Endocrine abnormality detection method based on children bone age X-ray film
CN115311689A (en) Cattle face identification feature extraction model construction method and cattle face identification method
CN104835155A (en) Fractal-based early-stage breast cancer calcification point computer auxiliary detection method
CN117314949B (en) Personnel injury detection and identification method based on artificial intelligence
CN115861308B (en) Acer truncatum disease detection method
CN109886320B (en) Human femoral X-ray intelligent recognition method and system
CN109191452B (en) Peritoneal transfer automatic marking method for abdominal cavity CT image based on active learning
CN114862868B (en) Cerebral apoplexy final infarction area division method based on CT perfusion source data
CN110647870B (en) Method for calculating approximate entropy of resting state fMRI data based on sliding window

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant