CN111523507A - Artificial intelligence wound assessment area measuring and calculating method and device - Google Patents

Artificial intelligence wound assessment area measuring and calculating method and device

Info

Publication number
CN111523507A
CN111523507A
Authority
CN
China
Prior art keywords
wound
artificial intelligence
picture
assessment area
pspnet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010379502.5A
Other languages
Chinese (zh)
Inventor
王成臣
张蕾
谢梁
刘乾
张瀚
张竹影
Current Assignee
Shanghai Jiahe Artificial Intelligence Technology Co ltd
Original Assignee
Shanghai Jiahe Artificial Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jiahe Artificial Intelligence Technology Co ltd filed Critical Shanghai Jiahe Artificial Intelligence Technology Co ltd
Priority to CN202010379502.5A
Publication of CN111523507A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an artificial intelligence wound assessment area measuring and calculating method and device, wherein the method comprises the following steps: S1, constructing a feature extraction network and a convolutional neural network for measuring and calculating the wound assessment area; S2, taking wound pictures and labeling them with a deep learning labeling tool; S3, generating a wound picture training set based on the labeled wound pictures; S4, inputting the wound picture training set into the feature extraction network and the convolutional neural network for image data training to generate a training model; and S5, performing wound assessment area measurement on a newly shot wound picture with the trained feature extraction network and convolutional neural network to obtain wound information. Through an artificial intelligence algorithm, the method saves human resources and improves efficiency and accuracy.

Description

Artificial intelligence wound assessment area measuring and calculating method and device
[ technical field ]
The invention relates to the field of wound image identification, in particular to an artificial intelligence wound assessment area measuring and calculating method and device.
[ background of the invention ]
At present, clinical medical staff measure the size of a wound with a ruler, recording its length and width and calculating its area. The disadvantages of this approach: (1) the measuring tool must contact the wound, creating a hidden risk of cross infection; (2) the minimum unit of a ruler is the centimeter, and centimeter-level measurement is too coarse for later judging the progress of the wound; (3) for wounds with irregular edges, measuring length and width is difficult.
There is currently a "wound measurement" APP ("wound measurement record") whose functions include: photographing and archiving the wound, calculating the area, and recording the patient's basic data. The disadvantages of this approach: (1) the area is calculated only after the wound's length and width are entered manually; (2) wound assessment stops at the computation of the wound area; (3) it sees little clinical use by medical personnel.
There is also a disclosed three-dimensional wound scanning system, developed by the Second Military Medical University under the name of a skin chronic ulcer area calculation device, namely the woundlevervalc wound scanning device. The device consists of a handheld three-dimensional scanner, a wound three-dimensional model reconstruction system and a wound interactive display system. The main function of the handheld three-dimensional scanner is image acquisition; the Kinect for Windows handheld three-dimensional scanner integrates a color camera, a depth (infrared) camera and an infrared projector sensor. The device is used for calculating the area of chronic skin ulcers and is not widely used clinically. Its disadvantage: it achieves only an accurate measurement of the wound area, while other aspects of wound assessment, such as systemic and local assessment of the patient, are not addressed.
The Chinese invention patent with application No. 201610320444.2, applicant Peking University First Hospital, discloses a method and a device for estimating wound area. The device comprises: an image recognition module for performing image recognition on the shot image and recognizing a two-dimensional scale image within it; a proportion confirmation module for using the identified two-dimensional scale image to confirm the ratio of the picture to the actual size; an image clustering module for clustering the shot images by color to obtain multiple classes of wound images corresponding to different tissue components; and an area calculation module for estimating the wound areas of the different tissue components using the image-to-actual-size ratio and the multiple classes of wound images. The Chinese invention patent "an apparatus and method for calculating wound area", application No. 201410551026.5, Kunzhan Wari medical science and technology Co., Ltd, discloses an apparatus for calculating wound area, comprising: a reference dressing accessory placed on the wound dressing surface with the wound whose area is to be calculated within its boundaries; and an image processing device for calculating the wound area from the reference dressing accessory and the wound image. No wound assessment device or system based on an artificial intelligence algorithm was found through retrieval.
The multi-receptive-field pyramid network PSPNet (Pyramid Scene Parsing Network) is a multi-scale estimation network that improves on the feature pyramid network (FPN) by integrating context information: the more global information the segmentation layer has, the lower the probability of segmentation errors. This idea is now applied in many image domains, and there are several ways to introduce more context information: 1. increase the receptive field of the segmentation layer, which is the most intuitive (the wider the field of view, the more is seen); there are in turn many ways to increase the receptive field, such as dilated (atrous) convolution, an implementation successfully applied in the DeepLab algorithm, and global average pooling, which is the route taken by PSPNet; 2. fuse deep-layer and shallow-layer features to enrich the semantic information of the shallow features, so that shallow-layer segmentation has enough context information as well as target detail information; this dates back to the FCN, but the choice of fusion strategy and segmentation layer still leaves room for optimization.
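To make the receptive-field point concrete: for a stack of stride-1 convolutions, the receptive field grows by dilation * (kernel - 1) per layer, which is why dilated convolution widens the field of view without extra parameters. A minimal pure-Python sketch (the layer configurations below are illustrative, not taken from the patent):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 conv layers:
    it grows by dilation * (kernel - 1) per layer."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

# Three 3x3 layers, ordinary convolution: receptive field 7
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
# Same three layers with dilations 1, 2, 4: receptive field 15
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
```

The second call shows the DeepLab-style effect: the same parameter count sees more than twice the context.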
[ summary of the invention ]
The invention aims to provide a method for measuring and calculating the wound assessment area which, through an artificial intelligence algorithm, saves human resources and improves efficiency and accuracy.
In order to achieve the purpose, the technical scheme adopted by the invention is an artificial intelligent wound assessment area measuring and calculating method, which comprises the following steps:
s1, constructing a feature extraction network and a convolutional neural network for measuring and calculating the wound assessment area;
s2, taking a wound picture and marking the wound picture by using a deep learning marking tool;
s3, generating a wound picture training set based on the marked wound picture;
s4, inputting the wound picture training set into a feature extraction network and a convolutional neural network for image data training to generate a training model;
and S5, based on the trained feature extraction network and the convolutional neural network, performing wound assessment area measurement on the shot wound picture to obtain wound information.
Preferably, the feature extraction network is a multi-receptive-field pyramid network PSPNet or a feature pyramid network FPN.
Preferably, the PSPNet aggregates context information using a pyramid pooling module, the pyramid pooling module being layered with 4 layers; the pyramid pooling module level 1 has a kernel size of 60 × 60 and a step size of 60; the pyramid pooling module level 2 has a kernel size of 30 × 30 and a step size of 30; the pyramid pooling module level 3 has a kernel size of 20 × 20 and a step size of 20; the pyramid pooling module level 4 has a kernel size of 10 × 10 and a step size of 10; the convolutional neural network is 101 layers.
Preferably, the step S1 uses the tensorflow-gpu framework to construct the PSPNet and the 101-layer convolutional neural network; the step S2 labels the wound pictures with the deep learning labeling tool Labelme; the step S3 generates json files in batch with Labelme as the wound picture training set.
Preferably, the PSPNet and convolutional neural network deep learning parameters are set as: training iteration number 30000; stochastic gradient descent with parameter 0.9; initial learning rate 0.1 with dynamic decay; weight decay 0.0001.
Preferably, in step S4, the wound image training set is input to the PSPNet and the convolutional neural network, and the PSPNet and the convolutional neural network deep learning parameters are continuously adjusted to improve the accuracy and generate the training model through image data enhancement and image data training.
Preferably, the image data enhancement includes random brightness, random saturation, random contrast processing, and scaling and rotation processing.
Preferably, the step S5 of extracting the feature map of the feature extraction network PSPNet specifically includes the following steps:
s51, inputting wound picture data to the PSPNet;
S52, applying a pre-trained backbone with dilated convolution to output a feature map at 1/8 of the input size;
S53, the pyramid pooling module outputs a (1, 1, 2048) feature map at level 1, a (2, 2, 2048) feature map at level 2, a (3, 3, 2048) feature map at level 3 and a (6, 6, 2048) feature map at level 4;
and S54, obtaining a prediction characteristic diagram of the wound picture through the PSPNet convolution layer.
The second purpose of the invention is to provide a device for measuring and calculating the estimated area of the wound, which saves human resources and improves efficiency and accuracy rate through an artificial intelligence algorithm.
In order to achieve the second objective, the invention adopts a technical solution that is an artificial intelligence wound assessment area measurement and calculation device, including a multispectral camera, a computer and a computer program running on the computer, wherein the multispectral camera is used for shooting wound pictures, and the computer program executes the artificial intelligence wound assessment area measurement and calculation method.
Preferably, the artificial intelligence wound evaluation area measuring and calculating device further comprises an electronic nose, and the electronic nose is used for identifying the smell of the wound.
The invention has the following beneficial effects: 1. it is used for clinical evaluation of various wounds, including pressure injuries, scalds, chronic ulcers, diabetic feet and the like; 2. it is convenient to carry and suitable for clinical use; 3. the patient's wound area is evaluated contact-free through AI technology without manual measurement by medical staff, and the system's computer records and stores the wound area at each wound diagnosis and treatment for reference and for evaluating the treatment effect; 4. the area estimation overcomes individual differences in how pictures are shot, and the system gives homogeneity guidance and correction when fixing the focus, ensuring the accuracy of each evaluation datum.
[ description of the drawings ]
Fig. 1 is a diagram of steps of an artificial intelligence method for estimating the area of a wound.
Fig. 2 is a schematic diagram of an artificial intelligence device for measuring and calculating the estimated area of a wound.
Fig. 3 is a diagram of the steps of an artificial intelligence wound assessment method.
Fig. 4 is a schematic diagram of an artificial intelligence wound assessment intelligent terminal communication.
Fig. 5 is a diagram of an artificial intelligence wound assessment integrated management system architecture.
Fig. 6 is a block diagram of a server program module of an artificial intelligence integrated wound assessment management system.
[ detailed description ] embodiments
The invention is further described with reference to the following examples and with reference to the accompanying drawings.
Example 1
The embodiment realizes an artificial intelligence wound assessment area measuring and calculating method.
FIG. 1 is a step diagram of an artificial intelligence method for measuring and calculating the estimated area of a wound. As shown in fig. 1, an artificial intelligence method for measuring and calculating the estimated area of a wound includes the following steps:
s1, constructing a feature extraction network and a convolutional neural network for measuring and calculating the wound assessment area;
s2, taking a wound picture and marking the wound picture by using a deep learning marking tool;
s3, generating a wound picture training set based on the marked wound picture;
s4, inputting the wound picture training set into a feature extraction network and a convolutional neural network for image data training to generate a training model;
and S5, based on the trained feature extraction network and the convolutional neural network, performing wound assessment area measurement on the shot wound picture to obtain wound information.
Preferably, the feature extraction network is a multi-receptive-field pyramid network PSPNet or a feature pyramid network FPN.
Preferably, the PSPNet aggregates context information using a pyramid pooling module, the pyramid pooling module being layered with 4 layers; the pyramid pooling module level 1 has a kernel size of 60 × 60 and a step size of 60; the pyramid pooling module level 2 has a kernel size of 30 × 30 and a step size of 30; the pyramid pooling module level 3 has a kernel size of 20 × 20 and a step size of 20; the pyramid pooling module level 4 has a kernel size of 10 × 10 and a step size of 10; the convolutional neural network is 101 layers.
Preferably, the step S1 uses the tensorflow-gpu framework to construct the PSPNet and the 101-layer convolutional neural network; the step S2 labels the wound pictures with the deep learning labeling tool Labelme; the step S3 generates json files in batch with Labelme as the wound picture training set.
Preferably, the PSPNet and convolutional neural network deep learning parameters are set as: training iteration number 30000; stochastic gradient descent with parameter 0.9; initial learning rate 0.1 with dynamic decay; weight decay 0.0001.
Preferably, in step S4, the wound image training set is input to the PSPNet and the convolutional neural network, and the PSPNet and the convolutional neural network deep learning parameters are continuously adjusted to improve the accuracy and generate the training model through image data enhancement and image data training.
Preferably, the image data enhancement includes random brightness, random saturation, random contrast processing, and scaling and rotation processing.
Preferably, the step S5 of extracting the feature map of the feature extraction network PSPNet specifically includes the following steps:
s51, inputting wound picture data to the PSPNet;
S52, applying a pre-trained backbone with dilated convolution to output a feature map at 1/8 of the input size;
S53, the pyramid pooling module outputs a (1, 1, 2048) feature map at level 1, a (2, 2, 2048) feature map at level 2, a (3, 3, 2048) feature map at level 3 and a (6, 6, 2048) feature map at level 4;
and S54, obtaining a prediction characteristic diagram of the wound picture through the PSPNet convolution layer.
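Once the trained network produces a wound segmentation mask, the area measurement itself reduces to counting wound pixels and applying an image-to-actual-size scale. The patent does not spell out this conversion, so the sketch below assumes a hypothetical pixels_per_cm calibration factor:

```python
import numpy as np

def wound_area_cm2(mask, pixels_per_cm):
    """Wound area = wound-pixel count divided by (pixels per cm) squared."""
    return float(mask.sum()) / (pixels_per_cm ** 2)

# A 40 x 50 pixel rectangular "wound" in a 100 x 100 binary mask
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:80] = 1
area = wound_area_cm2(mask, pixels_per_cm=10)
print(area)  # 20.0
```

At 10 pixels per cm, the 2000 wound pixels correspond to 20 cm2; sub-pixel precision is what lets the method beat centimeter-resolution ruler measurement.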
Specifically, the implementation labels the pictures: label each picture with the Labelme tool to generate a json file; generate the json files in batches, and generate the training set from the labeled atlas.
The image segmentation model training: construct the PSPNet image segmentation model and a 101-layer convolutional neural network with the tensorflow-gpu framework, input the training set, perform data enhancement and image data training, and continuously adjust the parameters to improve the accuracy and generate the training model.
Specifically, the parameters are set via argparse.ArgumentParser():
training iteration number: 30000;
stochastic gradient descent (SGD) with parameter 0.9;
optimizer: tf.train.GradientDescentOptimizer();
learning rate: 0.1 with dynamic decay;
weight decay: 0.0001.
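For reference, the update rule implied by these settings can be sketched in plain Python. Two assumptions, since the patent does not specify them: the 0.9 is treated as a momentum coefficient, and the "dynamic decay" as an exponential schedule:

```python
def decayed_lr(base_lr, step, decay_rate=0.96, decay_steps=1000):
    """Exponential learning-rate decay (assumed schedule):
    lr = base_lr * decay_rate ** (step / decay_steps)."""
    return base_lr * decay_rate ** (step / decay_steps)

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum 0.9 and L2 weight decay 0.0001."""
    g = grad + weight_decay * w              # weight decay folded into the gradient
    velocity = momentum * velocity - lr * g  # momentum accumulates past gradients
    return w + velocity, velocity

lr = decayed_lr(0.1, step=0)                 # starts at the base rate 0.1
w, v = sgd_momentum_step(1.0, 0.5, 0.0, lr)
```

Over the 30000 iterations named above the schedule shrinks the rate by roughly a factor of three, which is what lets large early steps coexist with fine late adjustments.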
Gradient Descent is the most basic type of optimizer, and currently falls mainly into three variants: standard Gradient Descent (GD), Stochastic Gradient Descent (SGD), and Batch Gradient Descent (BGD).
Specifically, image data enhancement:
random brightness: tf.image.random_brightness();
random saturation: tf.image.random_saturation();
random contrast: tf.image.random_contrast();
scaling: tf.image.resize_images();
rotation: tf.contrib.image.rotate().
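The enhancement ops above can be approximated framework-free; the sketch below jitters brightness and contrast on a NumPy image, with illustrative jitter ranges (the patent does not specify them):

```python
import numpy as np

def augment(img, rng):
    """Random brightness and contrast jitter for a float image in [0, 1]."""
    img = img + rng.uniform(-0.1, 0.1)                  # random brightness shift
    mean = img.mean()
    img = (img - mean) * rng.uniform(0.8, 1.2) + mean   # random contrast scaling
    # a full pipeline would also rescale and rotate the picture together
    # with its label mask so the image and its annotation stay aligned
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
out = augment(np.full((4, 4), 0.5), rng)
```

The key design point is that geometric transforms (scaling, rotation) must be applied identically to the wound picture and its labeled mask, while photometric jitter applies to the picture only.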
Specifically, the PSPNet model structure:
Step 1: input the picture data;
Step 2: apply a pre-trained backbone with dilated convolution to output a feature map at 1/8 of the input size;
Step 3: the pyramid pooling module aggregates context information over 4 pyramid levels:
Level 1: kernel size 60x60, stride 60, output a (1, 1, 2048) feature map;
Level 2: kernel size 30x30, stride 30, output a (2, 2, 2048) feature map;
Level 3: kernel size 20x20, stride 20, output a (3, 3, 2048) feature map;
Level 4: kernel size 10x10, stride 10, output a (6, 6, 2048) feature map;
Step 4: obtain the final prediction feature map through the convolution layers (conv/bn/ReLU): reduce the number of channels to 512 for an output of size (60, 60, 512), then upsample (bilinear interpolation) back to the spatial size of the original input.
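The pyramid level output sizes follow directly from the kernel/stride settings: on a 60x60 backbone map, the output side is (60 - kernel) / stride + 1, giving 1, 2, 3 and 6 bins for the four levels. A NumPy sketch (channel count reduced from 2048 to 8 for brevity):

```python
import numpy as np

def avg_pool(feat, kernel, stride):
    """Plain average pooling over an (H, W, C) feature map."""
    h, w, c = feat.shape
    oh = (h - kernel) // stride + 1
    ow = (w - kernel) // stride + 1
    out = np.zeros((oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = feat[i * stride:i * stride + kernel,
                             j * stride:j * stride + kernel].mean(axis=(0, 1))
    return out

# Stand-in for the (60, 60, 2048) backbone feature map
feat = np.random.rand(60, 60, 8)
shapes = [avg_pool(feat, k, k).shape for k in (60, 30, 20, 10)]
print(shapes)  # [(1, 1, 8), (2, 2, 8), (3, 3, 8), (6, 6, 8)]
```

In the full model each pooled map is then convolved, upsampled back to 60x60 and concatenated with the backbone map before the final conv/bn/ReLU head.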
Example 2
The embodiment realizes an artificial intelligence wound assessment area measuring and calculating device.
Fig. 2 is a schematic diagram of an artificial intelligence device for measuring and calculating the estimated area of a wound. As shown in fig. 2, an artificial intelligence device for measuring and calculating an estimated area of a wound includes a multispectral camera, a computer, and a computer program running on the computer, wherein the multispectral camera is used for taking a picture of a wound, and the computer program executes the artificial intelligence method for measuring and calculating an estimated area of a wound according to embodiment 1.
Preferably, the artificial intelligence wound evaluation area measuring and calculating device further comprises an electronic nose, and the electronic nose is used for identifying the smell of the wound.
The device is used in the wound identification link, replacing manual evaluation, saving human resources and improving efficiency and accuracy.
The device is provided with a multispectral camera (color camera, infrared camera, thermal radiation camera and the like) and an electronic nose device; wound information, such as the various tissues in a wound, can be obtained by taking a picture and feeding it into the model.
Currently there are two types of wound assessment: one is manual measurement with a ruler as the reference object; the other is image processing that identifies the various tissues by color. This embodiment adopts artificial intelligence with deep learning to identify the various wound tissues.
The electronic nose, also called an odor scanner, is a new type of instrument for rapid food detection developed in the 1990s. It uses specific sensors and a pattern recognition system to quickly provide overall information about the tested sample, indicating its implicit characteristics. An electronic nose consists of a selective electrochemical sensor array and a suitable identification method; it can identify simple and complex odors and obtain results consistent with human sensory evaluation.
Example 3
This embodiment implements an artificial intelligence wound assessment method. The method of this embodiment controls the artificial intelligence wound assessment area measurement and calculation device of embodiment 2, and the artificial intelligence wound assessment area measurement and calculation method of embodiment 1 is used to realize artificial intelligence wound assessment.
FIG. 3 is a diagram of the steps of an artificial intelligence wound assessment method. As shown in fig. 3, an artificial intelligence wound assessment method includes the following steps:
t1, establishing communication between the artificial intelligent wound assessment intelligent terminal and the artificial intelligent wound assessment area measuring and calculating device;
t2, instructing an artificial intelligent wound assessment area measuring and calculating device to shoot a wound picture by an artificial intelligent wound assessment intelligent terminal;
t3, inputting the shot wound picture into the trained feature extraction network and the convolutional neural network by the artificial intelligent wound assessment area measuring and calculating device;
t4, carrying out wound assessment area measurement and calculation on the shot wound picture by an artificial intelligent wound assessment area measurement and calculation device to obtain wound information;
and T5, generating an artificial intelligent wound assessment report based on the wound information by the artificial intelligent wound assessment area measuring and calculating device, and sending the artificial intelligent wound assessment report to the artificial intelligent wound assessment intelligent terminal.
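The T1 to T5 flow can be sketched as a toy message exchange. Every class, method and value below is hypothetical, standing in for the terminal-device protocol the patent leaves unspecified:

```python
class AreaDevice:
    """Hypothetical stand-in for the area measuring and calculating device."""
    def capture(self):
        return "wound.jpg"                      # T2: shoot a wound picture
    def assess(self, picture):
        # T3/T4: in the real device the picture goes through the trained
        # PSPNet + CNN; a fixed result stands in for the model output here
        return {"picture": picture, "area_cm2": 12.5}
    def report(self, info):
        # T5: package the wound information into an assessment report
        return "AI wound assessment report: area %.1f cm2" % info["area_cm2"]

class Terminal:
    """Hypothetical intelligent-terminal side of the T1-T5 flow."""
    def run(self, device):
        pic = device.capture()                  # T2: command the device
        info = device.assess(pic)               # T3/T4: measure the area
        return device.report(info)              # T5: receive the report

report = Terminal().run(AreaDevice())
print(report)
```

The split mirrors the claims: the terminal only commands and receives, while all capture and model inference stay on the measuring device.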
Preferably, the feature extraction network is a multi-receptive-field pyramid network PSPNet or a feature pyramid network FPN.
Preferably, the artificial intelligence wound evaluation area measuring and calculating device further comprises an electronic nose, and the artificial intelligence wound evaluation report comprises wound odor information.
Example 4
This embodiment realizes an artificial intelligence wound assessment intelligent terminal. The intelligent terminal of this embodiment controls the artificial intelligence wound assessment area measuring and calculating device of embodiment 2, and uses the artificial intelligence wound assessment area measuring and calculating method of embodiment 1 to realize artificial intelligence wound assessment.
Fig. 4 is a schematic diagram of an artificial intelligence wound assessment intelligent terminal communication. As shown in fig. 4, an artificial intelligence wound assessment intelligent terminal includes an intelligent terminal and an APP program running on the intelligent terminal, where the APP program performs the following operations:
l1, establishing communication between the artificial intelligent wound assessment intelligent terminal and the artificial intelligent wound assessment area measuring and calculating device;
L2, logging in to the artificial intelligence wound assessment integrated management server with a doctor identity, obtaining for the doctor patient information including the patient's wound information;
l3, commanding an artificial intelligent wound assessment area measuring and calculating device to shoot a wound picture, and carrying out wound assessment area measuring and calculating on the shot wound picture based on the trained feature extraction network and the convolutional neural network to obtain wound information;
and L4, acquiring an artificial intelligence wound assessment report generated based on the wound information and sent by the artificial intelligence wound assessment area measuring and calculating device.
Preferably, the feature extraction network is a multi-receptive-field pyramid network PSPNet or a feature pyramid network FPN.
Preferably, the artificial intelligence wound assessment area measuring and calculating device communicated with the artificial intelligence wound assessment intelligent terminal further comprises an electronic nose, and the artificial intelligence wound assessment report comprises wound smell information.
Preferably, the smart terminal is a smart phone or a PAD.
Preferably, the APP program further performs the following operation: L6, commanding the artificial intelligence wound assessment area measuring and calculating device to take wound dressing pictures and letting the doctor input a patient wound supplement report.
Preferably, the patient information includes patient basic information, a patient wound list and a patient wound evaluation report list.
Preferably, the APP program includes operations for modifying/deleting patient basic information, modifying/deleting the patient wound list, and generating a patient wound assessment report comparison list.
The APP program comprises a login page for logging in to the artificial intelligence wound assessment area measuring and calculating device; a home page for displaying doctor information; a patient home page for displaying the patient's basic information and wound information; a new-wound page for adding wounds (including wound type and wound location) to the patient's wound information; a wound comparison page for displaying the comparison list of the patient's wound assessment reports; and a personal center page for displaying the personal information of the doctor user.
Example 5
The embodiment realizes an artificial intelligence wound assessment comprehensive management system.
FIG. 5 is a diagram of an artificial intelligence integrated wound assessment management system. As shown in fig. 5, an artificial intelligence wound assessment integrated management system includes an artificial intelligence wound assessment integrated management server, a plurality of artificial intelligence wound assessment intelligent terminals and a plurality of artificial intelligence wound assessment area measuring and calculating devices, wherein the artificial intelligence wound assessment integrated management server, the plurality of artificial intelligence wound assessment intelligent terminals and the plurality of artificial intelligence wound assessment area measuring and calculating devices are in communication with each other through communication links.
FIG. 6 is a block diagram of a server program module of an artificial intelligence integrated wound assessment management system. As shown in fig. 6, the artificial intelligence wound assessment integrated management server has an equipment management program module for managing the plurality of artificial intelligence wound assessment area measuring and calculating devices; the artificial intelligent wound assessment integrated management server is provided with a doctor management program module and is used for managing doctor information and the mapping relation between a doctor and the artificial intelligent wound assessment intelligent terminal; the artificial intelligence wound assessment integrated management server is provided with a patient management program module for managing patient information, wherein the patient information comprises wound information of patients, and the wound information of the patients comprises artificial intelligence wound assessment report information;
The computer programs running on the artificial intelligence wound assessment intelligent terminal and the artificial intelligence wound assessment area measuring and calculating device perform the following operations:
m1, an artificial intelligence wound assessment intelligent terminal and an artificial intelligence wound assessment area measuring and calculating device are paired, and a one-to-one control connection is established;
m2, the artificial intelligence wound assessment intelligent terminal logs in to the artificial intelligence wound assessment integrated management server with a doctor identity to acquire patient information, including the wound information of the patient;
m3, the artificial intelligence wound assessment area measuring and calculating device takes a wound picture and performs wound assessment area measurement and calculation on the taken picture based on the trained feature extraction network and convolutional neural network to obtain wound information;
m4, the artificial intelligence wound assessment area measuring and calculating device generates an artificial intelligence wound assessment report based on the wound information and sends it to the artificial intelligence wound assessment intelligent terminal and the artificial intelligence wound assessment integrated management server.
Preferably, the feature extraction network is a pyramid scene parsing network PSPNet or a feature pyramid network FPN.
Preferably, the artificial intelligence wound assessment area measuring and calculating device further comprises an electronic nose, and the artificial intelligence wound assessment report comprises wound odor information.
Preferably, the artificial intelligence wound assessment area measuring and calculating device takes a wound dressing picture and sends it to the artificial intelligence wound assessment intelligent terminal, and the artificial intelligence wound assessment intelligent terminal has a program module for inputting a patient wound supplement report.
Preferably, the artificial intelligence wound assessment area measuring and calculating device has an equipment unique identification code, and the equipment management program module detects and identifies the artificial intelligence wound assessment area measuring and calculating device through this unique identification code.
Preferably, the artificial intelligence wound assessment integrated management server has a hospital management program module for managing hospital information, wherein the hospital information includes a hospital list using the artificial intelligence wound assessment area measuring and calculating device; the artificial intelligence wound assessment integrated management server is provided with a department management program module for managing department information, and the department information comprises a department list using the artificial intelligence wound assessment area measuring and calculating device.
Preferably, when the artificial intelligence wound assessment intelligent terminal logs in to the artificial intelligence wound assessment integrated management server with a doctor identity, the login is secured by encryption: the login password is hashed by a dual md5 scheme, and each hospital is configured with a different encryption key.
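As an illustrative sketch only (not part of the claims), one way such a "dual md5 + per-hospital key" scheme could work is shown below; the function names and key handling are assumptions, and md5 is used here only because the embodiment recites it (modern practice prefers stronger password hashes):

```python
import hashlib

def hash_password(password: str, hospital_key: str) -> str:
    # First md5 pass over the raw password.
    first = hashlib.md5(password.encode("utf-8")).hexdigest()
    # Second md5 pass mixes in the per-hospital key, so the same
    # password yields different digests at different hospitals.
    return hashlib.md5((first + hospital_key).encode("utf-8")).hexdigest()

def verify_password(password: str, hospital_key: str, stored: str) -> bool:
    # Hashing is one-way: verification re-hashes and compares digests.
    return hash_password(password, hospital_key) == stored
```

Because the stored value is a digest, the server never needs to (and cannot) recover the original password, matching the "cannot be decrypted" property described in the embodiment.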
Preferably, the artificial intelligence wound assessment integrated management server has an administrator program module, and the administrator program module manages information on the artificial intelligence wound assessment integrated management server.
Specifically, the artificial intelligence wound assessment integrated management server software platform of this embodiment is implemented by programming in php and python with a vue front end.
Software platform framework and function:
equipment unique identification: each piece of equipment has a uniquely registered identification code, and the equipment is detected and identified through this unique identification code;
login security encryption: user information is encrypted; at login the password is hashed by the dual md5 scheme with a different key configured for each hospital, so the stored password cannot be decrypted; in addition, a unique, non-repeating token is returned after a successful login and is refreshed every 2 hours, ensuring data security and program confidentiality;
statistics and list display: after a successful login, the number of patients in the current month and the current patient list are obtained, and patients can be collected and filtered on the patient list page;
AI assessment: the AI assessment function takes a picture and transmits it to the algorithm server; after algorithm analysis, the wound outline is drawn, the wound area, odor, temperature and depth are measured, and the resulting data and pictures are stored; in the report returned by the assessment, the user can additionally photograph a dressing picture and supplement data that some of the algorithms cannot calculate;
file upload and encryption: with the acquired token, pictures in the wound data record are uploaded to a file storage server, and the type and date corresponding to each file are displayed;
Web-end permissions: a group table sets the permission keyword of each function block, a role table defines roles and associates them with the related permissions, and a user table associates each user with the corresponding role, so that different roles display different function blocks and hold permissions for different functions;
user information storage and request speed: the user information and the token are combined and stored using redis, achieving fast reads and high security;
data comparison: data are stored in a queue and sorted through the redis queue technique; the queue enqueues and dequeues data to guarantee the safety and readability of the data;
Web-end wound picture re-labeling: the AILabel.js library handles picture browsing (stepless zooming and panning) and the display of vector data, text and labels, and supports vector data drawing, editing and labeling (rectangle, polygon, freehand drawing, polyline, point, text and marker); pictures can thus be re-labeled and the labeled json stored, so that the algorithm side can verify the model again later.
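The redis list-as-queue behavior described above can be illustrated with an in-memory stand-in (real code would issue redis-py `lpush`/`rpop` commands against a redis server; the class below only mimics those two operations):

```python
from collections import deque

class MiniQueue:
    """In-memory stand-in for the redis list commands used as a queue."""
    def __init__(self):
        self._items = deque()

    def lpush(self, value):
        # Enqueue at the left end, as redis LPUSH does.
        self._items.appendleft(value)

    def rpop(self):
        # Dequeue from the right end, as redis RPOP does;
        # LPUSH + RPOP together give first-in, first-out order.
        return self._items.pop() if self._items else None

q = MiniQueue()
for record in ["wound-1", "wound-2", "wound-3"]:
    q.lpush(record)
# Records come out in the order they were enqueued.
ordered = [q.rpop(), q.rpop(), q.rpop()]
```

The FIFO discipline is what gives the "enqueued and dequeued" safety and orderly readability the platform description refers to.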
The artificial intelligence wound assessment integrated management server software platform comprises a login interface for logging in to the software platform; a big data center for storing the information of all patients, patient wounds and patient wound assessment reports, the information of the artificial intelligence wound assessment area measuring and calculating devices, and the information of doctors, departments and hospitals; and an operation interface for adding/deleting the above information and performing statistical analysis on the related data.
Advantages for medical personnel using the artificial intelligence wound assessment integrated management system of this embodiment:
1. the wound assessment process is simple and convenient, and after medical staff scan the wound, the system performs measurement, calculation and recording, so that the working time of the medical staff is saved;
2. wound assessment is complete, including area, volume, color, odor, exudate, etc. of the wound;
3. remote consultation enables wound experts in different medical institutions to jointly evaluate wounds and formulate treatment plans;
4. remote consultation enables multidisciplinary team (MDT) consultation, so that a multidisciplinary treatment plan is formulated for wound diagnosis and treatment; MDT refers to multidisciplinary consultation in medicine (Multi-Disciplinary Team).
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium, where the storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and additions can be made without departing from the principle of the present invention, and these should also be considered as the protection scope of the present invention.

Claims (10)

1. An artificial intelligence wound assessment area measuring and calculating method is characterized by comprising the following steps:
s1, constructing a feature extraction network and a convolutional neural network for measuring and calculating the wound assessment area;
s2, taking a wound picture and marking the wound picture by using a deep learning marking tool;
s3, generating a wound picture training set based on the marked wound picture;
s4, inputting the wound picture training set into a feature extraction network and a convolutional neural network for image data training to generate a training model;
and S5, based on the trained feature extraction network and the convolutional neural network, performing wound assessment area measurement on the shot wound picture to obtain wound information.
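For illustration only (not part of the claim language), the area measurement of step S5 is commonly implemented by counting wound pixels in the segmentation mask output by the network and converting through a calibration scale; the function name and the `cm_per_pixel` scale below are assumptions:

```python
def wound_area_cm2(mask, cm_per_pixel):
    """Estimate wound area from a binary segmentation mask.

    mask: 2D list of 0/1 values, 1 marking wound pixels.
    cm_per_pixel: real-world side length of one pixel (from calibration).
    """
    wound_pixels = sum(sum(row) for row in mask)
    # Each pixel covers cm_per_pixel * cm_per_pixel of real area.
    return wound_pixels * cm_per_pixel ** 2

# Toy mask with 6 wound pixels; at 0.05 cm per pixel this is 0.015 cm^2.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]
area = wound_area_cm2(mask, cm_per_pixel=0.05)
```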
2. The artificial intelligence wound assessment area measuring and calculating method according to claim 1, wherein: the feature extraction network is a pyramid scene parsing network PSPNet or a feature pyramid network FPN.
3. The artificial intelligence wound assessment area measuring and calculating method according to claim 2, wherein: the PSPNet aggregates context information with a pyramid pooling module having 4 levels: level 1 has a kernel size of 60 × 60 and a stride of 60; level 2 has a kernel size of 30 × 30 and a stride of 30; level 3 has a kernel size of 20 × 20 and a stride of 20; level 4 has a kernel size of 10 × 10 and a stride of 10; the convolutional neural network has 101 layers.
4. The artificial intelligence wound assessment area measuring and calculating method according to claim 3, wherein: step S1 constructs the PSPNet and the 101-layer convolutional neural network using the tensorflow-gpu framework; step S2 labels the wound pictures with the deep learning labeling tool Labelme; step S3 generates json files in batch with the deep learning labeling tool Labelme as the wound picture training set.
5. The artificial intelligence wound assessment area measuring and calculating method according to claim 4, wherein: the PSPNet and convolutional neural network deep learning parameters are set as follows: 30000 training iterations, stochastic gradient descent momentum of 0.9, dynamic learning rate decay of 0.1 and weight decay of 0.0001.
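As an illustrative sketch of how the recited deep learning parameters might be organized in a training script (the base learning rate and decay interval are assumptions, not recited in the claim):

```python
# Hypothetical training configuration; base_learning_rate and
# lr_decay_steps are illustrative assumptions.
config = {
    "max_iterations": 30000,
    "optimizer": "sgd",
    "momentum": 0.9,
    "base_learning_rate": 0.001,
    "lr_decay_factor": 0.1,
    "lr_decay_steps": 10000,
    "weight_decay": 0.0001,
}

def learning_rate(iteration: int, cfg: dict = config) -> float:
    # "Dynamic decay of 0.1": multiply the base rate by the decay
    # factor each time a step boundary is crossed.
    drops = iteration // cfg["lr_decay_steps"]
    return cfg["base_learning_rate"] * cfg["lr_decay_factor"] ** drops
```

Under these assumptions the rate drops by a factor of 10 at iterations 10000 and 20000 over the 30000-iteration run.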
6. The artificial intelligence wound assessment area measuring and calculating method according to claim 5, wherein: step S4 inputs the wound picture training set into the PSPNet and the convolutional neural network and, through image data enhancement and image data training, continuously adjusts the PSPNet and convolutional neural network deep learning parameters to improve the accuracy and generate the training model.
7. The artificial intelligence wound assessment area measurement and calculation method according to claim 6, wherein: the image data enhancement comprises random brightness, random saturation, random contrast processing, scaling and rotation processing.
8. The method according to claim 7, wherein the step S5 of extracting the feature map from the PSPNet comprises the following steps:
s51, inputting wound picture data to the PSPNet;
s52, outputting a feature map at 1/8 of the input size by applying a pretrained dilated convolution network;
s53, the pyramid pooling module outputs a feature map of size (1, 1, 2048) at level 1, (2, 2, 2048) at level 2, (3, 3, 2048) at level 3 and (4, 4, 2048) at level 4;
and S54, obtaining a prediction characteristic diagram of the wound picture through the PSPNet convolution layer.
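The per-level outputs in steps S51-S54 come from PSPNet-style adaptive average pooling, which reduces the feature map to a fixed n × n grid per level. A minimal single-channel sketch (channels omitted for brevity; level sizes 1-4 as recited) follows:

```python
def adaptive_avg_pool(feature, out_size):
    """Average-pool a 2D feature map down to out_size x out_size bins,
    as each level of a pyramid pooling module does (per channel)."""
    h, w = len(feature), len(feature[0])
    pooled = []
    for i in range(out_size):
        r0, r1 = i * h // out_size, (i + 1) * h // out_size
        row = []
        for j in range(out_size):
            c0, c1 = j * w // out_size, (j + 1) * w // out_size
            cells = [feature[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(cells) / len(cells))
        pooled.append(row)
    return pooled

# Toy 8x8 feature map with values 0..63.
feature = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
levels = {n: adaptive_avg_pool(feature, n) for n in (1, 2, 3, 4)}
```

Level 1 collapses the whole map to its global average, while the finer levels preserve progressively more spatial layout; concatenating the upsampled levels with the original map is what gives PSPNet its multi-scale context.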
9. An artificial intelligence wound assessment area estimation device, comprising a multispectral camera, a computer and a computer program running on the computer, wherein the multispectral camera is used for taking a picture of a wound, and the computer program is used for executing an artificial intelligence wound assessment area estimation method according to any one of claims 1 to 8.
10. The artificial intelligence wound assessment area measurement device of claim 9, wherein: the artificial intelligence wound assessment area measuring and calculating device further comprises an electronic nose, and the electronic nose is used for identifying wound smell.
CN202010379502.5A 2020-05-07 2020-05-07 Artificial intelligence wound assessment area measuring and calculating method and device Pending CN111523507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010379502.5A CN111523507A (en) 2020-05-07 2020-05-07 Artificial intelligence wound assessment area measuring and calculating method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010379502.5A CN111523507A (en) 2020-05-07 2020-05-07 Artificial intelligence wound assessment area measuring and calculating method and device

Publications (1)

Publication Number Publication Date
CN111523507A true CN111523507A (en) 2020-08-11

Family

ID=71912226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379502.5A Pending CN111523507A (en) 2020-05-07 2020-05-07 Artificial intelligence wound assessment area measuring and calculating method and device

Country Status (1)

Country Link
CN (1) CN111523507A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140088402A1 (en) * 2012-09-25 2014-03-27 Innovative Therapies, Inc. Wound measurement on smart phones
CN106023269A (en) * 2016-05-16 2016-10-12 北京大学第医院 Method and device for estimating wound area
CN111067531A (en) * 2019-12-11 2020-04-28 中南大学湘雅医院 Wound measuring method and device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140088402A1 (en) * 2012-09-25 2014-03-27 Innovative Therapies, Inc. Wound measurement on smart phones
CN106023269A (en) * 2016-05-16 2016-10-12 北京大学第医院 Method and device for estimating wound area
CN111067531A (en) * 2019-12-11 2020-04-28 中南大学湘雅医院 Wound measuring method and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hengshuang Zhao, Jianping Shi, et al.: "Pyramid Scene Parsing Network" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668484A (en) * 2020-12-29 2021-04-16 上海工程技术大学 Method for detecting access distance of moving and static nodes of automatic shutter of switch
CN112668484B (en) * 2020-12-29 2023-04-21 上海工程技术大学 Method for detecting access distance between dynamic and static nodes of automatic switch machine shutter
CN114882098A (en) * 2021-09-26 2022-08-09 上海交通大学医学院附属第九人民医院 Method, system and readable storage medium for measuring area of specific region of living body

Similar Documents

Publication Publication Date Title
US11783480B2 (en) Semi-automated system for real-time wound image segmentation and photogrammetry on a mobile platform
CN106709917A (en) Neural network model training method, device and system
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN108461129A (en) A kind of medical image mask method, device and user terminal based on image authentication
CN107548497A (en) Adapted treatments management system with Workflow Management engine
CN105512467A (en) Digit visualization mobile terminal medical method
CN109887077B (en) Method and apparatus for generating three-dimensional model
CN109636910A (en) A kind of cranium face restored method generating confrontation network based on depth
CN111523507A (en) Artificial intelligence wound assessment area measuring and calculating method and device
CN110189324B (en) Medical image processing method and processing device
Pires et al. Wound area assessment using mobile application
CN110300547A (en) Medical information virtual reality server system, medical information virtual reality program, medical information virtual reality system, the creation method and medical information virtual reality data of medical information virtual reality data
CN110334566A (en) Fingerprint extraction method inside and outside a kind of OCT based on three-dimensional full convolutional neural networks
CN115170629A (en) Wound information acquisition method, device, equipment and storage medium
CN116228787A (en) Image sketching method, device, computer equipment and storage medium
CN113240661A (en) Deep learning-based lumbar vertebra analysis method, device, equipment and storage medium
CN117877691B (en) Intelligent wound information acquisition system based on image recognition
Benbelkacem et al. Lung infection region quantification, recognition, and virtual reality rendering of CT scan of COVID-19
CN111523508A (en) Artificial intelligence wound assessment method and intelligent terminal
CN111523506A (en) Artificial intelligence wound evaluation integrated management system
CN113096811B (en) Diabetes foot image processing and risk early warning equipment based on infrared thermal imaging
CN110008922A (en) Image processing method, unit, medium for terminal device
WO2021052150A1 (en) Radiation therapy plan recommendation method and apparatus, electronic device, and storage medium
CN115268531B (en) Water flow temperature regulation control method, device and equipment for intelligent bathtub and storage medium
CN115762721A (en) Medical image quality control method and system based on computer vision technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination