CN115511783A - Self-iteration wound evaluation system based on deep camera shooting and deep learning - Google Patents


Info

Publication number
CN115511783A
Authority
CN
China
Prior art keywords
wound
module
image
deep learning
iterative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210923919.2A
Other languages
Chinese (zh)
Inventor
陈佳丽
宁宁
刘颖
李佩芳
屈俊宏
邓悟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
West China Hospital of Sichuan University
Original Assignee
West China Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by West China Hospital of Sichuan University filed Critical West China Hospital of Sichuan University
Publication of CN115511783A publication Critical patent/CN115511783A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0014: Biomedical image inspection using an image reference approach
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10004: Still image; photographic image
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20081: Training; learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of wound assessment, and in particular to a self-iterative wound assessment system based on depth imaging and deep learning. The system of the invention comprises: an acquisition unit, which acquires wound pictures and sends them to the processing unit, and which also corrects errors in the processing unit's assessment results and feeds the corrections back to the processing unit; a processing unit, which automatically judges the area and volume of the wound, self-iterates on the corrections from the acquisition unit to evolve the model and continuously improve assessment accuracy, and synchronously transmits the wound pictures and assessment results to the cloud; and a cloud terminal, through which the assessment results can be remotely checked and modified. The system can assess the condition of a wound accurately and has good application prospects.

Description

Self-iterative wound assessment system based on deep camera shooting and deep learning
Technical Field
The invention relates to the technical field of wound assessment, and in particular to a self-iterative wound assessment system based on depth imaging and deep learning.
Background
Traditional wound assessment is achieved by training medical staff to master relevant wound assessment knowledge and then judging manually from the actual clinical presentation. Because wounds are often irregular and diverse, and the skill levels of medical personnel vary, a clinician often needs a long time to carry out the necessary assessment of a single wound. Moreover, when the wound area is calculated with a wound measuring ruler, the area of an irregular wound is difficult to obtain accurately and is in practice only estimated.
Current wound assessment relies entirely on the experience of medical personnel: different treatment schemes must be selected according to the type and stage of the wound, and medical personnel need long training and study to assess wounds accurately, so the training cycle is long and the training cost is high. In the prior art, wounds may be photographed for online assessment by medical personnel. This is convenient, but factors such as shooting angle and image clarity all affect the clinician's judgment of the wound, so the resulting assessments are often not accurate enough.
Machine learning has been applied in many medical diagnostic fields, and relevant machine learning models have also been built for wound assessment. For example, CN202010379502.5 ("Artificial intelligence wound assessment area measuring and calculating method and device") measures the wound area from photographed wound pictures with a convolutional neural network model to obtain wound information. However, existing methods can only evaluate information such as the area; further information such as wound depth and color change cannot be analyzed, which limits the application of machine learning in wound assessment.
Disclosure of Invention
The invention aims to provide a self-iterative wound assessment system based on depth imaging and deep learning that assists in judging acute/chronic wound healing and infection status, performs multi-dimensional wound assessment based on picture analysis, derives the wound-healing trend, and assists medical staff in making wound treatment decisions and intervening precisely according to the healing rate and infection status.
A self-iterative wound assessment system based on depth camera and deep learning, comprising:
the acquisition unit, which acquires a wound picture and sends it to the processing unit, corrects errors in the evaluation result of the processing unit, and feeds the correction back to the processing unit;
the processing unit, which automatically judges the area and volume of the wound, self-iterates according to the corrections from the acquisition unit to evolve the model and continuously improve evaluation accuracy, and synchronously transmits the wound picture and the evaluation result to the cloud;
and the cloud end is used for remotely checking and modifying the evaluation result.
Preferably, the acquisition unit is an intelligent terminal capable of executing an AI program and having a network function.
Preferably, the intelligent terminal comprises a computing device with network function, a mobile phone, a PDA or a PAD, wherein the computing device comprises a desktop computer, a server or a workstation.
Preferably, the acquisition unit comprises an image acquisition module, a data display module and a marking module; the image acquisition module comprises an intelligent terminal capable of executing an image acquisition program and the image acquisition program and is used for acquiring wound image information, the marking module is used for sketching and marking images through the editing and marking program, and the data display module is used for displaying the data information of the image acquisition module and the marking module.
Preferably, the image acquisition module is a depth camera.
Preferably, the image acquisition module is capable of acquiring a planar image of the wound and multi-point depth information of the image.
Preferably, the processing unit comprises a processing module, an error correction module, and a database. The database contains a trained wound assessment model; the processing module assesses the wound picture acquired by the acquisition unit according to that model and feeds the output result back to the acquisition unit; the error correction module applies manual corrections to the processing module's assessment result and outputs the corrected result to the wound assessment model in the database for iterative improvement.
Preferably, the database includes a deep learning training engine, a deep learning analysis engine and a data set self-iterative procedure.
Preferably, the processing procedure of the processing module includes the following steps:
performing wound evaluation on the wound image according to the wound evaluation model, and outputting a coordinate set of a wound evaluation result in an original plane image;
calculating an acquisition distance and a real scale ratio according to the image depth information, calculating a wound area according to the real scale ratio and a coordinate set, and calculating a wound volume according to the coordinate set, the image depth information and the real scale ratio;
and synthesizing the wound evaluation result and the original image, feeding the result back to the acquisition unit, and displaying the result in a visual mode.
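The three processing steps above can be sketched in code. The sketch below is a minimal illustration under stated assumptions, not the patented implementation: it assumes a pinhole camera model with known focal lengths, represents the model's coordinate set as a boolean segmentation mask, and takes the skin surface around the wound as the depth reference plane (all of these are assumptions not specified in the text).

```python
import numpy as np
from scipy.ndimage import binary_dilation


def wound_area_and_volume(mask, depth_map, fx, fy):
    """Illustrative area/volume computation.

    mask      : (H, W) boolean wound segmentation, a hypothetical stand-in
                for the model's output coordinate set
    depth_map : (H, W) per-pixel depth in metres from the depth camera
    fx, fy    : focal lengths in pixels, assumed known from calibration
    """
    # "Acquisition distance": depth of the skin immediately around the wound,
    # taken from a thin ring of pixels just outside the segmentation.
    ring = binary_dilation(mask, iterations=5) & ~mask
    z_skin = float(np.median(depth_map[ring]))

    # "Real scale ratio" under a pinhole camera model: metres per pixel.
    mx, my = z_skin / fx, z_skin / fy

    # Wound area: pixel count inside the wound times the per-pixel footprint.
    area = float(mask.sum()) * mx * my

    # Wound volume: integrate how far each wound pixel lies below the
    # surrounding skin plane (the wound bed is farther from the camera).
    excess = np.clip(depth_map - z_skin, 0.0, None)
    volume = float(excess[mask].sum()) * mx * my
    return area, volume
```

With a 10 x 10 pixel wound 1 cm deep viewed from 0.5 m with fx = fy = 500, each pixel covers 1 mm x 1 mm, giving an area of 1 cm^2 and a volume of 1 cm^3.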
Preferably, the software in the acquisition unit and the processing unit includes APP, applet, H5, hybrid APP, and cloud APP.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, firstly, a wound picture is collected, the area of the wound can be automatically judged through an AI (artificial intelligence) engine based on deep learning and a trained model, if depth information is collected through a depth camera, the volume of the wound can be calculated, and in the using process, under the condition that evaluation errors are corrected by corners, the system can iterate by itself, evolve the model and continuously improve the evaluation accuracy.
On the basis of measuring the wound area, the wound depth and wound color change can be further detected, and suspected infection foci can be estimated from color changes within the wound. Through continuous measurement and data analysis, the wound healing condition, including the degree of area reduction, depth reduction, and color change, is further judged; the wound-healing trend is output automatically, assisting in making a wound treatment decision scheme.
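As an illustration of the continuous-measurement analysis described above, one hypothetical summary statistic over repeated assessments is the daily percentage area reduction (the function name and input format below are assumptions for illustration, not part of the disclosure):

```python
def healing_rate(measurements):
    """Percent wound-area reduction per day between the first and last
    of a series of (day, area) measurements, a hypothetical summary of
    the continuous measurement and data analysis described above."""
    (d0, a0) = measurements[0]
    (d1, a1) = measurements[-1]
    return 100.0 * (a0 - a1) / (a0 * (d1 - d0))
```

For example, a wound shrinking from 10 cm^2 to 5 cm^2 over ten days heals at 5 percent of its initial area per day.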
Obviously, many modifications, substitutions, and variations are possible in light of the above teachings of the invention, without departing from the basic technical spirit of the invention, as defined by the following claims.
The present invention will be described in further detail with reference to the following examples. This should not be understood as limiting the scope of the above-described subject matter of the present invention to the following examples. All the technologies realized based on the above contents of the present invention belong to the scope of the present invention.
Drawings
FIG. 1 is a logic block diagram of the system of the present invention;
FIG. 2 is a logic block diagram of an acquisition unit of the present invention;
FIG. 3 is a logic block diagram of a processing unit according to the present invention;
fig. 4 is an example of the system of example 1 evaluating a wound.
In the figure: 1. a collection unit; 2. a processing unit; 3. a cloud end; 4. an image acquisition module; 5. a data display module; 6. a marking module; 7. a processing module; 8. an error correction module; 9. a database.
Detailed Description
It should be noted that algorithms for data acquisition, transmission, storage, and processing that are not specifically described in the embodiment, as well as hardware structures and circuit connections that are not specifically described, may be implemented with content disclosed in the prior art.
Example 1
Referring to fig. 1-3, the invention provides the following technical solution. The self-iterative wound assessment system based on depth imaging and deep learning comprises an acquisition unit 1 and a processing unit 2; the processing unit 2 is connected with the acquisition unit 1 and a cloud 3. The acquisition unit 1 acquires wound pictures and sends them to the processing unit 2, and the processing unit 2 automatically judges the area and volume of the wound. The evaluation results of the processing unit 2 can also be corrected through the acquisition unit 1 and fed back to the processing unit 2 for self-iteration, continuously improving evaluation accuracy by evolving the model. The processing unit 2 can synchronously transmit the wound pictures and evaluation results to the cloud 3, which enables remote checking and modification of the evaluation results: the cloud 3 can check the evaluation results generated by the processing module 7 and can also correct them through the error correction module 8.
The acquisition unit 1 is an intelligent terminal capable of executing an AI program and having a network function, and comprises an image acquisition module 4, a data display module 5, and a marking module 6. The image acquisition module 4 comprises an intelligent terminal capable of executing an image acquisition program, together with that program, and is used to acquire wound image information; the marking module 6 is used to outline and mark images through an editing and marking program; the data display module 5 displays the data of the image acquisition module 4 and the marking module 6.
The processing unit 2 comprises a processing module 7, an error correction module 8, and a database 9. The database 9 contains a trained wound assessment model; the processing module 7 assesses the wound pictures acquired by the acquisition unit 1 according to that model and feeds the output result back to the acquisition unit 1; the error correction module 8 applies manual corrections to the assessment results of the processing module 7 and outputs the corrected results to the wound assessment model in the database 9 for iterative improvement.
The intelligent terminal includes a computing device with network functions capable of executing an AI program, a mobile phone, a PDA, or a PAD; the computing device includes a desktop computer, server, or workstation. The image acquisition module 4 is a depth camera. The software in the acquisition unit 1 and the processing unit 2 includes an APP, applet, H5, hybrid APP, and cloud APP. The database 9 contains a deep learning training engine, a deep learning analysis engine, and a data-set self-iteration program.
The image acquisition module 4 can acquire a planar image of the wound together with multi-point depth information for the image. The processing module 7 assesses the wound picture acquired by the acquisition unit 1 with the trained wound assessment model in the database 9 and outputs a coordinate set of the assessment result in the original planar image. From the image depth information it calculates the acquisition distance and the real scale ratio; from the real scale ratio and the coordinate set it calculates the wound area; and from the coordinate set, the acquired depth information, and the real scale ratio it calculates the wound volume. Finally, it synthesizes the wound assessment result with the original image and feeds the result back to the data display module 5 for visual display.
The system is divided into software and hardware. The hardware comprises: an intelligent terminal (acquisition unit 1) that can execute an image acquisition program, mark images by handwriting, touch, and similar means (marking module 6), and display data (data display module 5); a camera lens that can acquire depth information, referred to as a depth camera (image acquisition module 4); and a computing device with a network function that can execute an AI program, including but not limited to a server (processing module 7), which contains the deep learning training engine, the deep learning analysis engine, and the data-set self-iteration program.
The software comprises: collecting, editing and marking images and communicating programs; an AI program capable of analyzing and training the images and the labels; a well-trained wound assessment model.
The application process of the system comprises the following steps:
1. an operator uses an intelligent terminal (an acquisition unit 1) and a depth camera (an image acquisition module 4) to take a picture (image acquisition) of a wound;
2. acquiring a planar image of the wound and depth information of multiple points of the image by shooting;
3. the image and depth information are transmitted to the processing unit 2;
4. the processing module 7 performs wound assessment according to the trained wound assessment model in the database 9, and outputs a coordinate set of an assessment result in the original plane image;
5. according to the depth information, the processing module 7 calculates the acquisition distance and the real scale proportion;
6. according to the real scale proportion and the coordinate set, the processing module 7 calculates the area of the wound;
7. according to the coordinate set and the acquired depth information, the processing module 7 calculates the wound volume by combining the real scale proportion;
8. the processing module 7 synthesizes the wound evaluation result and the original image, and feeds back the wound evaluation result and the original image to the data display module 5 to be displayed in a visual mode, and also can feed back the wound evaluation result and the original image to the cloud 3;
9. the operator confirms the wound assessment result; the result image can be manually calibrated through the marking module 6, or calibrated through the cloud 3;
10. the terminal returns the result after evaluation and confirmation to the error correction module 8, and the approved result is put into the iteration data set in the database 9 through the error correction module 8;
11. the database 9 performs iterative training according to the iterative data set and the strategy, and improves the quality of the model.
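Steps 9-11 describe an error-correction loop. The sketch below illustrates one possible shape for such a loop; the class name, the fixed-threshold retraining "strategy", and the stubbed-out retraining step are all assumptions made for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class SelfIteratingDataset:
    """Minimal sketch of the error-correction / self-iteration loop:
    approved corrections accumulate in an iteration set, and retraining
    is triggered once enough corrections arrive (a hypothetical stand-in
    for the "strategy" in step 11)."""
    retrain_threshold: int = 32
    iteration_set: list = field(default_factory=list)
    retrain_count: int = 0

    def submit_correction(self, image_id, corrected_labels):
        # Step 10: the error correction module puts the approved result
        # into the iteration data set.
        self.iteration_set.append((image_id, corrected_labels))
        if len(self.iteration_set) >= self.retrain_threshold:
            self._retrain()

    def _retrain(self):
        # Step 11: iterative training on the accumulated corrections.
        # A real system would fine-tune the wound assessment model here;
        # this sketch only counts retraining rounds and clears the set.
        self.retrain_count += 1
        self.iteration_set.clear()
```

Each confirmed assessment from step 9 would be passed to `submit_correction`, so the model improves as corrections accumulate.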
An example of wound assessment using the system of the present embodiment is shown in fig. 4, where the left image is an example of a user taking an uploaded picture of a wound and the right image is an example of wound assessment.
It can be seen from this embodiment that the invention realizes a self-iterative wound assessment system based on depth imaging and deep learning. It can assist in judging acute/chronic wound healing and infection status, performs multi-dimensional wound assessment based on picture analysis, derives the wound-healing trend to assist medical staff in making a wound treatment decision scheme, and enables precise intervention according to the healing rate and infection status. The invention therefore has good clinical application prospects.

Claims (10)

1. A self-iterative wound assessment system based on depth camera and deep learning, comprising:
the acquisition unit (1), which is used for acquiring wound pictures and sending them to the processing unit (2), for correcting errors in the evaluation result of the processing unit (2), and for feeding the correction back to the processing unit (2);
the processing unit (2), which is used for automatically judging the area and the volume of the wound, for self-iterating according to the corrections from the acquisition unit (1) to evolve the model and continuously improve evaluation accuracy, and for synchronously transmitting the wound picture and the evaluation result to the cloud (3);
and the cloud end (3) is used for remotely checking and modifying the evaluation result.
2. The depth camera and deep learning based self-iterative wound assessment system of claim 1, wherein: the acquisition unit (1) is an intelligent terminal which can execute an AI program and has a network function.
3. The depth-camera and depth-learning based self-iterative wound assessment system of claim 2, wherein: the intelligent terminal comprises a computing device with a network function, a mobile phone, a PDA or a PAD, wherein the computing device can execute an AI program, and comprises a desktop computer, a server or a workstation.
4. The depth camera and deep learning based self-iterative wound assessment system of claim 1, wherein: the acquisition unit (1) comprises an image acquisition module (4), a data display module (5) and a marking module (6); the image acquisition module (4) comprises an intelligent terminal capable of executing an image acquisition program and the image acquisition program and is used for acquiring wound image information, the marking module (6) is used for drawing and marking images through an editing and marking program, and the data display module (5) is used for displaying data information of the image acquisition module (4) and the marking module (6).
5. The depth camera and deep learning based self-iterative wound assessment system of claim 4, wherein: the image acquisition module (4) is a depth camera.
6. The depth camera and deep learning based self-iterative wound assessment system of claim 4, wherein: the image acquisition module (4) can acquire a planar image of a wound and multi-point position depth information of the image.
7. The depth camera and deep learning based self-iterative wound assessment system of claim 1, wherein: the processing unit (2) comprises a processing module (7), an error correction module (8) and a database (9), the database (9) comprises a trained wound assessment model, the processing module (7) is used for carrying out wound assessment on wound pictures acquired by the acquisition unit (1) according to the wound assessment model in the database (9) and feeding output results back to the acquisition unit (1), and the error correction module (8) is used for carrying out manual error correction on assessment results of the processing module (7) and outputting corrected results to the wound assessment model in the database (9) for iterative improvement.
8. The depth camera and deep learning based self-iterative wound assessment system of claim 7, wherein: the database (9) comprises a deep learning training engine, a deep learning analysis engine and a data set self-iteration program.
9. The depth camera and deep learning based self-iterative wound assessment system of claim 7, wherein: the processing process of the processing module (7) comprises the following steps:
performing wound evaluation on the wound image according to the wound evaluation model, and outputting a coordinate set of a wound evaluation result in an original plane image;
calculating an acquisition distance and a real scale ratio according to the image depth information, calculating a wound area according to the real scale ratio and a coordinate set, and calculating a wound volume according to the coordinate set, the image depth information and the real scale ratio;
and synthesizing the wound evaluation result and the original image, feeding the result back to the acquisition unit (1) and displaying the result in a visual mode.
10. The depth camera and deep learning based self-iterative wound assessment system of claim 1, wherein: software in the acquisition unit (1) and the processing unit (2) comprises an APP, an applet, H5, a hybrid APP, and a cloud APP.
CN202210923919.2A 2021-11-24 2022-08-02 Self-iteration wound evaluation system based on deep camera shooting and deep learning Pending CN115511783A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111399643.4A CN114066872A (en) 2021-11-24 2021-11-24 Self-iteration wound evaluation system based on deep camera shooting and deep learning
CN2021113996434 2021-11-24

Publications (1)

Publication Number Publication Date
CN115511783A true CN115511783A (en) 2022-12-23

Family

ID=80276002

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111399643.4A Pending CN114066872A (en) 2021-11-24 2021-11-24 Self-iteration wound evaluation system based on deep camera shooting and deep learning
CN202210923919.2A Pending CN115511783A (en) 2021-11-24 2022-08-02 Self-iteration wound evaluation system based on deep camera shooting and deep learning

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111399643.4A Pending CN114066872A (en) 2021-11-24 2021-11-24 Self-iteration wound evaluation system based on deep camera shooting and deep learning

Country Status (1)

Country Link
CN (2) CN114066872A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663699A (en) * 2022-03-08 2022-06-24 中南大学湘雅医院 Method for identifying wound injured tissue type and predicting wound healing time with high precision
CN115170629A (en) * 2022-09-08 2022-10-11 杭州海康慧影科技有限公司 Wound information acquisition method, device, equipment and storage medium
CN115252124B (en) * 2022-09-27 2022-12-20 山东博达医疗用品股份有限公司 Suture usage estimation method and system based on injury picture data analysis

Also Published As

Publication number Publication date
CN114066872A (en) 2022-02-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination