CN114913153A - Deep learning technology-based wound identification and area measurement system and method - Google Patents
- Publication number
- CN114913153A (application number CN202210526309.9A)
- Authority
- CN
- China
- Prior art keywords
- wound
- area
- image
- processing equipment
- intelligent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1072—Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1075—Measuring physical dimensions, e.g. size of the entire body or parts thereof for measuring dimensions by non-invasive methods, e.g. for determining thickness of tissue layer
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1079—Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a wound identification and area measurement system and method based on deep learning technology. Image shooting and intelligent image processing equipment carrying a camera, together with an a priori two-dimensional figure of known area, provides the prior information for wound-area measurement: a relation function between pixel count and shooting height is obtained, and a deep convolutional neural network model is constructed. The camera photographs the wound region while a laser ranging sensor measures the shooting height, and both are transmitted to an intelligent wound-assessment terminal. An image-recognition semantic-segmentation module extracts the contour of the wound region and obtains its pixel count, and the wound area is computed by combining the area of the a priori figure with the pixel-count-versus-height relation function. Because the artificial-intelligence deep convolutional neural network model segments the original wound image and the pixel-count-height relation serves as prior information for area measurement, the system can work without a reference object in the image, measure accurately, and transmit results back in real time.
Description
Technical Field
The invention relates to the field of wound image processing, and in particular to a wound identification and area measurement system and method based on deep learning technology.
Background
Advances in information technology mean that wound treatment in emergency environments now emphasizes immediate rescue, stable resuscitation, and continuous, seamless medical care after injury more than traditional firearm-wound treatment did, so medical systems built on digitized casualty information need to be strengthened. In the clinical field, wound area is a key factor in judging wound-healing characteristics, and improvements in information equipment such as image shooting, intelligent image processing, and storage assist the management and treatment of patients with chronic wounds. Current wound-measurement methods in the medical industry include the linear measurement method, the wound-tracing method, and the body-surface-area measurement method, each with drawbacks: when the wound is irregular or has many curved edges, its shape is hard to trace and its area is difficult to measure; dimensions must be measured in multiple directions, which takes a long time; and the transparent plastic film used must contact the wound, risking infection. Body-surface-area measurement estimates the wound area with a gridded film; the method is quick and convenient, but the measured area is larger than the actual one.
In addition, there is a medical wound-area measurement system that photographs a patient's wound, automatically calculates the wound size from the photograph and a reference object, generates a case report from the wound photograph, and uploads both to a server. However, because this system estimates the wound area from the size relationship between the reference object and the wound, every captured wound image must contain a reference object or scale, which limits its practicality.
Disclosure of Invention
To address these problems in the prior art, the invention provides a wound identification and area measurement system and method based on deep learning technology. It meets the requirements of wound identification and accurate area measurement in emergency environments and the clinical field, removes the constraint of a reference object so that wound segmentation and area measurement become more convenient and effective, and solves the difficulty of measuring irregular wound shapes with gradually changing features.
One objective of the present invention is to provide a wound identification and area measurement system based on deep learning technology.
The wound identification and area measurement system based on deep learning technology disclosed by the invention comprises: image shooting and intelligent image processing equipment, a laser ranging sensor, and an intelligent wound-assessment terminal. The image shooting and intelligent image processing equipment carries a camera. The laser ranging sensor is fixed on the equipment and connected to the intelligent wound-assessment terminal through a data line; the equipment itself is connected to the terminal through wireless communication. The intelligent wound-assessment terminal comprises an image-recognition semantic-segmentation module, a prior-information module, an area-calculation module, and an interactive interface. A trained deep convolutional neural network model is stored in the image-recognition semantic-segmentation module; the prior-information module stores a relation function between pixel count and height obtained by measuring an a priori two-dimensional figure, which is a two-dimensional figure of known area S.
the image recognition semantic segmentation module adopts a deep learning framework to construct a depth convolution neural network model, acquires a plurality of wound pictures, adopts a deep learning marking tool to mark wound region characteristic information of the wound pictures, marks the outline and the area of the wound region, generates wound picture training sets on the marked wound pictures in batches, and trains the depth convolution neural network model; the data of the wound picture training set is enhanced, so that the enhanced data integration multiple is enlarged; continuously adjusting learning parameters to improve the accuracy and enable the loss function to approach the minimum value, so as to obtain a trained deep convolutional neural network model;
the image shooting and image intelligent processing equipment shoots images of the prior two-dimensional graph at a series of different heights, and the laser ranging sensor obtains a series of heights h of the corresponding image shooting and image intelligent processing equipment 1 、h 2 、....、h N N is the collection times, and N is a natural number; the image shooting and image intelligent processing equipment respectively transmits the image of the prior two-dimensional graph and the corresponding height of the laser ranging sensor to the wound assessment intelligent terminal; the pixel number lambda occupied by the image of the prior two-dimensional graph at the corresponding height is obtained by a prior information module of the wound assessment intelligent terminal 1 、λ 2 、…、λ N Obtaining a plurality of groups of corresponding heights and pixel numbers; fitting a plurality of groups of corresponding heights and pixel numbers to obtain a relation function between the pixel numbers and the heights;
the image shooting and image intelligent processing equipment is positioned above the wound area, the camera carried by the image shooting and image intelligent processing equipment collects the image of the wound area, and the laser ranging sensor measures the height h between the image shooting and image intelligent processing equipment and the wound area at the moment x (ii) a The image shooting and image intelligent processing equipment respectively transmits the image of the prior two-dimensional graph and the corresponding height of the laser ranging sensor to the wound assessment intelligent terminal; an image recognition semantic segmentation module of the wound assessment intelligent terminal performs wound area characteristic information contour segmentation on the image of the wound area by adopting a trained deep convolutional neural network model, extracts the contour of the wound area and obtains the pixel number lambda of the wound area x (ii) a The area calculation module of the intelligent wound assessment terminal calculates the pixel number lambda of the wound area according to the calculated pixel number lambda x And combining the area S of the prior two-dimensional graph and the pixel number and height relation function obtained by the prior information module to obtain the area of the wound area at any height, and displaying the area of the wound area by the interactive interface.
The wound-picture training set contains 100 to 10000 pictures, and data augmentation enlarges the set by a factor of 2 to 10.
The corresponding (height, pixel-count) pairs are fitted using one of the least-squares method, Lagrange interpolation, or Newton interpolation.
The image shooting and intelligent image processing equipment is a smartphone or a tablet computer.
The a priori two-dimensional figure is a regularly shaped figure.
The invention also aims to provide a wound identification and area measurement method based on the deep learning technology.
The invention discloses a wound identification and area measurement method based on a deep learning technology, which comprises the following steps:
1) wound identification and area measurement system construction:
the laser ranging sensor is fixed on the image shooting and image intelligent processing equipment and is connected to the wound assessment intelligent terminal through a data line; the image shooting and image intelligent processing equipment is connected to the wound assessment intelligent terminal through wireless communication; the intelligent wound assessment terminal comprises an image recognition semantic segmentation module, a prior information module, an area calculation module and an interactive interface; a trained deep convolution neural network model is stored in the image recognition semantic segmentation module; a priori information module stores a relation function of the pixel number and the height obtained by adopting the measurement of a priori two-dimensional graph, wherein the priori two-dimensional graph is a two-dimensional graph with a known area S;
2) constructing a deep convolutional neural network model:
the image recognition semantic segmentation module adopts a deep learning framework to construct a depth convolution neural network model, acquires a plurality of wound pictures, adopts a deep learning marking tool to mark wound region characteristic information of the wound pictures, marks the outline and the area of the wound region, generates wound picture training sets on the marked wound pictures in batches, and trains the depth convolution neural network model; the data of the wound picture training set is enhanced, so that the enhanced data integration multiple is enlarged; continuously adjusting learning parameters to improve the accuracy and enable the loss function to approach the minimum value, so as to obtain a trained deep convolutional neural network model;
3) obtaining prior information of area measurement of a wound area:
the image shooting and image intelligent processing equipment shoots images of the prior two-dimensional graph at a series of different heights, and the laser ranging sensor obtains a series of heights h of the corresponding image shooting and image intelligent processing equipment 1 、h 2 、...、h N N is the collection times, and N is a natural number; the image shooting and image intelligent processing equipment respectively transmits the image of the prior two-dimensional graph and the corresponding height of the laser ranging sensor to the wound assessment intelligent terminal; the prior information module of the intelligent wound assessment terminal obtains the pixel number lambda occupied by the image of the prior two-dimensional graph at the corresponding height 1 、λ 2 、…、λ N Obtaining a plurality of groups of corresponding heights and pixel numbers; fitting a plurality of groups of corresponding heights and pixel numbers to obtain a relation function between the pixel numbers and the heights;
4) The image shooting and intelligent image processing equipment is positioned above the wound region; its camera collects an image of the wound region while the laser ranging sensor measures the current height hx of the equipment above the wound. The equipment transmits the wound-region image and the corresponding height from the laser ranging sensor to the intelligent wound-assessment terminal. The terminal's image-recognition semantic-segmentation module applies the trained deep convolutional neural network model to segment the wound-region feature contour, extracts the contour of the wound region, and obtains its pixel count λx. The terminal's area-calculation module then combines λx with the area S of the a priori figure and the pixel-count-versus-height relation function from the prior-information module to obtain the wound area at any height, and the interactive interface displays it.
In step 2), the wound-picture training set contains 100 to 10000 pictures, and data augmentation enlarges the set by a factor of 2 to 10.
In step 3), the corresponding (height, pixel-count) pairs are fitted using one of the least-squares method, Lagrange interpolation, or Newton interpolation.
The invention has the advantages that:
according to the invention, the original wound image is segmented by using the artificial intelligent deep convolution neural network model, and the area of the wound is measured by taking the pixel number and height relation function as prior information, so that the requirements of separating from a reference object, accurately measuring and transmitting back in real time can be met.
Drawings
FIG. 1 is a block diagram of a deep learning technique based wound identification and area measurement system according to the present invention;
FIG. 2 is a schematic diagram of a system for wound identification and area measurement based on deep learning techniques of the present invention for obtaining prior information;
fig. 3 is a flowchart of a method for wound identification and area measurement based on deep learning technology according to the present invention.
Detailed Description
The invention is further elucidated by the specific embodiments below, with reference to the drawings.
As shown in fig. 1, the wound identification and area measurement system of this embodiment includes: image shooting and intelligent image processing equipment 1, a camera 2, a laser ranging sensor 3, and an intelligent wound-assessment terminal 4. The equipment 1 carries the camera 2. The laser ranging sensor 3 is fixed on the equipment 1 and connected to the terminal 4 through a data line, over which the terminal 4 supplies power to the sensor 3 and the sensor's height data are transmitted back to the terminal 4. The equipment 1 is connected to the terminal 4 through wireless communication. The intelligent wound-assessment terminal 4 comprises an image-recognition semantic-segmentation module, a prior-information module, an area-calculation module, and an interactive interface; a trained deep convolutional neural network model is stored in the segmentation module, and the prior-information module stores the relation function between pixel count and height obtained by measuring the a priori two-dimensional figure 5, which is a triangle, square, or circle of known area S. The image shooting and intelligent image processing equipment 1 is a smartphone.
The wound identification and area measurement method of this embodiment, shown in fig. 3, includes the following steps:
1) a wound identification and area measurement system is set up, as shown in fig. 1;
2) constructing a deep convolutional neural network model:
the image recognition semantic segmentation module adopts a deep learning framework to construct an STDC-Net deep convolution neural network model, firstly, a plurality of known wound pictures are obtained to serve as training data sets, a mode of collecting wound pictures on the internet or shooting the wound pictures through experiments is adopted as a creating mode, then, a deep learning marking tool Lableme is adopted to mark wound region characteristic information on the wound pictures, the outline and the area of a wound region are marked, the marked wound pictures are generated into wound picture training sets in batches, the number is 520, and the deep convolution neural network model is trained; meanwhile, the wound picture training set is subjected to rotation, scaling, random brightness and random saturation processing, data enhancement is carried out, the integrated multiple of the enhanced data is expanded, and the number of the expanded data is 3500; the parameters of the STDC-Net deep convolution neural network model are set as follows: the iteration times are 1620 times, and the learning rate is 0.01; continuously adjusting learning parameters to improve the accuracy and enable the loss function to approach the minimum value, so as to obtain a trained deep convolutional neural network model;
3) obtaining prior information of area measurement of a wound area:
the image shooting and image intelligent processing equipment 1 shoots images of the prior two-dimensional graph at a series of different heights, and the laser ranging sensor 3 obtains a series of heights h of the corresponding image shooting and image intelligent processing equipment 1 1 、h 2 、h 3 、....、h N N is the collection times, and N is a natural number more than or equal to 30; the image shooting and image intelligent processing device 1 is toThe image of the prior two-dimensional graph and the corresponding height of the laser ranging sensor 3 are respectively transmitted to the wound assessment intelligent terminal 4; the prior information module of the intelligent wound assessment terminal 4 obtains the pixel number lambda occupied by the image of the prior two-dimensional graph at the corresponding height 1 、λ 2 、λ 3 、…、λ N Obtaining a plurality of groups of corresponding heights and pixel numbers; fitting multiple groups of corresponding heights and pixel numbers to obtain a function λ ═ f (h) of the relation between the pixel numbers and the heights, wherein λ is the pixel number, h is the height, and f represents the function, as shown in fig. 2;
4) The image shooting and intelligent image processing equipment 1 is positioned above the wound region; its camera 2 collects an image of the wound region while the laser ranging sensor 3 measures the current height hx of the equipment 1 above the wound. The equipment 1 transmits the wound-region image and the corresponding height from the laser ranging sensor 3 to the intelligent wound-assessment terminal 4. The terminal's image-recognition semantic-segmentation module applies the trained deep convolutional neural network model to segment the wound-region feature contour, extracts the contour of the wound region, and obtains its pixel count λx. The area-calculation module of the terminal 4 then combines λx with the area S of the a priori figure and the pixel-count-versus-height relation function from the prior-information module to obtain the wound area at any height, and the interactive interface displays it.
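Step 4) can be sketched end to end: count the wound pixels in the binary segmentation mask, then convert them to area using the calibrated relation λ = f(h) from step 3). The mask, height, figure area, and fitted coefficient below are assumptions for illustration:

```python
import numpy as np

def measure(mask, h_x, S, f):
    """Measurement step 4) as a sketch: count wound pixels in the
    binary segmentation mask, then convert to area using the known
    figure area S and the fitted relation f(h) from step 3)."""
    lambda_x = int(np.count_nonzero(mask))
    return lambda_x * S / f(h_x)

# Assumed calibration result from step 3): f(h) = a / h^2 with a = 9.0e6.
f = lambda h: 9.0e6 / h**2
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 100:300] = 1             # stand-in wound mask: 100 x 200 pixels
print(measure(mask, 30.0, 25.0, f))  # 20000 * 25 / f(30) = 50.0 (cm^2)
```

In the deployed system the mask would come from the STDC-Net segmentation output and hx from the laser ranging sensor; only those two inputs change.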
Finally, it is noted that the disclosed embodiments are intended to aid further understanding of the invention, and those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to the embodiments disclosed; its scope is defined by the appended claims.
Claims (7)
1. A wound identification and area measurement system based on deep learning technology, comprising: image shooting and intelligent image processing equipment, a laser ranging sensor, and an intelligent wound-assessment terminal; the image shooting and intelligent image processing equipment carries a camera; the laser ranging sensor is fixed on the equipment and connected to the intelligent wound-assessment terminal through a data line; the equipment is connected to the terminal through wireless communication; the intelligent wound-assessment terminal comprises an image-recognition semantic-segmentation module, a prior-information module, an area-calculation module, and an interactive interface; a trained deep convolutional neural network model is stored in the image-recognition semantic-segmentation module; and the prior-information module stores a relation function between pixel count and height obtained by measuring an a priori two-dimensional figure, the a priori two-dimensional figure being a two-dimensional figure of known area S;
the image recognition semantic segmentation module adopts a deep learning framework to construct a depth convolution neural network model, acquires a plurality of wound pictures, adopts a deep learning marking tool to mark wound region characteristic information of the wound pictures, marks the outline and the area of the wound region, generates wound picture training sets on the marked wound pictures in batches, and trains the depth convolution neural network model; the data of the wound picture training set is enhanced, so that the enhanced data integration multiple is enlarged; continuously adjusting learning parameters to improve the accuracy and enable the loss function to approach the minimum value, so as to obtain a trained deep convolutional neural network model;
the image shooting and image intelligent processing equipment shoots images of the prior two-dimensional graph at a series of different heights, and the laser ranging sensor obtains a series of heights h of the corresponding image shooting and image intelligent processing equipment 1 、h 2 、....、h N N is the collection times, and N is a natural number; the image shooting and image intelligent processing equipment respectively transmits the image of the prior two-dimensional graph and the corresponding height of the laser ranging sensor to the wound assessment intelligent terminal; priori information module obtaining of intelligent wound assessment terminalNumber of pixels lambda occupied by image corresponding to prior two-dimensional figure at height 1 、λ 2 、…、λ N Obtaining a plurality of groups of corresponding heights and pixel numbers; fitting a plurality of groups of corresponding heights and pixel numbers to obtain a relation function between the pixel numbers and the heights;
the image capturing and intelligent image processing device is positioned above the wound region; its camera acquires an image of the wound region, and the laser ranging sensor measures the height h_x between the device and the wound region at that moment; the image capturing and intelligent image processing device transmits the image of the wound region and the corresponding height from the laser ranging sensor to the intelligent wound assessment terminal; the image recognition semantic segmentation module of the intelligent wound assessment terminal performs wound region feature contour segmentation on the image with the trained deep convolutional neural network model, extracts the contour of the wound region, and obtains the pixel count λ_x of the wound region; the area calculation module of the intelligent wound assessment terminal obtains the area of the wound region at any height from the pixel count λ_x, the area S of the prior two-dimensional figure, and the pixel-count-height relation function obtained by the prior information module; the interactive interface displays the area of the wound region.
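The area calculation in claim 1 reduces to a proportion: at height h_x the fitted relation λ(h) predicts how many pixels the known-area prior figure S would occupy, so the wound area follows as S · λ_x / λ(h_x). A minimal sketch of that step (function and parameter names such as `wound_area` are illustrative, not part of the claims):

```python
def wound_area(lambda_x, height_x, S, pixels_at_height):
    """Wound area from its pixel count, the camera height, and the
    prior two-dimensional figure of known area S.

    pixels_at_height: the fitted relation function lambda(h) -- pixels
    the prior figure of area S would occupy at height h.
    """
    # Each pixel at height h_x covers S / lambda(h_x) units of area,
    # so the wound area is its pixel count times that per-pixel area.
    return S * lambda_x / pixels_at_height(height_x)

# Example with a synthetic relation function: the 25 cm^2 prior figure
# occupies 10000/h^2 pixels at height h (metres).
area = wound_area(lambda_x=2500, height_x=2.0, S=25.0,
                  pixels_at_height=lambda h: 10000.0 / h**2)
# At h = 2.0 the prior figure would occupy 2500 pixels for 25 cm^2,
# so a 2500-pixel wound also measures 25 cm^2.
```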
2. The wound identification and area measurement system of claim 1, wherein the wound picture training set contains 100 to 10000 pictures, and the augmentation expands the training set by a factor of 2 to 10.
3. The wound identification and area measurement system of claim 1, wherein the prior two-dimensional figure is a regularly shaped figure.
4. An identification and measurement method using the wound identification and area measurement system based on deep learning technology of claim 1, the method comprising the following steps:
1) wound identification and area measurement system construction:
the laser ranging sensor is fixed on the image capturing and intelligent image processing device and is connected to the intelligent wound assessment terminal through a data line; the image capturing and intelligent image processing device is connected to the intelligent wound assessment terminal through wireless communication; the intelligent wound assessment terminal comprises an image recognition semantic segmentation module, a prior information module, an area calculation module and an interactive interface; the image recognition semantic segmentation module stores a trained deep convolutional neural network model; the prior information module stores a relation function between pixel count and height, obtained by measuring a prior two-dimensional figure, the prior two-dimensional figure being a two-dimensional figure of known area S;
2) constructing a deep convolutional neural network model:
the image recognition semantic segmentation module constructs a deep convolutional neural network model with a deep learning framework; a plurality of wound pictures are acquired, and a deep learning annotation tool is used to label the wound region feature information of the wound pictures, marking the contour and area of each wound region; the labeled wound pictures are batch-generated into a wound picture training set, which is used to train the deep convolutional neural network model; the data of the wound picture training set is augmented so that the training set is expanded by the augmentation multiple; the learning parameters are continuously adjusted to improve accuracy and drive the loss function toward its minimum, yielding the trained deep convolutional neural network model;
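Step 2) expands the labeled set by an augmentation multiple (claim 6 bounds it at 2 to 10). A minimal sketch with numpy, applying each geometric transform identically to a picture and its label mask so the pixel-level labels stay valid (function names are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def augment(images_and_masks, multiple=4):
    """Expand a labeled wound-picture set by `multiple` using geometric
    transforms applied identically to each picture and its label mask."""
    # Flips and 90-degree rotations give up to 8 label-preserving
    # variants; photometric jitter on the picture alone (brightness,
    # noise) could extend the multiple toward the claimed bound of 10.
    transforms = [
        lambda a: a,                          # identity
        lambda a: np.flip(a, axis=0),         # vertical flip
        lambda a: np.flip(a, axis=1),         # horizontal flip
        lambda a: np.rot90(a, 1),             # 90-degree rotation
        lambda a: np.rot90(a, 2),             # 180-degree rotation
        lambda a: np.rot90(a, 3),             # 270-degree rotation
        lambda a: np.flip(np.rot90(a, 1), axis=0),
        lambda a: np.flip(np.rot90(a, 1), axis=1),
    ]
    assert 2 <= multiple <= len(transforms)
    out = []
    for img, mask in images_and_masks:
        for t in transforms[:multiple]:
            out.append((t(img), t(mask)))
    return out

# 100 labeled pictures expanded 4-fold to 400.
dataset = [(np.zeros((64, 64, 3)), np.zeros((64, 64)))] * 100
augmented = augment(dataset, multiple=4)
```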
3) obtaining the prior information for area measurement of the wound region:
the image capturing and intelligent image processing device captures images of the prior two-dimensional figure at a series of different heights, and the laser ranging sensor obtains the corresponding series of device heights h_1, h_2, …, h_N, where N is the number of acquisitions and a natural number; the image capturing and intelligent image processing device transmits the images of the prior two-dimensional figure and the corresponding heights from the laser ranging sensor to the intelligent wound assessment terminal; the prior information module of the intelligent wound assessment terminal obtains the pixel counts λ_1, λ_2, …, λ_N occupied by the image of the prior two-dimensional figure at each corresponding height, yielding a plurality of corresponding height and pixel-count pairs; these pairs are fitted to obtain the relation function between pixel count and height;
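Under a pinhole-camera assumption, the prior figure's pixel count falls off roughly as the inverse square of the height, so the fit in step 3) can be a least-squares fit of λ against 1/h². This specific model is an assumption for illustration; the claims only require some fitted relation function:

```python
import numpy as np

# Calibration pairs: heights h_1..h_N (metres) and pixel counts
# lambda_1..lambda_N of the prior figure -- synthetic data here,
# generated as lambda = 40000/h^2.
heights = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
pixel_counts = 40000.0 / heights**2

# Least-squares fit of lambda = a*(1/h^2) + b, which is linear in
# the substituted variable x = 1/h^2.
a, b = np.polyfit(1.0 / heights**2, pixel_counts, deg=1)

def pixels_at_height(h):
    """Fitted relation function lambda(h) of the prior information module."""
    return a / h**2 + b

# The fit reproduces the calibration data, e.g. ~10000 pixels at h = 2.0.
```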
4) the image capturing and intelligent image processing device is positioned above the wound region; its camera acquires an image of the wound region, and the laser ranging sensor measures the height h_x between the device and the wound region; the image capturing and intelligent image processing device transmits the image of the wound region and the corresponding height from the laser ranging sensor to the intelligent wound assessment terminal; the image recognition semantic segmentation module of the intelligent wound assessment terminal performs wound region feature contour segmentation on the image with the trained deep convolutional neural network model, extracts the contour of the wound region, and obtains the pixel count λ_x of the wound region; the area calculation module of the intelligent wound assessment terminal obtains the area of the wound region at any height from the pixel count λ_x, the area S of the prior two-dimensional figure, and the pixel-count-height relation function obtained by the prior information module; the interactive interface displays the area of the wound region.
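In step 4) the pixel count λ_x is simply the number of foreground pixels in the segmentation mask produced by the network. A sketch, assuming the model outputs a binary mask (numpy stands in here for the output of whatever deep learning framework is used):

```python
import numpy as np

def wound_pixel_count(mask):
    """Pixel count lambda_x of the wound region from a binary
    segmentation mask (1 = wound, 0 = background)."""
    return int(np.count_nonzero(mask))

# Example: a 100x100 mask whose central 50x50 block is wound.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[25:75, 25:75] = 1
lam_x = wound_pixel_count(mask)  # 50 * 50 = 2500 wound pixels
```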
5. The identification and measurement method according to claim 4, wherein in step 2), the wound picture training set contains 100 to 10000 pictures.
6. The identification and measurement method according to claim 4, wherein in step 2), the augmentation expands the training set by a factor of 2 to 10.
7. The identification and measurement method according to claim 4, wherein in step 3), the plurality of corresponding height and pixel-count pairs are fitted by one of the least squares method, Lagrange interpolation and Newton interpolation.
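Claim 7 allows Lagrange interpolation as the fitting method in step 3). A minimal pure-Python sketch interpolating pixel count as a function of height through the calibration nodes (illustrative only; the patent does not specify an implementation):

```python
def lagrange_interpolate(xs, ys):
    """Return the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) as a callable -- here, pixel count vs. height."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            # Basis polynomial L_i(x): equals 1 at x_i, 0 at other nodes.
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

heights = [1.0, 1.5, 2.0, 2.5]                       # h_1..h_N
pixel_counts = [40000.0, 17777.8, 10000.0, 6400.0]   # lambda_1..lambda_N
lam = lagrange_interpolate(heights, pixel_counts)
# The interpolant passes exactly through every calibration node.
```

One design note: Lagrange interpolation reproduces the calibration points exactly but can oscillate between nodes, which is why the claim also offers least squares as an alternative.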
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210526309.9A CN114913153A (en) | 2022-05-16 | 2022-05-16 | Deep learning technology-based wound identification and area measurement system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114913153A true CN114913153A (en) | 2022-08-16 |
Family
ID=82767009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210526309.9A Withdrawn CN114913153A (en) | 2022-05-16 | 2022-05-16 | Deep learning technology-based wound identification and area measurement system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114913153A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115170629A (en) * | 2022-09-08 | 2022-10-11 | 杭州海康慧影科技有限公司 | Wound information acquisition method, device, equipment and storage medium |
CN118037698A (en) * | 2024-03-13 | 2024-05-14 | 中国人民解放军总医院第一医学中心 | Wound intelligent monitoring and management system for field operations |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112184705B (en) | Human body acupuncture point identification, positioning and application system based on computer vision technology | |
CN114913153A (en) | Deep learning technology-based wound identification and area measurement system and method | |
KR20220066366A (en) | Predictive individual 3D body model | |
WO2021000423A1 (en) | Pig weight measurement method and apparatus | |
CN109176512A (en) | A kind of method, robot and the control device of motion sensing control robot | |
CN109815865B (en) | Water level identification method and system based on virtual water gauge | |
WO2017133009A1 (en) | Method for positioning human joint using depth image of convolutional neural network | |
WO2020103417A1 (en) | Bmi evaluation method and device, and computer readable storage medium | |
CN108564120B (en) | Feature point extraction method based on deep neural network | |
KR20220160066A (en) | Image processing method and apparatus | |
CN111012353A (en) | Height detection method based on face key point recognition | |
KR20220024494A (en) | Method and system for human monocular depth estimation | |
CN112597847B (en) | Face pose estimation method and device, electronic equipment and storage medium | |
CN116645697A (en) | Multi-view gait recognition method and device, electronic equipment and storage medium | |
CN108230402A (en) | A kind of stereo calibration method based on trigone Based On The Conic Model | |
CN112184898A (en) | Digital human body modeling method based on motion recognition | |
CN116152697A (en) | Three-dimensional model measuring method and related device for concrete structure cracks | |
CN110910449A (en) | Method and system for recognizing three-dimensional position of object | |
CN114463663A (en) | Method and device for calculating height of person, electronic equipment and storage medium | |
CN113989295A (en) | Scar and keloid image cutting and surface area calculating method and system | |
CN113222939B (en) | Food image volume calculation method based on thumbnail calibration | |
CN113408531B (en) | Target object shape frame selection method and terminal based on image recognition | |
CN116974369B (en) | Method, system, equipment and storage medium for operating medical image in operation | |
CN116758124B (en) | 3D model correction method and terminal equipment | |
CN114495199B (en) | Organ positioning method, organ positioning device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20220816 |