CN115170629A - Wound information acquisition method, device, equipment and storage medium


Info

Publication number
CN115170629A
Authority
CN
China
Prior art keywords
wound
image
information
pixel point
pixel
Legal status
Pending
Application number
CN202211092134.1A
Other languages
Chinese (zh)
Inventor
刘陈林
Current Assignee
Hangzhou Haikang Huiying Technology Co., Ltd.
Original Assignee
Hangzhou Haikang Huiying Technology Co., Ltd.
Application filed by Hangzhou Haikang Huiying Technology Co., Ltd.
Priority to CN202211092134.1A
Publication of CN115170629A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application provides a wound information acquisition method, apparatus, device, and storage medium. It relates to the medical field and can acquire wound information quickly and accurately. The method comprises the following steps: acquiring a first image and a second image obtained by photographing a target object, where the target object comprises a wound surface and/or a tool that has probed the depth of the wound, the first image is a planar image containing the target object, and the second image is a depth map containing the target object, in which the depth value of a pixel point represents the distance in physical space from the point corresponding to that pixel to the camera lens; determining wound information based on the first image and the second image, where the wound information indicates the attribute characteristic parameters of the wound; and prompting the wound information. The present application can be used in the process of obtaining wound information.

Description

Wound information acquisition method, device, equipment and storage medium
Technical Field
The present application relates to the field of medical treatment, and in particular, to a method, an apparatus, a device, and a storage medium for acquiring wound information.
Background
In a medical setting, it is often necessary to obtain a patient's wound information in order to develop a treatment and care plan. For example, in pressure sore nursing, medical staff need to measure information such as the size, area, and color of a wound to evaluate the severity of the injury and then formulate a targeted treatment and nursing plan.
At present, wound information is generally measured and evaluated manually by medical staff and then uploaded to a medical information system so that doctors can formulate response plans based on it. However, manual measurement is subjective and imprecise, places high demands on the experience of the medical staff, and is inefficient.
Disclosure of Invention
The present application provides a wound information acquisition method, apparatus, device, and storage medium that can acquire wound information quickly and accurately.
In a first aspect, the present application provides a wound information acquisition method, the method comprising: acquiring a first image and a second image obtained by photographing a target object, where the target object comprises a wound surface and/or a tool that has probed the depth of the wound, the first image is a planar image containing the target object, and the second image is a depth map containing the target object, in which the depth value of a pixel point represents the distance in physical space from the point corresponding to that pixel to the camera lens; determining wound information based on the first image and the second image, where the wound information indicates the attribute characteristic parameters of the wound; and prompting the wound information.
The wound information acquisition method provided by the present application determines wound information from a planar image and a depth map taken of the wound surface and of a tool that has probed the depth of the wound. Compared with the traditional manual approach to measuring wound information, this scheme not only achieves high measurement accuracy but also greatly improves measurement efficiency. Moreover, medical staff do not need extensive measurement experience, since the wound information is acquired without manual measurement.
In one possible implementation, the target object includes a wound surface, the wound information includes the size and/or area of the wound, and determining the wound information based on the first image and the second image includes: identifying the contour of the wound in the first image; mapping the contour of the wound onto the second image to obtain the depth values of a plurality of pixel points on the contour of the wound and the orientation information of those pixel points, where the orientation information of a pixel point indicates the orientation, relative to the camera lens, of the point in physical space corresponding to that pixel; and determining the size and/or area of the wound according to the depth values and orientation information of the plurality of pixel points on the contour of the wound.
In another possible implementation, the target object includes a tool that has probed the depth of the wound, the wound information includes the depth of the wound, and determining the wound information based on the first image and the second image includes: identifying a first pixel point and a second pixel point in the first image, where the first pixel point and the second pixel point indicate the start and end positions, along the axial direction of the tool, of the bloodstain area on the tool; mapping the first and second pixel points onto the second image to obtain their depth values and orientation information, where the orientation information of a pixel point indicates the orientation, relative to the camera lens, of the point in physical space corresponding to that pixel; and determining the distance between the first pixel point and the second pixel point as the depth of the wound according to their depth values and orientation information.
In yet another possible implementation, when the target object includes a wound surface, the wound information further includes a fluid leakage condition and/or the color of the wound, and the method further includes: determining the fluid leakage condition of the wound and/or the color of the wound based on the first image, where the fluid leakage condition is characterized by the leakage area and/or the leakage amount, and the color of the wound is characterized by pixel values and/or a color name. When the wound information includes the fluid leakage condition, prompting the wound information includes marking the leakage area and/or leakage amount of the wound; when the wound information includes the color of the wound, prompting the wound information includes labeling the wound with the pixel values and/or the color name.
In yet another possible implementation, prompting the wound information includes: displaying the target area where the wound is located, and/or labeling the wound information.
In a second aspect, the present application provides a wound information acquisition apparatus comprising an acquisition module, a determination module, and a prompting module. The acquisition module is used to acquire a first image and a second image obtained by photographing a target object, where the target object comprises a wound surface and/or a tool that has probed the depth of the wound, the first image is a planar image containing the target object, and the second image is a depth map containing the target object, in which the depth value of a pixel point represents the distance in physical space from the point corresponding to that pixel to the camera lens. The determination module is used to determine wound information based on the first image and the second image, where the wound information indicates the attribute characteristic parameters of the wound. The prompting module is used to prompt the wound information.
In one possible implementation, the target object includes a wound surface, the wound information includes the size and/or area of the wound, and the determination module is specifically configured to: identify the contour of the wound in the first image; map the contour of the wound onto the second image to obtain the depth values and orientation information of a plurality of pixel points on the contour of the wound, where the orientation information of a pixel point indicates the orientation, relative to the camera lens, of the point in physical space corresponding to that pixel; and determine the size and/or area of the wound according to those depth values and that orientation information.
In another possible implementation, the target object includes a tool that has probed the depth of the wound, the wound information includes the depth of the wound, and the determination module is specifically configured to: identify a first pixel point and a second pixel point in the first image, which indicate the start and end positions, along the axial direction of the tool, of the bloodstain area on the tool; map the first and second pixel points onto the second image to obtain their depth values and orientation information; and determine the distance between the first pixel point and the second pixel point as the depth of the wound according to their depth values and orientation information.
In yet another possible implementation, when the target object includes a wound surface, the wound information further includes a fluid leakage condition and/or the color of the wound. The determination module is further configured to determine the fluid leakage condition of the wound and/or the color of the wound based on the first image, where the fluid leakage condition is characterized by the leakage area and/or the leakage amount, and the color of the wound is characterized by pixel values and/or a color name. When the wound information includes the fluid leakage condition, the prompting module is specifically configured to mark the leakage area and/or leakage amount of the wound; when the wound information includes the color of the wound, the prompting module is specifically configured to label the wound with the pixel values and/or the color name.
In another possible implementation, the prompting module is specifically configured to display the target area where the wound is located and to label the wound information.
In a third aspect, the present application provides a wound information acquisition device comprising a camera and a ranging module, the camera being adjacent to the ranging module. The camera is used to photograph a target object to obtain a first image, where the target object comprises a wound surface and/or a tool that has probed the depth of the wound, and the first image is a planar image containing the target object. The ranging module is used to photograph the target object to obtain a second image, where the second image is a depth map containing the target object, in which the depth value of a pixel point represents the distance in physical space from the point corresponding to that pixel to the camera lens.
In a fourth aspect, the present application provides an electronic device comprising: a processor and a memory; the memory stores instructions executable by the processor; the processor is configured to execute the instructions such that the electronic device implements the method of the first aspect described above.
In a fifth aspect, the present application provides a computer-readable storage medium comprising: computer software instructions; the computer software instructions, when executed in a computer, cause the computer to perform the method of the first aspect described above.
In a sixth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the steps of the method described in the first aspect above.
For the beneficial effects of the second to sixth aspects, refer to the corresponding description of the first aspect; they are not repeated here.
Drawings
Fig. 1 is a schematic view of a wound information acquisition apparatus provided herein;
fig. 2 is a schematic flow chart of a wound information acquisition method provided in the present application;
FIG. 3 is a schematic flow diagram of a method of determining wound size and/or area provided herein;
FIG. 4 is a schematic illustration of an area of a wound provided herein;
FIG. 5 is a schematic view of a calibration grid provided herein;
FIG. 6 is a schematic flow chart of a calibration coefficient determination process provided herein;
FIG. 7 is a schematic diagram illustrating the relationship between a pixel point and the camera lens provided in the present application;
FIG. 8 is a schematic flow chart of a method of determining wound depth provided herein;
FIG. 9 is a schematic view of a tool for detecting wound depth provided herein;
FIG. 10 is a schematic flow diagram of another method of obtaining wound information provided herein;
fig. 11 is a schematic flow chart of another method for acquiring wound information provided by the present application;
fig. 12 is a schematic diagram of a wound information acquisition device provided in the present application;
fig. 13 is a schematic composition diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For the convenience of clearly describing the technical solutions of the embodiments of the present application, in the embodiments of the present application, the terms "first", "second", and the like are used for distinguishing the same items or similar items with basically the same functions and actions, and those skilled in the art can understand that the terms "first", "second", and the like are not limited in number or execution order.
In order to facilitate understanding of the technical solutions of the present application, the following briefly introduces terms related to the present application.
1. Ranging module: a device capable of outputting a depth matrix within its field of view, such as a time-of-flight (TOF) camera, a 3D structured-light camera, or a radar.
2. Calibration: the process of calculating the parameters that describe the positional relationship between the camera and the ranging module.
3. Deep learning: deep learning is a newer field within machine learning research. Its motivation is to build neural networks that simulate the human brain's mechanisms for analyzing and learning from data such as images, sounds, and text. Deep learning is one type of unsupervised learning, and the concept stems from the study of artificial neural networks; a multi-layer perceptron with multiple hidden layers is a deep learning structure. Deep learning combines low-level features to form more abstract high-level attribute categories or features, thereby discovering a distributed feature representation of the data. In this scheme, a deep learning algorithm is used to recognize the wound picture.
In the medical field, a significant portion of the population suffers from persistent wound problems, and the incidence of chronic, refractory wounds in particular continues to increase year by year. In the clinic, wound information is an important and effective index for evaluating the severity of a wound and formulating a targeted treatment and recovery plan.
At present, wound information is obtained by medical staff manually measuring the wound's dimensions with tools such as a ruler, but the precision of manual measurement is limited by the proficiency of the medical staff, so the results carry a degree of subjectivity and ambiguity. Moreover, manual measurement is inefficient and greatly slows the progress of the patient's wound treatment.
In the related art, a coin is used as a reference to measure the size of the wound; this process is poorly automated and has a large error. There is also a scheme that calculates the wound size through a camera with the aid of various external lights, but it introduces a degree of light pollution, which is not conducive to the recovery of the patient's wound.
In summary, how to obtain the wound information quickly and effectively is an urgent problem to be solved.
Based on this, the embodiments of the present application provide a wound information acquisition method that automatically determines wound information from photographs of the wound. It requires no manual measurement, acquires information efficiently, and produces objective measurement results that more accurately reflect the true condition of the wound.
The wound information acquisition method provided by the present application can be applied to a wound information acquisition apparatus (also referred to as a wound information acquisition device) as shown in fig. 1. As shown in fig. 1, the apparatus may include: a fill-light module, an acquisition module (comprising a camera and a ranging module), a computing module, a core processing (SoC) module, a storage module, a battery, a display screen, and a key module.
Camera: used to capture a two-dimensional visible-light image; in the embodiments of the present application, to capture the first image of the target object.
Ranging module: used to obtain a depth map of the object; in the embodiments of the present application, to capture the second image of the target object.
Acquisition module: mainly the camera and the ranging module, used to collect and process image data.
SoC module: the core processing unit of the whole electronic device, used to coordinate the cooperation of all the modules.
Computing module: used to calculate the distance and other information of the measured object; in the embodiments of the present application, to calculate information such as the size or area of the wound.
Storage module: used to store the calibration parameters and the pictures annotated with wound information.
Battery: the power supply unit of the whole electronic device.
Key module: used to receive user instructions so that the electronic device can respond, facilitating user operation. The keys may be a touch screen or physical keys.
Display screen: used to display the preview picture during shooting and to present the shooting results.
The present application provides a wound information acquisition method. Through the cooperation of the camera and the ranging module in the above electronic device, the camera captures a two-dimensional visible-light image of the wound and the ranging module obtains a depth map of the wound. After the two-dimensional visible-light image and the depth map are processed with a deep learning algorithm, the two pictures are combined through predetermined calibration coefficients to determine information such as the size and area of the wound. Compared with traditional manual measurement, this scheme is highly accurate and greatly improves the efficiency of wound information acquisition.
Fig. 2 is a schematic flow chart of a wound information acquiring method according to an embodiment of the present application. By way of example, the wound information acquisition method provided by the present application may be applied to the electronic device shown in fig. 1.
As shown in fig. 2, the wound information obtaining method provided by the present application may specifically include the following steps:
s201, a wound information acquisition device acquires a first image and a second image which are obtained by shooting a target object.
Wherein the target object comprises a wound surface and/or a tool that has probed the depth of the wound.
In some embodiments, when wound information needs to be acquired, the wound information acquisition device may acquire the first image and the second image captured of the target object. The first image is a planar image containing the target object, and the second image is a depth map containing the target object; the depth value of a pixel point in the depth map represents the distance in physical space from the point corresponding to that pixel to the camera lens.
Illustratively, when wound information needs to be acquired, the medical staff can hold the wound information acquisition device shown in fig. 1, open the shooting function, and aim at the wound surface. The shot only needs to capture a clear, complete picture of the wound surface; the embodiments of the present application do not restrict the specific shooting angle or the distance between the device and the wound surface. After receiving a confirmed shooting instruction from the medical staff, the wound information acquisition device can photograph the wound surface through the camera shown in fig. 1 to acquire a first image (a two-dimensional visible-light, planar image) of the wound surface. In addition, the wound information acquisition device acquires a second image (a depth map) of the wound surface through the ranging module shown in fig. 1. It should be understood that the ranging module for acquiring the depth map may take any form, such as a TOF camera, a structured-light camera, a laser, or a radar, as long as it can acquire a depth map of the target object; the embodiments of the present application do not specifically limit its implementation.
In addition, the medical staff can also hold the wound information acquisition device and aim it at a tool that has probed the depth of the wound, so as to take a first image and a second image of the tool. From images taken of the wound surface alone, only wound information such as the size or area of the wound can be analyzed, yet for clinical medicine the wound depth is also an important parameter for understanding wound information. Thus, a medical professional may use a tool (such as a cotton swab) to probe the depth of the wound; the length of the stain on the tool can be taken as the depth of the wound. The wound information acquisition device can therefore acquire a planar image and a depth map of the tool that has probed the wound, for subsequent determination of the wound depth.
In other embodiments, if the picture is dark during shooting, the wound information acquisition device can turn on the fill light to illuminate the target object. Whether to turn on the fill light can be decided by the medical staff or judged automatically by the wound information acquisition device. During shooting, the wound information acquisition device can acquire a preview image of the target object, obtain the brightness value of the image from the preview, and compare it with a preset brightness threshold. If the brightness value of the preview picture is smaller than the preset brightness threshold, the current environment is dark, and the wound information acquisition device turns on the fill light. Alternatively, if the medical staff feel that the current environment is dark, they can issue an instruction by pressing a control, and the wound information acquisition device receives and responds to the instruction by turning on the fill light; the embodiments of the present application do not limit the specific implementation.
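As a minimal sketch of how this automatic judgment could work (the brightness metric and the threshold value below are assumptions; the patent only speaks of a "preset brightness threshold" without giving a value):

```python
import cv2
import numpy as np

# Assumed threshold on the mean grayscale value (0-255); not specified by the patent.
BRIGHTNESS_THRESHOLD = 80.0

def needs_fill_light(preview_bgr: np.ndarray, threshold: float = BRIGHTNESS_THRESHOLD) -> bool:
    """Return True if the preview frame is darker than the preset threshold,
    in which case the device would turn on the fill light."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) < threshold
```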
S202, the wound information acquisition device determines wound information based on the first image and the second image.
Wherein the wound information is indicative of an attribute characteristic parameter of the wound.
In some embodiments, after obtaining the first and second images of the wound's target object, the wound information acquisition device may determine the wound information using a deep learning algorithm. As previously described, a deep learning algorithm can discover the distribution characteristics of data; in the embodiments of the present application it is used to analyze and process the images taken of the wound, determine the key features of the wound, and thereby determine the wound information. The specific implementation process is described in the following embodiments and is not detailed here.
Optionally, after the images are obtained, they may be preprocessed, for example by compression, cropping, and denoising, to facilitate subsequent processing by the deep learning algorithm.
Illustratively, where the target object comprises a wound surface, the wound information comprises: the size of the wound, and/or the area of the wound. Where the target object comprises a tool that has probed the depth of the wound, the wound information comprises the depth of the wound.
After determining the wound information, as shown in fig. 2, the wound information acquisition apparatus further performs S203 as follows.
And S203, prompting wound information by the wound information acquisition device.
In some embodiments, after determining the wound information, the wound information acquisition device may prompt the wound information. The wound information may include one or more of: the size of the wound, the area of the wound, and the depth of the wound. Optionally, the wound information may also include the color of the wound and the fluid leakage condition.
Specifically, the wound information acquisition device can display the target area where the wound is located, or mark the wound information at the target area, so as to prompt the medical staff. For example, the wound information acquisition device may display a target image on the display screen, where the target image contains the target area of the wound and the wound information is marked at that area. The labeled wound information can be one or more of the size, area, depth, and color of the wound, the fluid leakage condition, and so on, determined according to the needs of the actual scene.
Note that the target image may be obtained from the first image. When the target object is the wound surface, the wound information is marked on the first image taken of the wound, and that first image is the target image. When the target object is both the wound surface and a tool that has probed the depth of the wound, the first image taken of the target object includes the planar image of the wound surface, and the target image is obtained by labeling the wound information on it.
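For illustration, the labeling step could be sketched as follows with OpenCV; the drawing style and the keys of the info dictionary are invented for this sketch and are not taken from the patent:

```python
import cv2
import numpy as np

def annotate_wound(image_bgr: np.ndarray, contour: np.ndarray, info: dict) -> np.ndarray:
    """Draw the wound contour on a copy of the first image and write the
    wound information (size, area, depth, color, leakage, ...) near the
    target area, producing the target image described above."""
    target = image_bgr.copy()
    cv2.drawContours(target, [contour], -1, (0, 255, 0), 2)  # outline the wound
    x, y, _, _ = cv2.boundingRect(contour)
    for i, (name, value) in enumerate(info.items()):
        cv2.putText(target, f"{name}: {value}", (x, max(y - 10 - 18 * i, 14)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return target
```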
The steps provided in the examples of the present application for determining wound information are described in detail below with reference to specific examples.
In one possible implementation, where the target object includes a wound surface, fig. 3 is a flowchart of the method for determining the size and/or area of a wound provided by an embodiment of the present application. As shown in fig. 3, S202 specifically includes the following S301 to S303.
S301, the wound information acquisition device identifies the outline of the wound in the first image.
In some embodiments, after acquiring the first image of the wound surface, the wound information acquisition device may recognize the first image using a deep learning algorithm to determine the contour of the wound in the first image.
Specifically, the deep learning algorithm is an algorithm model trained on a large number of wound samples in a deep learning manner; the model can identify the contour of a wound. The trained model is stored in the wound information acquisition device in advance, and after acquiring the first image of the wound surface, the device can input the first image into the trained model to obtain an output that determines the contour of the wound. In addition, the trained model can also identify, on the contour of the wound, the optimal position points for determining its size, which facilitates determining the size of the wound.
For example, fig. 4 is a schematic diagram of a wound area provided in an embodiment of the present application; fig. 4 also shows the optimal position points of the wound, which lie on the contour of the wound and comprise pixel points 1, 2, 3, and 4. The distance from pixel point 1 to pixel point 2 determines the length of the wound, and the distance from pixel point 3 to pixel point 4 determines the width of the wound.
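The patent leaves the model architecture open. As a hedged sketch, assume the trained model outputs a binary wound mask; the contour and an approximation of the four optimal position points of fig. 4 can then be recovered with OpenCV (using the extreme points of the largest contour, which is an assumption rather than the patent's stated method):

```python
import cv2
import numpy as np

def contour_and_key_points(mask: np.ndarray):
    """From a binary wound mask, recover the wound contour and four extreme
    points approximating pixel points 1-4 of fig. 4 (length and width ends)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    wound = max(contours, key=cv2.contourArea)  # keep the largest region
    pts = wound.reshape(-1, 2)
    left, right = pts[pts[:, 0].argmin()], pts[pts[:, 0].argmax()]
    top, bottom = pts[pts[:, 1].argmin()], pts[pts[:, 1].argmax()]
    return wound, (left, right, top, bottom)
```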
S302, the wound information acquisition device maps the outline of the wound to a second image to obtain depth values of a plurality of pixel points in the outline of the wound and orientation information of the plurality of pixel points in the outline of the wound.
The orientation information of a pixel point indicates the orientation, relative to the camera lens, of the point in physical space corresponding to that pixel.
In some embodiments, the wound information acquisition device may map the contour of the wound determined in S301 onto the second image based on predetermined calibration coefficients, associating the pixel points of the wound contour in the two-dimensional visible-light image with their corresponding points in the depth map. This yields the depth values of the pixel points on the wound contour (i.e., the distances from the corresponding physical points to the camera lens) and their orientation information. The calibration coefficients indicate the positional relationship, between the first image and the second image, of the pixel points corresponding to the same point on the target object. In the embodiments of the present application, the ranging module is taken as the example of the camera lens.
When the first image and the second image are taken with the wound information acquisition device for the first time, the parameters of the positional relationship between the camera and the ranging module, i.e., the calibration coefficients, are determined first. The determination of the calibration coefficients comprises the following S1 and S2.
S1, the wound information acquisition device acquires a first reference image and a second reference image taken of a calibration grid plate.
In some embodiments, when the calibration coefficients need to be determined, the wound information acquisition device may acquire a first reference image and a second reference image taken of the calibration grid plate. The calibration grid plate is used to determine the correspondence between the three-dimensional geometric position of a point on the surface of a spatial object and that point's counterpart in the image. The first reference image is a planar image of the calibration grid plate, and the second reference image is a depth map of the calibration grid plate.
FIG. 5 provides a schematic illustration of a calibration grid plate. The calibration grid plate resembles a chessboard, consisting of alternating black and white squares. Calibration grid plates are commonly used in machine vision, image measurement, photogrammetry, three-dimensional reconstruction, and similar applications to correct lens distortion and determine the correspondence between two-dimensional images and three-dimensional space.
For example, when the calibration coefficients need to be determined, a medical staff member may hold the wound information acquisition device, aim it at the calibration grid plate, and shoot. As before, the shot only needs to capture a clear, complete picture of the calibration grid plate; the embodiments of the present application do not limit the specific shooting angle or distance. The wound information acquisition device can acquire the first reference image through the camera and the second reference image through the ranging module.
S2, the wound information acquisition device performs a calibration operation using a calibration algorithm based on the first and second reference images and determines the calibration coefficients.
In some embodiments, the wound information acquisition device may determine its calibration coefficients after performing a calibration operation using a calibration algorithm based on the first and second reference images captured in step S1. The calibration algorithm may be Zhang's calibration algorithm, and the calibration coefficients may specifically include the intrinsic and extrinsic coefficients of the camera. After the calibration coefficients are determined, the wound information acquisition device can store them in its storage module for subsequent use.
The calibration coefficient determination process is fully described with reference to the flowchart shown in fig. 6: 1. a medical staff member aims the handheld device (the wound information acquisition device) at the calibration grid plate; 2. after receiving the staff member's shooting instruction, the device collects the pictures from the camera and the ranging module (i.e., the first and second reference images); 3. the device performs a calibration operation using the calibration algorithm to obtain the calibration coefficients; 4. the calibration coefficients are written into the device's storage module.
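For illustration, the single-camera half of Zhang's method can be sketched with OpenCV as below. The pattern size and square size are assumptions, and registering the camera to the ranging module would additionally require an extrinsic (e.g., stereo) calibration step, which is omitted here:

```python
import cv2
import numpy as np

PATTERN = (9, 6)   # inner-corner count of the grid plate (assumed)
SQUARE_MM = 25.0   # physical square size in millimetres (assumed)

def calibrate_from_board(gray_images):
    """Zhang's method: estimate camera intrinsics (and per-view extrinsics)
    from several grayscale views of the calibration grid plate."""
    obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM
    obj_points, img_points = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(obj)
            img_points.append(corners)
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray_images[0].shape[::-1], None, None)
    return K, dist, rvecs, tvecs  # intrinsic matrix, distortion, extrinsics
```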
S303, the wound information acquisition device determines the size and/or area of the wound according to the depth values and orientation information of the plurality of pixel points on the contour of the wound.
In some embodiments, after determining the depth values and orientation information of the plurality of pixel points on the contour of the wound, the wound information acquisition device may establish a three-dimensional coordinate system from the depth values of those pixel points and the orientations, relative to the camera lens, of their corresponding points in physical space, and use it to calculate the size and/or area of the wound.
Specifically, as described above, the depth value of a pixel point in the depth map indicates the distance in physical space from the corresponding point to the camera lens, and the position of the pixel relative to the device (or ranging module) determines the direction of that corresponding point relative to the ranging module. Fig. 7 is a schematic diagram illustrating the relationship between pixel points and the camera lens according to an embodiment of the present application. Taking pixel points 1 and 2 in the planar image as an example: in fig. 7, the depth value of pixel point 1 is the distance s1 from point 1 (the point in physical space corresponding to pixel point 1) to the ranging module, and the depth value of pixel point 2 is the distance s2 from point 2 to the ranging module. The angle between the line from point 1 to the ranging module and the line from point 2 to the ranging module characterizes the orientations of the two corresponding points in physical space relative to the ranging module. Therefore, based on the depth values of the pixels and their orientation information relative to the ranging module, the positions of the two corresponding points in physical space relative to the camera lens can be determined, and a three-dimensional coordinate system can be constructed (for example, a rectangular or spherical coordinate system with the camera lens as the origin). The coordinates of pixel points 1 and 2 in this coordinate system are determined as coordinate 1 and coordinate 2, respectively, and the distance between the two points is computed with the two-point distance formula in three dimensions. The length and width of the wound, i.e., the wound size information, can thus be determined in this manner from the optimal position points determined in S302.
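In code, and assuming the intrinsic matrix K from the calibration sketch above, the back-projection and the two-point distance could look as follows. Note the assumption, following the patent's description, that the depth value is the range along the viewing ray to the lens rather than the perpendicular z-depth:

```python
import numpy as np

def pixel_to_point(u: float, v: float, r: float, K: np.ndarray) -> np.ndarray:
    """Back-project pixel (u, v) with measured range r (distance to the lens)
    into a 3-D point in the camera frame, with the lens as the origin."""
    direction = np.linalg.inv(K) @ np.array([u, v, 1.0])
    direction /= np.linalg.norm(direction)  # unit ray through the pixel
    return r * direction

def wound_length(p1_px, p2_px, r1: float, r2: float, K: np.ndarray) -> float:
    """Physical distance between two optimal position points (e.g. pixel
    points 1 and 2 of fig. 4), i.e. the length of the wound."""
    p1 = pixel_to_point(*p1_px, r1, K)
    p2 = pixel_to_point(*p2_px, r2, K)
    return float(np.linalg.norm(p1 - p2))
```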
In addition, the wound area can be determined on the same basis: following the idea of calculus, the wound region is divided into many small rectangles, and the area of the wound is then determined by integration. For implementation details, refer to the related literature; they are not elaborated here.
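One concrete reading of this divide-and-integrate step, sketched under the assumptions that the wound mask is aligned with the depth map and that each small pixel patch lies roughly parallel to the image plane:

```python
import numpy as np

def wound_area(mask: np.ndarray, depth: np.ndarray, K: np.ndarray) -> float:
    """Approximate the wound area by summing the physical footprint of every
    pixel inside the wound mask: at range r a pixel spans roughly r/fx by
    r/fy in physical units, so each pixel contributes one small rectangle."""
    fx, fy = K[0, 0], K[1, 1]
    r = depth[mask > 0].astype(np.float64)  # range of each wound pixel
    return float(np.sum((r / fx) * (r / fy)))
```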
In another possible implementation, where the target object includes a tool that has probed wound depth, the wound information includes wound depth. Fig. 8 is a flowchart of a method for determining a wound depth according to an embodiment of the present disclosure. As shown in fig. 8, S202 specifically includes the following S801 to S803.
S801, the wound information acquisition device identifies a first pixel point and a second pixel point in the first image.
The first pixel point and the second pixel point indicate the start and end positions, along the axial direction of the tool, of the bloodstain area on the tool.
As mentioned above, the depth of the wound is also an important index for determining wound information and formulating treatment and nursing plans. Wound depth cannot be determined from images of the wound surface alone, so a tool is used to probe the wound. The embodiments of the present application take a cotton swab as an example: the depth of the wound is determined by photographing the bloodstain area on a swab that has probed the wound and measuring the length of that area.
In some embodiments, the wound information acquisition device recognizes the first image using a deep learning algorithm, determines the bloodstain area on the tool that has probed the wound, and determines the first and second pixel points indicating the axial extent of the bloodstain area.
The deep learning algorithm is trained on a large number of sample pictures that include tools such as swabs with bloodstains; the trained model is stored in the wound information acquisition device. After acquiring the first image of the tool that has probed the wound, the wound information acquisition device inputs the first image into the trained model to determine the first and second pixel points.
For example, fig. 9 is a schematic image of a tool that has probed the depth of a wound according to an embodiment of the present application. One end of the tool is stained with blood; the two axial endpoints of the bloodstain area are the first pixel point and the second pixel point, and the actual distance between them is the wound depth.
S802, the wound information acquisition device maps the first and second pixel points onto the second image to obtain the depth values and orientation information of the two pixel points.
As described above, the orientation information of a pixel point indicates the orientation, relative to the camera lens, of the point in physical space corresponding to that pixel.
In some embodiments, the wound information acquisition device may read the calibration coefficients from the storage module and map the first and second pixel points in the first image onto the second image based on those coefficients. By finding the points in the depth map corresponding to the first and second pixel points, the device determines the depth value and orientation information of each of the two pixel points, which are subsequently used to determine the actual distance between them. The calibration coefficients here are the same as those in S302; for the determination process, see the description above, which is not repeated here.
S803, the wound information acquisition device determines the distance between the first pixel point and the second pixel point as the depth of the wound, according to the depth values and orientation information of the two pixel points.
In some embodiments, after determining the depth values of the first and second pixel points and the orientations, relative to the camera lens, of their corresponding points in physical space, a three-dimensional coordinate system may be established and the distance between the two pixel points (i.e., the actual distance between the corresponding points in physical space) calculated, thereby determining the depth of the wound. The calculation principle is the same as in S303; see the description above, which is not repeated here.
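Reusing the pixel_to_point() helper from the wound-size sketch, the depth calculation reduces to one more two-point distance; the [v, u] lookup reflects row-first indexing of the depth map:

```python
import numpy as np

# Assumes pixel_to_point() from the wound-size sketch above.

def wound_depth(p_start, p_end, depth_map: np.ndarray, K: np.ndarray) -> float:
    """Physical distance between the start and end of the bloodstain area,
    i.e. the depth of the wound probed by the swab."""
    (u1, v1), (u2, v2) = p_start, p_end
    a = pixel_to_point(u1, v1, float(depth_map[v1, u1]), K)
    b = pixel_to_point(u2, v2, float(depth_map[v2, u2]), K)
    return float(np.linalg.norm(a - b))
```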
In addition, when the target object includes a wound surface, the wound information may also include the fluid leakage condition and/or the wound color, both of which are important parameters on which clinical treatment plans are based. Therefore, the following step is also performed to determine the fluid leakage condition and/or the color of the wound; that is, S202 further includes: the wound information acquisition device determines the fluid leakage condition of the wound, and/or the color of the wound, based on the first image.
In some embodiments, after capturing the first image of the wound surface, the wound information acquisition device may use a deep learning algorithm to determine the fluid leakage condition. Specifically, the first image may be input into a previously trained algorithm model to determine the fluid leakage condition of the wound, which can be characterized by the leakage area and/or the leakage amount. This model is likewise trained on a large number of wound images with fluid leakage. It should be noted that the algorithm for determining fluid leakage is different from the algorithm for determining the wound area described above.
It is understood that when the wound information includes a fluid leakage condition, prompting the wound information may specifically include marking the leakage area and/or leakage amount of the wound. For example, the wound information acquisition device may display a picture of the wound on the display screen and mark, at the wound position in the picture, the size of the leakage area and/or the numerical value of the leakage amount, so as to inform the medical staff of the fluid leakage condition.
In other embodiments, since the first image is a two-dimensional visible-light image taken of the wound surface, the wound information acquisition device may determine the color of the wound based on the pixel values of the first image. Here the color of the wound may be characterized by pixel values and/or a color name.
Specifically, the wound information acquisition device may determine the target region of the wound in the first image using a deep learning algorithm, and then determine the color of the wound based on the pixel values of that region. For example, different pixel-value intervals may be prepared in advance based on the experience of medical staff, with each interval corresponding to a color name. Based on the mean pixel value of all pixel points in the wound area (or pixel values processed by some other scheme), it is determined which interval the value reflecting the wound's color falls into, and thus the color of the wound. Alternatively, the color of the wound may be characterized directly by the displayed pixel values. The embodiments of the present application do not specifically limit this.
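A hypothetical sketch of the interval lookup; the BGR bounds and the color names below are invented for illustration, since the patent leaves the intervals to clinical experience:

```python
import numpy as np

# Assumed (lower BGR, upper BGR, name) intervals; not specified by the patent.
COLOR_INTERVALS = [
    ((0, 0, 0), (95, 95, 95), "black"),
    ((0, 0, 96), (120, 120, 255), "red"),
    ((0, 150, 150), (160, 255, 255), "yellow"),
]

def wound_color(image_bgr: np.ndarray, mask: np.ndarray):
    """Mean pixel value over the wound region, mapped to a color name."""
    mean_bgr = image_bgr[mask > 0].mean(axis=0)  # average B, G, R of wound pixels
    for low, high, name in COLOR_INTERVALS:
        if all(lo <= c <= hi for lo, c, hi in zip(low, mean_bgr, high)):
            return name, mean_bgr
    return "unclassified", mean_bgr
```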
It is to be understood that when the wound information includes the color of the wound, prompting the wound information may specifically include labeling the wound with the pixel values and/or the color name. For example, the wound information acquisition device may display a picture of the wound on the display screen with the pixel values shown numerically at the wound location, prompting the medical staff with the wound's color. For a more intuitive prompt, the pixel values can also be converted into the corresponding color name, which is displayed as text directly at the wound in the picture.
In addition, after determining the wound information, as shown in fig. 10, in addition to fig. 2, the wound information acquiring apparatus further performs S204 as follows.
S204, the wound information acquisition device uploads the target image to the medical information system for storage.
Wherein the target image is an image comprising a target area of the wound and wound information, as previously described.
In some embodiments, the wound information acquiring apparatus is further connected to a server equipped with a medical information system, and the specific connection mode may be a cellular network or a wireless network. The wound information acquisition device can upload the target image to the medical information system for storage, so that other medical personnel can conveniently check and know the wound information in time by accessing the medical information system, and then make a treatment and nursing plan in a targeted manner.
Fig. 11 is a flowchart of a method for acquiring wound information according to an embodiment of the present application; the method provided by the present application is described completely with reference to fig. 11. First, the medical staff member holds the device, aims the camera at the wound, and turns on the fill light. After the device captures a wound picture (i.e., the first image), it preprocesses the picture (compression, cropping, etc.), recognizes it with a deep learning algorithm, and determines information such as the wound contour, wound color, and fluid leakage condition. Then the camera and the ranging module are mapped to each other through the calibration parameters (i.e., S302) to obtain the relative position and distance information (i.e., the depth value) of each pixel point of the wound, and the computing module in the wound information acquisition device calculates and outputs the wound information: length, width, and area. In the other branch, the medical staff member probes the depth of the wound with a cotton swab and then aims the handheld device at the swab, whose picture the camera takes (i.e., the first image). The camera and the ranging module are then mapped to obtain the relative position and distance information of the bloodstained swab, and after calculation by the computing module the length of the bloodstained portion, i.e., the depth of the wound, is output. Finally, the wound information acquisition device marks information such as the wound's size, area, color, and fluid leakage condition on the wound picture taken by the camera, stores the annotated picture in the storage unit, and uploads it to the medical information system.
The technical scheme provided by this embodiment has at least the following beneficial effects. The wound information acquisition method provided by the embodiments of the present application determines wound information from the planar image and depth map taken of the wound surface and of the tool that has probed the depth of the wound. Compared with the traditional manual approach to measuring wound information, this scheme not only achieves high measurement accuracy but also greatly improves measurement efficiency. Moreover, medical staff do not need extensive measurement experience, since the wound information is acquired without manual measurement.
Furthermore, by combining the ranging tool with the camera, the scheme can automatically determine key information such as the wound's size, area, color, and fluid leakage condition without the measuring tool coming into direct contact with the wound, which reduces the risk of infection. The scheme can also measure without large amounts of external ambient light, so it causes no light pollution. The wound information is annotated on the wound photo and uploaded to the medical information system, making it convenient to query and share, and greatly improving the efficiency of medical staff during wound treatment and care.
It can be seen that the foregoing describes the solution provided by the embodiments of the present application primarily from a methodological perspective. In order to implement the functions, the embodiments of the present application provide corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed in hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In an exemplary embodiment, the present application also provides a wound information acquisition apparatus. The wound information acquisition apparatus may include one or more functional modules for implementing the wound information acquisition method of the above method embodiments.
For example, fig. 12 is a schematic composition diagram of a wound information acquiring apparatus according to an embodiment of the present application. As shown in fig. 12, the wound information acquisition apparatus includes: an obtaining module 1201, a determining module 1202 and a prompting module 1203.
The acquiring module 1201 is configured to acquire a first image and a second image captured of a target object, where the target object includes a wound surface and/or a tool for detecting a depth of a wound, the first image is a planar image including the target object, and the second image is a depth map including the target object; the depth value of a pixel point in the depth map represents the distance from the corresponding point of the pixel point to the shooting lens in the physical space.
A determination module 1202 for determining wound information based on the first image and the second image; wherein the wound information is indicative of an attribute characteristic parameter of the wound.
The prompting module 1203 is configured to prompt wound information.
In some embodiments, the target object comprises the wound surface and the wound information comprises the size and/or area of the wound. The determination module 1202 is specifically configured to: identify the contour of the wound in the first image; map the contour of the wound onto the second image to obtain the depth values and the azimuth information of a plurality of pixel points within the wound contour, where the azimuth information of a pixel point indicates the orientation, relative to the shooting lens, of the point in physical space corresponding to that pixel point; and determine the size and/or area of the wound from those depth values and that azimuth information.
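As one illustrative possibility (an assumption, not specified by the application), the area could be computed by fitting a plane to the back-projected 3D contour points and applying the shoelace formula in that plane; this reuses the hypothetical backproject helper from the earlier sketch to obtain the 3D points.

```python
import numpy as np

def wound_area(points_3d):
    """points_3d: (N, 3) ordered 3D contour points in metres.
    Returns the enclosed planar area in square metres."""
    centred = points_3d - points_3d.mean(axis=0)
    # Best-fit plane of the contour via SVD; the first two right-singular
    # vectors span the plane.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    uv = centred @ vt[:2].T          # 2D coordinates within the plane
    x, y = uv[:, 0], uv[:, 1]
    # Shoelace formula over the closed polygon.
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```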
In some embodiments, the target object comprises the tool that has probed the wound depth and the wound information comprises the depth of the wound. The determination module 1202 is specifically configured to: identify a first pixel point and a second pixel point in the first image, which indicate the start and end positions of the bloodstain area distributed along the axial direction of the tool; map the first and second pixel points onto the second image to obtain their depth values and azimuth information, where the azimuth information of a pixel point indicates the orientation, relative to the shooting lens, of the point in physical space corresponding to that pixel point; and determine the distance between the first and second pixel points, according to their depth values and azimuth information, as the depth of the wound.
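A corresponding sketch of the swab-based depth reading, under the same assumed pinhole model: back-project the two identified pixel points and take the Euclidean distance between them. The backproject helper is the hypothetical one defined in the earlier sketch; all names and parameters are assumptions.

```python
import numpy as np

def wound_depth(p1, p2, depth_map, fx, fy, cx, cy):
    """p1, p2: (u, v) pixels marking the start and end of the bloodstain
    along the swab axis. Returns the stained length, i.e. the wound depth."""
    a = backproject(p1[0], p1[1], depth_map[p1[1], p1[0]], fx, fy, cx, cy)
    b = backproject(p2[0], p2[1], depth_map[p2[1], p2[0]], fx, fy, cx, cy)
    return float(np.linalg.norm(a - b))
```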
In some embodiments, when the target object comprises the wound surface, the wound information further comprises a liquid leakage condition and/or the color of the wound. The determination module 1202 is further configured to determine the liquid leakage condition of the wound and/or the color of the wound based on the first image, where the liquid leakage condition is characterized by a liquid leakage area and/or a liquid leakage amount, and the color of the wound is characterized by a pixel value and/or a color name. When the wound information comprises the liquid leakage condition, prompting the wound information comprises marking the liquid leakage area and/or the liquid leakage amount of the wound; when the wound information comprises the color of the wound, prompting the wound information comprises labeling the pixel value and/or the color name of the wound.
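For illustration only, a leakage region and a wound colour descriptor might be extracted from the first image as sketched below. The HSV threshold range is a made-up placeholder: the application attributes this recognition to a deep learning algorithm, not to fixed thresholds, and the function name and mask inputs are hypothetical.

```python
import cv2
import numpy as np

def leakage_and_colour(image_bgr, wound_mask):
    """image_bgr: H x W x 3 planar image; wound_mask: boolean H x W mask
    of the wound region (e.g. derived from the identified contour)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range standing in for exudate recognition.
    leak = cv2.inRange(hsv, (20, 40, 40), (35, 255, 255)).astype(bool)
    leak &= wound_mask                              # restrict to the wound
    leak_area_px = int(leak.sum())                  # leakage area in pixels
    mean_bgr = image_bgr[wound_mask].mean(axis=0)   # wound colour as pixel value
    return leak_area_px, mean_bgr
```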
In some embodiments, the prompting module 1203 is specifically configured to display a target area where the wound is located, and/or label the wound information.
In the case where the functions of the integrated modules are implemented in the form of hardware, an embodiment of the present application provides a schematic composition diagram of an electronic device, which may be the wound information acquisition apparatus described above. As shown in fig. 13, the electronic device 1300 includes: a processor 1302, a communication interface 1303, and a bus 1304. Optionally, the electronic device may further include a memory 1301.
The processor 1302 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor 1302 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, transistor logic, a hardware component, or any combination thereof. The processor 1302 may also be a combination that performs computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor.
The communication interface 1303 is used to connect to other devices via a communication network. The communication network may be an Ethernet network, a radio access network, a wireless local area network (WLAN), or the like.
The memory 1301 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
As a possible implementation, the memory 1301 may be separate from the processor 1302, and the memory 1301 may be coupled to the processor 1302 via a bus 1304 for storing instructions or program code. The processor 1302, when calling and executing instructions or program code stored in the memory 1301, can implement the wound information acquisition method provided by the embodiments of the present application.
In another possible implementation, the memory 1301 may also be integrated with the processor 1302.
The bus 1304 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 1304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 13, but this is not intended to represent only one bus or type of bus.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the division of the above functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the wound information acquisition device is divided into different functional modules to complete all or part of the above described functions.
The embodiment of the present application also provides a computer-readable storage medium. All or part of the processes in the above method embodiments may be completed by a computer program instructing related hardware; the program may be stored in the computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The computer-readable storage medium may be an internal storage unit of the wound information acquisition apparatus of any of the above embodiments, such as its memory. The computer-readable storage medium may also be an external storage device of the wound information acquisition apparatus, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, or a flash card provided on the apparatus. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the wound information acquisition apparatus. The computer-readable storage medium is used to store the computer program and other programs and data required by the wound information acquisition apparatus, and may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application also provide a computer program product containing a computer program which, when run on a computer, causes the computer to execute any of the wound information acquisition methods provided in the above embodiments.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
The above is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of wound information acquisition, the method comprising:
acquiring a first image and a second image which are obtained by shooting a target object, wherein the target object comprises a wound surface and/or a tool for detecting the depth of a wound, the first image is a plane image containing the target object, and the second image is a depth map containing the target object; the depth value of a pixel point in the depth map represents the distance from the corresponding point of the pixel point to a shooting lens in a physical space;
determining the wound information based on the first image and the second image; wherein the wound information is indicative of an attribute characteristic parameter of the wound;
prompting the wound information.
2. The method of claim 1, wherein the target object comprises the wound surface, the wound information comprises a size and/or an area of the wound, and the determining the wound information based on the first image and the second image comprises:
identifying a contour of the wound in the first image;
mapping the outline of the wound to the second image to obtain depth values of a plurality of pixel points in the outline of the wound and azimuth information of the plurality of pixel points in the outline of the wound; the orientation information of one pixel point is used for indicating the orientation of the corresponding point of the pixel point in the physical space relative to the shooting lens;
and determining the size and/or the area of the wound according to the depth values of a plurality of pixel points in the outline of the wound and the azimuth information of the plurality of pixel points in the outline of the wound.
3. The method of claim 1, wherein the target object comprises the tool that has probed the wound depth, wherein the wound information comprises the depth of the wound, and wherein determining the wound information based on the first image and the second image comprises:
identifying a first pixel point and a second pixel point in the first image; the first pixel point and the second pixel point are used for indicating the starting position and the ending position of a bloodstain area on the tool distributed in the axial direction of the tool;
mapping the first pixel points and the second pixel points to the second image to obtain depth values of the first pixel points and the second pixel points and azimuth information of the first pixel points and the second pixel points; the orientation information of one pixel point is used for indicating the orientation of the corresponding point of the pixel point in the physical space relative to the shooting lens;
and determining the distance between the first pixel point and the second pixel point as the depth of the wound according to the depth values of the first pixel point and the second pixel point and the azimuth information of the first pixel point and the second pixel point.
4. The method of claim 1, wherein when the target object comprises the wound surface, the wound information further comprises a fluid leakage condition, and/or a color of the wound, the method further comprising:
determining a fluid leakage condition of the wound, and/or a color of the wound, based on the first image;
wherein the liquid leakage condition is characterized by a liquid leakage area, and/or a liquid leakage amount; the color of the wound is characterized by a pixel value size, and/or a color name;
when the wound information includes the fluid leakage condition, the prompting the wound information includes:
marking the liquid leakage area and/or the liquid leakage amount of the wound;
when the wound information includes a color of the wound, the prompting the wound information includes:
labeling the size of the pixel value of the wound, and/or the color name.
5. The method of any one of claims 1-3, wherein the prompting the wound information comprises:
displaying the target area where the wound is located, and/or labeling the wound information.
6. A wound information acquisition apparatus, characterized in that the apparatus comprises: the device comprises an acquisition module, a determination module and a prompt module;
the acquisition module is used for acquiring a first image and a second image which are obtained by shooting a target object, wherein the target object comprises a wound surface and/or a tool that has probed the wound depth, the first image is a plane image containing the target object, and the second image is a depth map containing the target object; the depth value of a pixel point in the depth map represents the distance from the corresponding point of the pixel point to a shooting lens in a physical space;
the determination module is to determine the wound information based on the first image and the second image; wherein the wound information is indicative of an attribute characteristic parameter of the wound;
the prompting module is used for prompting the wound information.
7. The apparatus of claim 6,
the target object comprises the wound surface, the wound information comprises a size and/or an area of the wound, and the determination module is specifically configured to identify a contour of the wound in the first image; mapping the outline of the wound to the second image to obtain depth values of a plurality of pixel points in the outline of the wound and azimuth information of the plurality of pixel points in the outline of the wound; the orientation information of one pixel point is used for indicating the orientation of the corresponding point of the pixel point in the physical space relative to the shooting lens; determining the size and/or the area of the wound according to the depth values of a plurality of pixel points in the wound outline and the azimuth information of the plurality of pixel points in the wound outline;
the target object comprises the tool that has probed the wound depth, the wound information comprises the depth of the wound, and the determining module is specifically configured to identify a first pixel point and a second pixel point in the first image; the first pixel point and the second pixel point are used for indicating the starting position and the ending position of a bloodstain area on the tool distributed in the axial direction of the tool; mapping the first pixel point and the second pixel point to the second image to obtain depth values of the first pixel point and the second pixel point and azimuth information of the first pixel point and the second pixel point; determining the distance between the first pixel point and the second pixel point as the depth of the wound according to the depth values of the first pixel point and the second pixel point and the azimuth information of the first pixel point and the second pixel point;
when the target object comprises the wound surface, the wound information further comprises a fluid leakage condition, and/or a color of the wound; the determination module is further configured to determine a fluid leakage condition of the wound, and/or a color of the wound, based on the first image; wherein the liquid leakage condition is characterized by a liquid leakage area, and/or a liquid leakage amount; the color of the wound is characterized by a pixel value size, and/or a color name; when the wound information includes the liquid leakage condition, the prompt module is specifically used for marking the liquid leakage area and/or the liquid leakage amount of the wound; when the wound information includes the color of the wound, the prompt module is specifically configured to label the size of the pixel value of the wound and/or the color name;
the prompting module is specifically used for displaying a target area where the wound is located and/or marking the wound information.
8. A wound information acquisition device, characterized in that the device comprises a camera and a ranging module, the camera being adjacent to the ranging module;
the camera is used for shooting a target object to obtain a first image; the target object comprises a wound surface and/or a tool that has probed the wound depth; the first image is a planar image containing the target object;
the ranging module is used for shooting the target object to obtain a second image; the second image is a depth map containing the target object; and the depth value of one pixel point in the depth map represents the distance from the corresponding point of the pixel point to the shooting lens in the physical space.
9. An electronic device, characterized in that the electronic device comprises: a processor and a memory;
the memory stores instructions executable by the processor;
the processor is configured to, when executing the instructions, cause the electronic device to implement the method of any of claims 1-5.
10. A computer-readable storage medium, wherein the computer-readable storage medium comprises: computer software instructions;
the computer software instructions, when executed in a computer, cause the computer to implement the method of any one of claims 1-5.
CN202211092134.1A 2022-09-08 2022-09-08 Wound information acquisition method, device, equipment and storage medium Pending CN115170629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211092134.1A CN115170629A (en) 2022-09-08 2022-09-08 Wound information acquisition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211092134.1A CN115170629A (en) 2022-09-08 2022-09-08 Wound information acquisition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115170629A true CN115170629A (en) 2022-10-11

Family

ID=83480768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211092134.1A Pending CN115170629A (en) 2022-09-08 2022-09-08 Wound information acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115170629A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110254861A1 (en) * 2008-12-25 2011-10-20 Panasonic Corporation Information displaying apparatus and information displaying method
CN109700465A (en) * 2019-01-07 2019-05-03 广东体达康医疗科技有限公司 A kind of mobile three-dimensional wound scanning device and its workflow
CN111067531A (en) * 2019-12-11 2020-04-28 中南大学湘雅医院 Wound measuring method and device and storage medium
CN111184517A (en) * 2020-01-14 2020-05-22 南方医科大学珠江医院 Wound measuring and recording system
CN112107291A (en) * 2020-07-29 2020-12-22 青岛浦利医疗技术有限公司 Intelligent wound assessment method and diagnosis system thereof
CN114066872A (en) * 2021-11-24 2022-02-18 苏州天下布医信息科技有限公司 Self-iteration wound evaluation system based on deep camera shooting and deep learning
CN114627186A (en) * 2022-03-16 2022-06-14 杭州浮点智能信息技术有限公司 Distance measuring method and distance measuring device
CN114913153A (en) * 2022-05-16 2022-08-16 北京理工大学 Deep learning technology-based wound identification and area measurement system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sun Shaojie et al., "Target Orientation Measurement Method Based on Computer Vision", Fire Control & Command Control *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115444355A (en) * 2022-10-28 2022-12-09 四川大学华西医院 Endoscope lesion size information determining method, electronic device and storage medium
CN116739989A (en) * 2023-05-15 2023-09-12 四川大学华西医院 Method, device and equipment for collecting and analyzing pressure damage image
CN117593355A (en) * 2023-11-23 2024-02-23 云途信息科技(杭州)有限公司 Pavement element area calculation method, device, computer equipment and storage medium
CN117523235A (en) * 2024-01-02 2024-02-06 大连壹致科技有限公司 A patient wound intelligent identification system for surgical nursing
CN117523235B (en) * 2024-01-02 2024-04-16 大连壹致科技有限公司 A patient wound intelligent identification system for surgical nursing

Similar Documents

Publication Publication Date Title
CN115170629A (en) Wound information acquisition method, device, equipment and storage medium
US11783480B2 (en) Semi-automated system for real-time wound image segmentation and photogrammetry on a mobile platform
US10559081B2 (en) Method and system for automated visual analysis of a dipstick using standard user equipment
RU2436507C2 (en) Methods of wound area therapy and systems for its realisation
RU2435520C2 (en) Methods of wound area therapy and systems for said methods realisation
US11333658B2 (en) Urine test strip comprising timer, and method for detecting and analyzing urine test strip
JP2020507836A (en) Tracking surgical items that predicted duplicate imaging
US20150325006A1 (en) Method and system for automated visual analysis of a dipstick using standard user equipment
US20200234444A1 (en) Systems and methods for the analysis of skin conditions
CN105043271A (en) Method and device for length measurement
Gong et al. A handheld device for leaf area measurement
WO2012036732A1 (en) Method and apparatus for performing color-based reaction testing of biological materials
US20240169518A1 (en) Method and apparatus for identifying body constitution in traditional chinese medicine, electronic device, storage medium and program
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN110443802B (en) Image detection method and device, equipment and storage medium
Mirzaalian Dastjerdi et al. Measuring surface area of skin lesions with 2D and 3D algorithms
CN112107291A (en) Intelligent wound assessment method and diagnosis system thereof
KR20230042706A (en) Neural network analysis of LFA test strips
CN112862955B (en) Method, apparatus, device, storage medium and program product for establishing three-dimensional model
CN111652168B (en) Group detection method, device, equipment and storage medium based on artificial intelligence
CN116649953A (en) Wound scanning method and device and wound scanner
CN108171756A (en) Self-adapting calibration method, apparatus and terminal
CN116824437A (en) Measurement method and system for seat body forward bend
CN112053349B (en) Injury image processing method for forensic identification
CN112163519B (en) Image mapping processing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20221011)