CN116958147B - Target area determining method, device and equipment based on depth image characteristics - Google Patents

Target area determining method, device and equipment based on depth image characteristics

Info

Publication number
CN116958147B
CN116958147B
Authority
CN
China
Prior art keywords
target
pixel
image
area
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311218425.5A
Other languages
Chinese (zh)
Other versions
CN116958147A (en)
Inventor
Feng Jian (冯健)
Shao Hongting (邵宏亭)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Medcare Digital Engineering Co ltd
Original Assignee
Qingdao Medcare Digital Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Medcare Digital Engineering Co ltd filed Critical Qingdao Medcare Digital Engineering Co ltd
Priority to CN202311218425.5A priority Critical patent/CN116958147B/en
Publication of CN116958147A publication Critical patent/CN116958147A/en
Application granted granted Critical
Publication of CN116958147B publication Critical patent/CN116958147B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing and provides a target area determining method, device and equipment based on depth image features. The method comprises: segmenting a target area in a target medical image to be detected; generating a target prediction depth map corresponding to the target medical image and mapping the target area onto it; taking the pixel value of a selected target pixel point in the depth image target area as a reference, calculating the mapping area occupied by each pixel point after the unfolding change according to the relative change coefficient between the unit pixel value of the same target object and its pixel length in the prediction depth map, and determining the target unfolded area of the depth image target area from these mapping areas; and calculating the actual physical size of the target unfolded area according to a mapping function between image pixel size and actual physical size in the depth image. The method avoids the influence of the digestive tract structure on the displayed form of the target area and enables accurate measurement of the target area.

Description

Target area determining method, device and equipment based on depth image characteristics
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for determining a target area based on depth image features.
Background
Traditional medical image analysis methods generally require a doctor to manually measure the size of a target area such as a lesion, which is subjective and inefficient. Target area measurement based on deep learning instead trains on high-resolution medical images so that a deep learning algorithm learns lesion features and measures the size and shape of a target area rapidly, accurately and automatically, improving the working efficiency and diagnostic accuracy of doctors.
At present, medical image processing technology based on target detection and segmentation has already seen some application; it can improve the working efficiency of doctors, reduce the risks of missed diagnosis and misdiagnosis, and provide new ideas and methods for the development of medical image analysis. However, lesions seen in gastrointestinal endoscopy are affected by the structure of the digestive tract and vary in form: for example, ulcer or early-cancer areas of the esophagus are generally large and curved. The size of a target area obtained by target detection or segmentation alone is therefore often inaccurate, and such approaches yield only the pixel length on the image, not the real physical length of the target area.
Disclosure of Invention
The present invention has been made in view of the above-mentioned problems, and it is an object of the present invention to provide a depth image feature-based target area determining method, apparatus and device which solve or at least partially solve the above-mentioned technical problems.
In one aspect of the present invention, there is provided a target area determining method based on depth image features, the method comprising:
dividing a target region in a target medical image to be detected to obtain a target divided region;
generating a target prediction depth map corresponding to the target medical image, and mapping a target segmentation area in the target medical image to the target prediction depth map to obtain a depth image target area;
selecting a target pixel point in a depth image target area, and calculating a mapping area occupied by each pixel point in the depth image target area after unfolding and changing according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth image and the pixel length of the target object by taking the pixel value of the target pixel point in the depth image as a reference;
calculating the position of the unfolded boundary pixel point of the depth image target area according to the mapping area corresponding to each pixel point in the depth image target area, and determining the target unfolded area of the depth image target area according to the position of each unfolded boundary pixel point;
And calculating the actual physical size of the target unfolding area according to a mapping function between the image pixel size and the actual physical size in the preset depth image.
Optionally, with the pixel value of the target pixel point in the depth image as a reference, calculating a mapping area occupied by each pixel point in the depth image target area after being unfolded and changed according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth image and the pixel length of the target object, including:
establishing a coordinate system with the target pixel point as the origin, taking the target pixel point as the far end and the other pixel points in the depth image target area as the near end, and calculating the mapping area occupied by each pixel point in the depth image target area after the unfolding change according to the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map and the pixel value p_o of the target pixel point in the depth image, with the calculation formula:
P_i = λ·(p_o − p_i) + 1
where P_i is the mapping area occupied by the ith pixel point in the depth image target area after the unfolding change, and p_i is the pixel value of the ith pixel point in the depth image.
Optionally, calculating the position of the unfolded and changed boundary pixel point of the depth image target area according to the mapping area corresponding to each pixel point in the depth image target area includes:
Establishing a coordinate system by taking the target pixel point as an origin coordinate, respectively making a vertical line to an x coordinate axis and a y coordinate axis for each pixel point on the boundary of a depth image target area, taking the sum of mapping areas occupied by each pixel point intersected with the x coordinate axis after unfolding change as a longitudinal coordinate value of the position of the current boundary pixel point after unfolding change, and taking the sum of mapping areas occupied by each pixel point intersected with the y coordinate axis after unfolding change as a transverse coordinate value of the position of the current boundary pixel point after unfolding change;
the signs of the ordinate and abscissa values of the position of the current boundary pixel point after the expansion change are determined by the quadrants of the coordinate system where the current boundary pixel point is located.
Optionally, the method further comprises:
and for a pixel point on a coordinate axis through which the perpendicular drawn from a pixel point on the boundary of the depth image target area to the x or y coordinate axis passes, one half of the mapping area of the pixel point currently on the coordinate axis participates in the calculation of the ordinate and abscissa values of the corresponding boundary pixel point.
Optionally, calculating the actual physical size of the target expansion area according to a mapping function between the image pixel size and the actual physical size in the preset depth image includes:
calculating the pixel lengths corresponding to the length and width of the minimum circumscribed rectangle of the target expansion area;
according to the pixel lengths ll and lw corresponding to the length and width of the minimum circumscribed rectangle and the pixel mean value of the pixel points in the unfolded image area, calculating the actual physical size of the target expansion area with the mapping function between image pixel size and actual physical size in the preset depth image:
L = ll / (a·p_s + b),  W = lw / (a·p_s + b)
where a and b are the fitting coefficients of the mapping function and p_s is the pixel mean value in the target expanded region.
Optionally, before segmenting the target region in the target medical image to be detected, the method further comprises:
generating a corresponding prediction depth map from a first medical image in a preset data set, wherein the preset data set comprises N pieces of first medical images containing reference objects, and N is greater than or equal to 1;
identifying the reference object in each first medical image, and identifying the reference object in a corresponding predicted depth map according to the position area of the reference object in the first medical image;
according to the pixel values and the pixel lengths of the reference objects corresponding to different depth areas in the predicted depth image, calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth image and the pixel length of the object, and calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth image according to the actual physical size of the reference object.
Optionally, the reference object is a water column sprayed by an endoscope;
the identifying the reference object in each first medical image and identifying the reference object in the corresponding predicted depth map according to the position area of the reference object in the first medical image comprises the following steps:
marking the cross section lines of the far-end width and the near-end width of the water column image in each first medical image to obtain the far-end cross section lines and the near-end cross section lines of the water column, and mapping the far-end cross section lines and the near-end cross section lines of the water column to the prediction depth images corresponding to each first medical image;
according to the pixel values and the pixel lengths of the reference object corresponding to different depth areas in the predicted depth image, calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth image and the pixel length of the object, and according to the actual physical size of the reference object, calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth image, including:
acquiring, in each predicted depth map, the far-end pixel length FEPL of the far-end cross section line of the water column, the average pixel value FEPM of all pixel points on the far-end cross section line of the water column, the near-end pixel length NEPL of the near-end cross section line of the water column, and the average pixel value NEPM of all pixel points on the near-end cross section line of the water column, to obtain N groups of reference object marking data;
calculating, from the N groups of reference object marking data, the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map:
λ = (1/N) · Σ_{j=1}^{N} (NEPL_j − FEPL_j) / (NEPM_j − FEPM_j)
where j is the number of the first medical image in the preset data set;
acquiring the actual physical length of the water column width in each first medical image;
for the 2N cross section lines in the N groups of reference object marking data, performing a straight-line fit with the average pixel value of all pixel points on each cross section line as the independent variable and the ratio of the pixel length of the cross section line to the actual physical length of the reference object as the dependent variable, to obtain the mapping function between the image pixel size and the actual physical size of a target object displayed at different pixel values in the predicted depth map:
f(p) = a·p + b
where a and b are the fitting coefficients of the mapping function and f(p) is the ratio of pixel length to actual physical length at pixel value p.
Optionally, the segmenting the target region in the target medical image to be detected includes:
dividing a target region in a target medical image to be detected by adopting a pre-constructed region division model;
the generating of the target prediction depth map corresponding to the target medical image comprises the following steps:
and predicting the target medical image by adopting a pre-constructed monocular depth estimation model to generate a target prediction depth map corresponding to the target medical image.
In another aspect of the present invention, there is provided a depth image feature-based target area determining apparatus, including a functional module for implementing the depth image feature-based target area determining method as set forth in any one of the preceding claims, specifically including:
the region segmentation module is used for segmenting a target region in the target medical image to be detected to obtain a target segmentation region;
the depth map generation module is used for generating a target prediction depth map corresponding to the target medical image, mapping a target segmentation area in the target medical image to the target prediction depth map and obtaining a depth image target area;
the region unfolding module is used for selecting a target pixel point in a depth image target region, taking the pixel value of the target pixel point in the depth image as a reference, and calculating a mapping region occupied by each pixel point in the depth image target region after unfolding and changing according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth image and the pixel length of the target object;
the boundary reconstruction module is used for calculating the position of the boundary pixel point of the depth image target area after the expansion change according to the mapping area corresponding to each pixel point in the depth image target area, and determining the target expansion area of the depth image target area according to the position of each boundary pixel point after the expansion change;
And the size calculation module is used for calculating the actual physical size of the target unfolding area according to a mapping function between the image pixel size and the actual physical size in the preset depth image.
In another aspect of the invention, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the depth image feature based target area determination method as claimed in any one of the above.
According to the target area determining method, device and equipment based on depth image features, the depth image target area is identified in the predicted depth map corresponding to the medical image. Using the depth image features, the relative change coefficient between the unit pixel value of the same target object and its pixel length in the predicted depth map is applied to calculate, relative to a selected reference pixel value, the mapping area occupied by each pixel point of the depth image target area after the unfolding change, from which the target unfolded area is obtained. This avoids the influence of the digestive tract structure on the displayed form of the target area and yields the accurate size of the target area in the image data. The actual physical size of the target area is then obtained from the mapping function between image pixel size and actual physical size in the preset depth image, so that the size of the target area can be accurately measured during endoscopy.
The foregoing is only an overview of the technical solution of the present invention. So that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and so that the above and other objects, features and advantages of the present invention may become more readily apparent, preferred embodiments are described in detail below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method for determining a target area based on depth image features according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of image comparison of a gastroscope image, a target segmentation region and a depth image target region in an embodiment of the present invention;
fig. 3-1 is a schematic diagram of a first implementation of planar expansion of pixel points in the present embodiment;
fig. 3-2 is a second schematic diagram for implementing planar expansion of pixel points in the present embodiment;
fig. 3-3 are a third schematic diagram for implementing planar expansion of pixel points in the present embodiment;
fig. 3-4 is a fourth schematic diagram of planar expansion of pixel points in the present embodiment;
Fig. 4 is an image comparison schematic diagram of a medical image including a water column and a corresponding predicted depth map according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a target area determining device based on depth image features according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Example 1
The embodiment of the invention provides a target area determining method based on depth image features. As shown in fig. 1, the method comprises the following steps:
s1, dividing a target area in a target medical image to be detected to obtain a target division area.
The target medical image to be detected is an image containing lesions, which is acquired in the gastrointestinal endoscopy.
In this embodiment, the segmentation of the target region in the medical image may be implemented with an edge extraction algorithm or an intelligent region segmentation model. Optionally, a deep learning region segmentation model can be trained on preset lesion annotation data, so that the pre-built region segmentation model segments the target region in the target medical image to be detected and yields the target segmented region. As shown in fig. 2, this embodiment takes a gastroscopic lesion image as an example: the left part of fig. 2 is a gastroscope image containing a lesion, and the middle part of fig. 2 is the target segmented region obtained by model inference.
S2, generating a target prediction depth map corresponding to the target medical image, and mapping a target segmentation area in the target medical image to the target prediction depth map to obtain a depth image target area.
In this embodiment, a monocular depth estimation model may be trained by a public data set and an open source monocular depth estimation algorithm in advance, the target medical image is predicted by using the monocular depth estimation model constructed in advance to generate a target predicted depth map corresponding to the target medical image, and then a target segmentation region extracted from the target medical image is combined with the target predicted depth map to obtain a target region in the depth image, where the right part of fig. 2 is the target region of the depth image.
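As a minimal illustration of steps S1 and S2, the sketch below masks the predicted depth map with the segmentation result. `seg_model` and `depth_model` are hypothetical callables standing in for the pre-built region segmentation model and monocular depth estimation model; the patent does not prescribe their interfaces.

```python
import numpy as np

def extract_depth_target_region(image, seg_model, depth_model):
    """Return the predicted depth map, segmentation mask, and depth image target area."""
    mask = seg_model(image) > 0.5        # S1: binary target segmentation region (output assumed in [0, 1])
    depth = depth_model(image)           # S2: target prediction depth map, same H x W as the image
    region = np.where(mask, depth, 0.0)  # map the segmentation area onto the depth map
    return depth, mask, region
```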
S3, selecting a target pixel point in a depth image target area, and calculating a mapping area occupied by each pixel point in the depth image target area after unfolding and changing according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth image and the pixel length of the target object by taking the pixel value of the target pixel point in the depth image as a reference.
S4, calculating the position of the unfolded boundary pixel point of the depth image target area according to the mapping area corresponding to each pixel point in the depth image target area, and determining the target unfolded area of the depth image target area according to the position of each unfolded boundary pixel point.
In this embodiment, after the positions of the boundary pixels after the expansion change are obtained, the target expansion region of the depth image target region is obtained by connecting the position points of the boundary pixels after the expansion change.
S5, calculating the actual physical size of the target unfolded area according to a mapping function between image pixel size and actual physical size in the preset depth image, thereby obtaining the actual physical size of the lesion.
According to the target region determining method based on depth image features, the depth image target area is identified in the predicted depth map corresponding to the medical image. Using the depth image features, the relative change coefficient between the unit pixel value of the same target object and its pixel length in the predicted depth map is applied to calculate, relative to a selected reference pixel value, the mapping area occupied by each pixel point of the depth image target area after the unfolding change, from which the target unfolded area is obtained. This avoids the influence of the digestive tract structure on the displayed form of the lesion and yields the accurate size of the target area in the image data. The actual physical size of the lesion is then obtained from the mapping function between image pixel size and actual physical size in the preset depth image, so that lesion size can be accurately measured during endoscopy.
In the embodiment of the present invention, in step S3, based on the pixel value of the target pixel point in the depth image, a mapping area occupied by each pixel point in the target area of the depth image after being unfolded and changed is calculated according to a relative change coefficient of the unit pixel value of the same target object and the pixel length of the target object in a preset prediction depth image, including:
establishing a coordinate system with the target pixel point as the origin, taking the target pixel point as the far end and the other pixel points in the depth image target area as the near end, and calculating the mapping area occupied by each pixel point in the depth image target area after the unfolding change according to the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map and the pixel value p_o of the target pixel point in the depth image, with the calculation formula:
P_i = λ·(p_o − p_i) + 1
where P_i is the mapping area occupied by the ith pixel point in the depth image target area after the unfolding change, and p_i is the pixel value of the ith pixel point in the depth image.
Specifically, a pixel point of the depth image target area can be selected at random as the target pixel point and set as the origin of the coordinate system, with its pixel value recorded as p_o. As shown in the exemplary diagram of fig. 3-1, the gray squares are the boundary of the depth image target area. In this embodiment the target pixel point serves as the reference point, and any point may be selected as the reference point to realize planar unfolding of the depth image target area. In an alternative embodiment, the pixel point with the maximum pixel value can be taken as the origin, which keeps the subsequent planar unfolding calculation simple. Each pixel point in the target area is then substituted, together with the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object, into the formula, giving the size P_i of the mapping area occupied by the restored pixel point, as shown in fig. 3-2. Suppose the current depth image target area contains M pixel points and the pixel value of each point is p_i; then:
P_i = λ·(pixel value of the selected origin pixel − pixel value of the current pixel) + 1
that is, P_i = λ·(p_o − p_i) + 1,
where p_o is the pixel value of the target pixel point at the origin, whose pixel block size (pixel length) defaults to 1, and p_i is the pixel value of the ith target area pixel point in the depth image. From the relative change coefficient, the size P_i of the pixel block after the pixel point is mapped back to its original scale is obtained.
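A minimal sketch of this step, assuming `region` and `mask` come from a masking step like the one sketched above and that the relative change coefficient `lam` has been pre-computed during calibration; the pixel with the maximum pixel value is taken as the origin, as in the alternative embodiment above.

```python
import numpy as np

def mapping_areas(region, mask, lam):
    """Compute P_i = lam * (p_o - p_i) + 1 for every pixel in the depth image target area."""
    p_o = region[mask].max()                    # reference pixel value at the chosen origin
    P = np.zeros_like(region, dtype=float)
    P[mask] = lam * (p_o - region[mask]) + 1.0  # mapping area of each target-area pixel
    return P, p_o
```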
In the embodiment of the present invention, in step S4, calculating the position of the boundary pixel point of the depth image target area after the expansion change according to the mapping area corresponding to each pixel point in the depth image target area includes: establishing a coordinate system by taking the target pixel point as an origin coordinate, respectively making a vertical line to an x coordinate axis and a y coordinate axis for each pixel point on the boundary of a depth image target area, taking the sum of mapping areas occupied by each pixel point intersected with the x coordinate axis after unfolding change as a longitudinal coordinate value of the position of the current boundary pixel point after unfolding change, and taking the sum of mapping areas occupied by each pixel point intersected with the y coordinate axis after unfolding change as a transverse coordinate value of the position of the current boundary pixel point after unfolding change; the signs of the ordinate and abscissa values of the position of the current boundary pixel point after the expansion change are determined by the quadrants of the coordinate system where the current boundary pixel point is located.
Further, when the perpendiculars drawn from a pixel point on the boundary of the depth image target area to the x and y coordinate axes pass through pixel points lying on those axes, each such axis pixel point participates in the calculation of the ordinate and abscissa values of the corresponding boundary pixel point with only one half of its mapping area. In this embodiment, for boundary points located on a coordinate axis and for pixel points that a coordinate axis passes through, the specific mapping rule is: the perpendiculars from each boundary pixel point not on a coordinate axis have one intersection point on the x axis and one on the y axis, and these two intersection pixel points each contribute one half of their mapping area to the ordinate and abscissa calculation; a boundary point lying on a coordinate axis itself involves only a single ordinate or abscissa mapping.
Specifically, a perpendicular is drawn from each pixel point on the boundary of the depth image target area to each coordinate axis. The sum of the mapping sizes P of the unfolded pixel points of the lesion intersected by the perpendicular to the x axis is the ordinate of the new position, with its sign determined by the quadrant of the boundary point; if the mapping size P of the pixel point at the intersection of the perpendicular and the x axis is not 1, one half of its mapping size participates in the ordinate calculation. Correspondingly, the abscissa of the new position is obtained from the perpendicular to the y axis. As shown in fig. 3-3, the pixel points at the two ends of each arrow are the corresponding pixel points before and after the unfolding change of the target area; connecting the boundary points of the new target area then gives the unfolded target area, as shown in fig. 3-4. After unfolding, the boundary pixel points are no longer contiguous, and adjacent points can be connected by the shortest distance between them. Adjacent points are determined as follows: the angles formed between the positive x axis and the lines connecting each boundary point to the origin (0, 0) are sorted; points represented by two adjacent angles in this ordering are adjacent points, and in particular the points represented by the maximum and minimum angles are also adjacent points.
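A minimal sketch of the unfolded position of a single boundary pixel point under these rules, assuming integer offsets (bx, by) of the boundary point relative to the origin pixel and image rows that decrease as y grows. It is an illustrative reading of the half-weight rule; boundary points lying exactly on an axis would need the single-coordinate special case described above.

```python
import numpy as np

def unfold_boundary_point(P, origin, bx, by):
    """Unfolded (x, y) of the boundary pixel at offset (bx, by) from the origin pixel."""
    oy, ox = origin                            # origin pixel as (row, col)
    sx = 1 if bx >= 0 else -1                  # signs follow the quadrant of the point
    sy = 1 if by >= 0 else -1
    # pixels cut by the perpendicular to the x axis: same column, axis pixel up to the point
    col = [P[oy - sy * t, ox + bx] for t in range(abs(by) + 1)]
    # pixels cut by the perpendicular to the y axis: same row, axis pixel out to the point
    row = [P[oy - by, ox + sx * t] for t in range(abs(bx) + 1)]
    new_y = sy * (sum(col) - 0.5 * col[0])     # the axis-crossing pixel counts by one half
    new_x = sx * (sum(row) - 0.5 * row[0])
    return new_x, new_y
```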
In the embodiment of the present invention, in step S5, calculating the actual physical size of the target expansion area according to a mapping function between the image pixel size and the actual physical size in the preset depth image specifically includes:
calculating the pixel lengths corresponding to the length and width of the minimum circumscribed rectangle of the target expansion area; then, according to the pixel lengths ll and lw corresponding to the length and width of the minimum circumscribed rectangle and the pixel mean value of the pixel points in the unfolded image area, calculating the actual physical size of the target expansion area with the mapping function between image pixel size and actual physical size in the preset depth image:
L = ll / (a·p_s + b),  W = lw / (a·p_s + b)
where a and b are the fitting coefficients of the mapping function and p_s is the pixel mean value in the target expanded region.
In this embodiment, the minimum circumscribed rectangle of the target expansion area is computed, and its length and width pixel sizes on the image are recorded as ll and lw respectively. Here the pixel mean value p_s of the pixel points in the image area equals the pixel value p_o of the origin coordinate. Substituting the pixel lengths ll and lw of the minimum circumscribed rectangle and the pixel mean value p_o into the mapping function between image pixel size and actual physical size in the depth image yields the actual physical size of the target expansion area, i.e. the actual physical size of the lesion.
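A minimal sketch of this step, assuming OpenCV is available and that the fitted mapping function returns the ratio of pixel length to actual physical length at a given pixel value, which is the form the line-fitting step below suggests; `a`, `b` and `p_s` would come from calibration and from the unfolded area respectively.

```python
import cv2
import numpy as np

def physical_size(boundary_points, p_s, a, b):
    """Physical length and width of the minimum circumscribed rectangle of the unfolded area."""
    pts = np.asarray(boundary_points, dtype=np.float32)  # (N, 2) unfolded boundary points
    (_, _), (w, h), _ = cv2.minAreaRect(pts)             # minimum circumscribed rectangle
    ll, lw = max(w, h), min(w, h)                        # long / short side pixel lengths
    ratio = a * p_s + b                                  # pixels per unit physical length
    return ll / ratio, lw / ratio
```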
In the implementation of the present invention, before calculating the actual physical size of the lesion in the target medical image to be detected, the method further includes pre-calculating a "relative change coefficient λ of a unit pixel value of the same target object and a pixel length of the target object in the predicted depth map" and a "mapping function between an image pixel size of the target object and the actual physical size when the target object is displayed as different pixel values in the predicted depth map".
Specifically, before the target region in the target medical image to be detected is segmented, the target region determining method based on the depth image features provided by the embodiment of the invention further includes the following steps:
generating a corresponding prediction depth map from a first medical image in a preset data set, wherein the preset data set comprises N pieces of first medical images containing reference objects, and N is greater than or equal to 1;
identifying the reference object in each first medical image, and identifying the reference object in a corresponding predicted depth map according to the position area of the reference object in the first medical image;
According to the pixel values and the pixel lengths of the reference objects corresponding to different depth areas in the predicted depth image, calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth image and the pixel length of the object, and calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth image according to the actual physical size of the reference object.
In this embodiment, the reference object may be the water column sprayed from the endoscope. Because spraying a water column is an auxiliary operation frequently used by doctors during digestive tract endoscopy, using the water column as the reference object makes it easy to obtain medical images containing a reference object without extra examination equipment or procedures, so collection of the preset data set can be completed during normal examinations. Specifically, a certain number of endoscopic water-column flushing images taken at various angles and anatomical sites and with various device models need to be collected in advance; a first medical image is shown in the left part of fig. 4.
Further, identifying the reference object in each first medical image and marking it in the corresponding predicted depth map according to its position area specifically includes: marking the cross section lines of the distal and proximal widths of the water column image in each first medical image to obtain the distal and proximal cross section lines of the water column, and mapping them to the predicted depth map corresponding to each first medical image. Specifically, in each medical image containing a water column, the distal and proximal ends of the water column are marked with cross section lines, the two cross section lines of each image forming one group, as shown in the left part of fig. 4. Each medical image containing a water column is then predicted with the preset monocular depth estimation model and the marks are transferred to the resulting predicted depth map, as shown in the right part of fig. 4. For a water column, the width of the water column image is the diameter of the contact surface where the water column hits the mucosa of the digestive tract.
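A minimal sketch of measuring one annotated cross section line on a predicted depth map, assuming the line is given by its endpoint coordinates from the manual annotation; the dense sampling along the line is an illustrative choice.

```python
import numpy as np

def measure_section(depth, x1, y1, x2, y2, samples=200):
    """Pixel length and average pixel value along one cross section line."""
    length = float(np.hypot(x2 - x1, y2 - y1))             # pixel length of the line
    xs = np.linspace(x1, x2, samples).round().astype(int)  # sample points along the line
    ys = np.linspace(y1, y2, samples).round().astype(int)
    mean = float(depth[ys, xs].mean())                     # average pixel value on the line
    return length, mean
```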
The relative change coefficient λ is defined as the ratio of the difference between the near-end pixel length NEPL (near end pixel length) and the far-end pixel length FEPL (far end pixel length) of the same target object in the depth image to the difference between the near-end pixel mean NEPM (near end pixel mean) and the far-end pixel mean FEPM (far end pixel mean) of that object, i.e. λ = (NEPL − FEPL) / (NEPM − FEPM): the ratio of the change in pixel length of the same object from near to far to the change in its pixel value within the same depth image.
Further, according to the pixel values and the pixel lengths of the reference object corresponding to different depth areas in the predicted depth map, calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth map and the pixel length of the object, and according to the actual physical size of the reference object, calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth map, specifically including:
acquiring, in each predicted depth map, the far-end pixel length FEPL of the far-end cross section line of the water column, the average pixel value FEPM of all pixel points on the far-end cross section line of the water column, the near-end pixel length NEPL of the near-end cross section line of the water column, and the average pixel value NEPM of all pixel points on the near-end cross section line of the water column, to obtain N groups of reference object marking data;
calculating, from the N groups of reference object marking data, the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map:
λ = (1/N) · Σ_{j=1}^{N} (NEPL_j − FEPL_j) / (NEPM_j − FEPM_j)
where j is the number of the first medical image in the preset data set;
acquiring the actual physical length of the water column width in each first medical image;
for the N groups of reference object marking data, 2N cross section lines are marked in total. The pixel mean value (the average pixel value of all pixel points on the cross section line) and the pixel length (the pixel length of the cross section line on the image) of each cross section line are counted, and a straight line of the form y = ax + b is fitted with the average pixel value of all pixel points on a cross section line as the independent variable and the ratio of the pixel length of the cross section line to the actual physical length of the reference object as the dependent variable, giving the mapping function between the image pixel size and the actual physical size of a target object displayed at different pixel values in the predicted depth map:
f(p) = a·p + b
where a and b are the fitting coefficients of the mapping function and f(p) is the ratio of pixel length to actual physical length at pixel value p.
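A minimal sketch of this calibration, assuming the N groups of measurements and the known physical width of the water column in each image are already collected as arrays; NumPy's polynomial fit stands in for whatever fitting routine an implementation would use.

```python
import numpy as np

def calibrate(FEPL, FEPM, NEPL, NEPM, width):
    """Estimate lambda and the fitting coefficients a, b of ratio = a*p + b."""
    FEPL, FEPM = np.asarray(FEPL, float), np.asarray(FEPM, float)
    NEPL, NEPM = np.asarray(NEPL, float), np.asarray(NEPM, float)
    width = np.asarray(width, float)                      # physical water-column width per image
    lam = np.mean((NEPL - FEPL) / (NEPM - FEPM))          # relative change coefficient, averaged over N
    p = np.concatenate([FEPM, NEPM])                      # pixel means of the 2N cross section lines
    ratio = np.concatenate([FEPL / width, NEPL / width])  # pixel length / physical length
    a, b = np.polyfit(p, ratio, 1)                        # straight-line fit y = a*x + b
    return lam, a, b
```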
The target area determining method based on depth image features needs no extra equipment: collection of the reference object data set can be completed during normal endoscopy, and the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map, together with the mapping function between the image pixel size and the actual physical size of a target object displayed at different pixel values in the predicted depth map, is obtained in advance. The depth image target area is identified in the predicted depth map corresponding to the medical image; using the depth image features, the relative change coefficient is applied to calculate, relative to the selected reference pixel value, the mapping area occupied by each pixel point of the depth image target area after the unfolding change, from which the target unfolded area is obtained. This avoids the influence of the digestive tract structure on the displayed form of the lesion and yields the accurate size of the target area in the image data; the actual physical size of the target area is then obtained from the mapping function between image pixel size and actual physical size in the predicted depth map, so that the size of the target area can be accurately measured during the procedure.
For simplicity of explanation, the method is shown and described as a series of acts, but those of ordinary skill in the art will understand and appreciate that the method is not limited by the order of the acts: in accordance with the method, some acts may take place in another order or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily all required by the embodiments of the invention.
Example two
Another embodiment of the present invention further provides a depth image feature-based target area determining apparatus, which includes a functional module for implementing the depth image feature-based target area determining method as set forth in any one of the above. Fig. 5 schematically illustrates a structural schematic diagram of a target region determining apparatus based on depth image features according to an embodiment of the present invention, and referring to fig. 5, the target region determining apparatus based on depth image features according to an embodiment of the present invention specifically includes a region segmentation module 501, a depth map generation module 502, a region expansion module 503, a boundary reconstruction module 504, and a size calculation module 505, where:
The region segmentation module 501 is configured to segment a target region in a target medical image to be detected to obtain a target segmented region;
the depth map generating module 502 is configured to generate a target prediction depth map corresponding to the target medical image, and map a target segmentation area in the target medical image to the target prediction depth map to obtain a depth image target area;
the region expansion module 503 is configured to select a target pixel point in a target region of the depth image, calculate a mapping region occupied by each pixel point in the target region of the depth image after expansion change according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth map and a target object pixel length based on a pixel value of the target pixel point in the depth image;
the boundary reconstruction module 504 is configured to calculate, according to the mapping area corresponding to each pixel point in the depth image target area, a position of the boundary pixel point of the depth image target area after the expansion change, and determine a target expansion area of the depth image target area according to the position of each boundary pixel point after the expansion change;
the size calculating module 505 is configured to calculate an actual physical size of the target expansion area according to a mapping function between an image pixel size and the actual physical size in a preset depth image.
In this embodiment of the present invention, the depth map generating module 502 is further configured to generate a corresponding predicted depth map for a first medical image in a preset data set, where the preset data set includes N first medical images including a reference object, and N is greater than or equal to 1;
a region segmentation module 501 for identifying the reference object in each first medical image;
the depth map generation module 502 is further configured to identify a reference object in the corresponding predicted depth map according to a location area of the reference object in the first medical image;
the device further comprises a calculation module which is not shown in the drawing, and the calculation module is used for calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth map and the pixel length of the object according to the pixel values and the pixel lengths corresponding to different depth areas of the reference object in the predicted depth map, and calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth map according to the actual physical size of the reference object.
For the device embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points refer to the part of the description of the method embodiment, and have corresponding technical effects.
Example III
The embodiment of the invention provides a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps in the embodiment of the target area determining method based on the depth image features, such as steps S1-S5 shown in fig. 1 when executing the computer program. Alternatively, the processor may implement the functions of the modules in the embodiment of the target area determining apparatus based on depth image features, for example, the area segmentation module 501, the depth map generation module 502, the area expansion module 503, the boundary reconstruction module 504, and the size calculation module 505 shown in fig. 5 when executing the computer program.
For the computer device embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points refer to the part of the description of the method embodiment, and have corresponding technical effects.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and form further embodiments. For example, any of the claimed embodiments can be used in any combination.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A depth image feature-based target area determination method, the method comprising:
dividing a target region in a target medical image to be detected to obtain a target divided region;
generating a target prediction depth map corresponding to the target medical image, and mapping a target segmentation area in the target medical image to the target prediction depth map to obtain a depth image target area;
selecting a target pixel point in a depth image target area, and calculating a mapping area occupied by each pixel point in the depth image target area after unfolding and changing according to a relative change coefficient of a unit pixel value of the same target object in a preset prediction depth image and the pixel length of the target object by taking the pixel value of the target pixel point in the depth image as a reference;
Calculating the position of the unfolded boundary pixel point of the depth image target area according to the mapping area corresponding to each pixel point in the depth image target area, and determining the target unfolded area of the depth image target area according to the position of each unfolded boundary pixel point;
calculating the actual physical size of the target expansion area according to a mapping function between the image pixel size and the actual physical size in a preset depth image;
before segmenting the target region in the target medical image to be detected, the method further comprises:
generating a corresponding prediction depth map from a first medical image in a preset data set, wherein the preset data set comprises N pieces of first medical images containing reference objects, and N is greater than or equal to 1;
identifying the reference object in each first medical image, and identifying the reference object in a corresponding predicted depth map according to the position area of the reference object in the first medical image;
calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth map and the pixel length of the object according to the pixel values and the pixel lengths of the reference objects corresponding to different depth areas in the predicted depth map, and calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth map according to the actual physical size of the reference objects;
the relative change coefficient λ is the ratio of the difference between the near-end pixel length NEPL and the far-end pixel length FEPL of the same target object in the depth image to the difference between the near-end pixel mean NEPM and the far-end pixel mean FEPM of the target object, and the relative change coefficient λ is calculated as follows:
λ = (1/N) · Σ_{j=1}^{N} (NEPL_j − FEPL_j) / (NEPM_j − FEPM_j)
where j is the number of the first medical image in the preset data set,
FEPL_j is the far-end pixel length of the far-end marker of the reference object in the predicted depth map,
FEPM_j is the average pixel value of all pixel points on the far-end marker of the reference object in the predicted depth map,
NEPL_j is the near-end pixel length of the near-end marker of the reference object in the predicted depth map,
NEPM_j is the average pixel value of all pixel points on the near-end marker of the reference object in the predicted depth map.
2. The method according to claim 1, wherein calculating the mapping area occupied by each pixel point in the depth image target area after the expansion change according to the relative change coefficient of the unit pixel value of the same target object and the pixel length of the target object in the preset prediction depth image by taking the pixel value of the target pixel point in the depth image as a reference comprises:
establishing a coordinate system with the target pixel point as the origin, taking the target pixel point as the far end and the other pixel points in the depth image target area as the near end, and calculating the mapping area occupied by each pixel point in the depth image target area after the unfolding change according to the relative change coefficient λ between the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map and the pixel value p_o of the target pixel point in the depth image, with the calculation formula:
P_i = λ·(p_o − p_i) + 1
where P_i is the mapping area occupied by the ith pixel point in the depth image target area after the unfolding change, and p_i is the pixel value of the ith pixel point in the depth image.
3. The method according to claim 1, wherein calculating the position of the boundary pixel point of the depth image target area after the expansion change according to the mapping area corresponding to each pixel point in the depth image target area comprises:
establishing a coordinate system with the target pixel point as the origin coordinate, dropping a perpendicular from each pixel point on the boundary of the depth image target area to the x coordinate axis and to the y coordinate axis, taking the sum of the mapping areas occupied after the unfolding change by the pixel points crossed by the perpendicular to the x coordinate axis as the ordinate value of the position of the current boundary pixel point after the unfolding change, and taking the sum of the mapping areas occupied after the unfolding change by the pixel points crossed by the perpendicular to the y coordinate axis as the abscissa value of the position of the current boundary pixel point after the unfolding change;
wherein the signs of the ordinate value and the abscissa value of the position of the current boundary pixel point after the unfolding change are determined by the quadrant of the coordinate system in which the current boundary pixel point lies.
4. The method according to claim 3, characterized in that the method further comprises:
for a pixel point lying on a coordinate axis and crossed by the perpendicular dropped from a pixel point on the boundary of the depth image target area to the x coordinate axis or the y coordinate axis, letting only one half of the mapping area of that pixel point participate in the calculation of the ordinate value or the abscissa value of the corresponding boundary pixel point.
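The following sketch is one reading of claims 3 and 4 together, assuming image-grid coordinates with the target pixel at the origin, a mapping-area array defined over the whole grid (zero outside the target area), and the claim-4 half-area rule applied to the pixel lying on the axis; every name is illustrative:

```python
import numpy as np

def unfolded_boundary(boundary, areas, origin):
    """For each boundary pixel, sum the mapping areas of the pixels crossed
    by the perpendicular dropped to the x axis (giving |y'|) and to the
    y axis (giving |x'|); the pixel on the axis itself contributes half its
    area; signs follow the quadrant of the source pixel."""
    r0, c0 = origin                                    # target pixel = origin
    out = []
    for r, c in boundary:
        rows = range(min(r, r0), max(r, r0) + 1)       # vertical run to x axis
        y_mag = sum(areas[rr, c] for rr in rows) - 0.5 * areas[r0, c]
        cols = range(min(c, c0), max(c, c0) + 1)       # horizontal run to y axis
        x_mag = sum(areas[r, cc] for cc in cols) - 0.5 * areas[r, c0]
        out.append((np.sign(c - c0) * x_mag,           # signed abscissa
                    np.sign(r0 - r) * y_mag))          # signed ordinate (rows grow down)
    return out
```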
5. The method of claim 1, wherein calculating the actual physical size of the target expansion area according to the mapping function between the image pixel size and the actual physical size in the preset depth image comprises:
calculating the length of a pixel corresponding to the length and the width of the minimum circumscribed rectangle of the target expansion area;
calculating the actual physical size of the target expansion area from the pixel lengths corresponding to the length and the width of the minimum circumscribed rectangle and the pixel mean of the pixel points in the image area after the unfolding change, using the mapping function between the image pixel size and the actual physical size in the preset depth image, with the formulas:
L = ll / (a·p_s + b),  W = lw / (a·p_s + b)
wherein L and W are the actual physical length and width of the target expansion area, a and b are the fitting coefficients of the mapping function, p_s is the pixel mean in the target expansion area, ll is the pixel length corresponding to the length of the minimum circumscribed rectangle, and lw is the pixel length corresponding to the width of the minimum circumscribed rectangle.
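A sketch of the reconstructed claim-5 formulas, under the assumption (consistent with the fit in claim 6) that a·p_s + b gives pixel length per unit physical length; symbol names mirror the claim:

```python
def physical_size(ll, lw, p_s, a, b):
    """Actual physical length and width of the target expansion area:
    divide the pixel lengths of the minimum circumscribed rectangle by the
    fitted pixels-per-physical-unit ratio evaluated at the region's pixel
    mean p_s."""
    scale = a * p_s + b        # pixel length per unit physical length
    return ll / scale, lw / scale
```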
6. The method of claim 1, wherein the reference object is a water column ejected by the endoscope;
identifying the reference object in each first medical image and identifying the reference object in the corresponding predicted depth map according to the position area of the reference object in the first medical image comprises:
marking the cross section lines of the far-end width and the near-end width of the water column image in each first medical image to obtain the far-end cross section lines and the near-end cross section lines of the water column, and mapping the far-end cross section lines and the near-end cross section lines of the water column to the prediction depth images corresponding to each first medical image;
according to the pixel values and the pixel lengths of the reference object corresponding to different depth areas in the predicted depth image, calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth image and the pixel length of the object, and according to the actual physical size of the reference object, calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth image, including:
acquiring, in each predicted depth map, the far-end pixel length FEPL of the far-end cross section line of the water column, the average pixel value FEPM of all pixel points on the far-end cross section line of the water column, the near-end pixel length NEPL of the near-end cross section line of the water column, and the average pixel value NEPM of all pixel points on the near-end cross section line of the water column, to obtain N groups of reference object marking data;
calculating the relative change coefficient λ of the unit pixel value of the same target object and the pixel length of the target object in the predicted depth map according to the N groups of reference object marking data;
acquiring the actual physical length of the water column width in each first medical image;
for the cross section lines in the N groups of reference object marking data, performing linear fitting with the average pixel value of all pixel points on each cross section line as the independent variable and the ratio of the pixel length of the cross section line to the actual physical length of the reference object as the dependent variable, to obtain the mapping function between the image pixel size and the actual physical size of the target object when the target object is displayed at different pixel values in the predicted depth map:
f(p) = a·p + b
where p is the average pixel value, f(p) is the ratio of the image pixel length to the actual physical length at pixel value p, and a and b are the fitting coefficients of the mapping function.
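A minimal sketch of the claim-6 fitting step using ordinary least squares via numpy.polyfit; the input arrays (one entry per marked cross section line) are illustrative names, not from the patent:

```python
import numpy as np

def fit_mapping_function(pixel_means, pixel_lengths, physical_lengths):
    """Fit f(p) = a*p + b with the cross-section pixel mean as independent
    variable and pixel length / actual physical length as dependent
    variable; returns the fitting coefficients (a, b)."""
    x = np.asarray(pixel_means, dtype=float)
    y = np.asarray(pixel_lengths, dtype=float) / np.asarray(physical_lengths, dtype=float)
    a, b = np.polyfit(x, y, 1)             # degree-1 least-squares fit
    return float(a), float(b)
```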
7. The method of claim 1, wherein segmenting the target region in the target medical image to be detected comprises:
dividing a target region in a target medical image to be detected by adopting a pre-constructed region division model;
generating the target prediction depth map corresponding to the target medical image comprises:
predicting the target medical image with a pre-constructed monocular depth estimation model to generate the target prediction depth map corresponding to the target medical image.
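The patent does not name a particular depth model; purely as a hedged example, an off-the-shelf monocular depth estimator such as MiDaS (loaded via torch.hub) could stand in for the pre-constructed model:

```python
import torch

# Illustrative only: MiDaS stands in for the patent's unnamed
# pre-constructed monocular depth estimation model.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

def predict_depth(rgb_image):
    """rgb_image: HxWx3 uint8 array; returns a relative depth map
    at the model's working resolution."""
    with torch.no_grad():
        pred = midas(transform(rgb_image))
    return pred.squeeze().cpu().numpy()
```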
8. A depth image feature-based target area determination apparatus, the apparatus comprising:
the region segmentation module is used for segmenting a target region in the target medical image to be detected to obtain a target segmentation region;
the depth map generation module is used for generating a target prediction depth map corresponding to the target medical image, mapping a target segmentation area in the target medical image to the target prediction depth map and obtaining a depth image target area;
the region unfolding module is used for selecting a target pixel point in the depth image target area and, with the pixel value of the target pixel point in the depth image as a reference, calculating the mapping area occupied by each pixel point in the depth image target area after the unfolding change according to the relative change coefficient of the unit pixel value of the same target object and the pixel length of the target object in the preset predicted depth image;
the boundary reconstruction module is used for calculating the positions of the boundary pixel points of the depth image target area after the unfolding change according to the mapping area corresponding to each pixel point in the depth image target area, and determining the target expansion area of the depth image target area according to the positions of the boundary pixel points after the unfolding change;
the size calculation module is used for calculating the actual physical size of the target expansion area according to the mapping function between the image pixel size and the actual physical size in the preset depth image;
the depth map generation module is further used for generating a corresponding predicted depth map for each first medical image in a preset data set, wherein the preset data set comprises N first medical images each containing a reference object, and N is greater than or equal to 1;
the region segmentation module is further used for identifying the reference object in each first medical image;
the depth map generation module is further used for identifying the reference object in the corresponding predicted depth map according to the position area of the reference object in the first medical image;
the apparatus further comprises: the calculation module is used for calculating the relative change coefficient of the unit pixel value of the same object in the predicted depth map and the pixel length of the object according to the pixel values and the pixel lengths corresponding to different depth areas of the reference object in the predicted depth map, and calculating the mapping function between the image pixel size and the actual physical size of the object when the object is displayed as different pixel values in the predicted depth map according to the actual physical size of the reference object;
the relative change coefficient λ is the ratio of the difference between the near-end pixel length NEPL and the far-end pixel length FEPL of the same target object in the depth image to the difference between the near-end pixel mean NEPM and the far-end pixel mean FEPM of the target object, and is calculated as follows:
λ = (1/N) · Σ_{j=1..N} (NEPL_j − FEPL_j) / (NEPM_j − FEPM_j)
wherein j is the number of the first medical image in the preset data set, and N is the number of first medical images in the preset data set,
FEPL_j is the far-end pixel length of the far-end mark of the reference object in the j-th predicted depth map,
FEPM_j is the average pixel value of all pixel points on the far-end mark of the reference object in the j-th predicted depth map,
NEPL_j is the near-end pixel length of the near-end mark of the reference object in the j-th predicted depth map,
NEPM_j is the average pixel value of all pixel points on the near-end mark of the reference object in the j-th predicted depth map.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1-7.
CN202311218425.5A 2023-09-21 2023-09-21 Target area determining method, device and equipment based on depth image characteristics Active CN116958147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311218425.5A CN116958147B (en) 2023-09-21 2023-09-21 Target area determining method, device and equipment based on depth image characteristics


Publications (2)

Publication Number Publication Date
CN116958147A (en) 2023-10-27
CN116958147B (en) 2023-12-22

Family

ID=88447735


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117455924B (en) * 2023-12-26 2024-05-24 杭州首域万物互联科技有限公司 Cigarette atomization measurement data analysis method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102438529A (en) * 2008-12-22 2012-05-02 美的派特恩公司 Method and system of automated detection of lesions in medical images
CN110327046A (en) * 2019-04-28 2019-10-15 安翰科技(武汉)股份有限公司 Object measuring method in a kind of alimentary canal based on camera system
CN111091562A (en) * 2019-12-23 2020-05-01 山东大学齐鲁医院 Method and system for measuring size of digestive tract lesion
CN111145238A (en) * 2019-12-12 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN111310574A (en) * 2020-01-17 2020-06-19 清华大学 Vehicle-mounted visual real-time multi-target multi-task joint sensing method and device
WO2023125008A1 (en) * 2021-12-30 2023-07-06 小荷医疗器械(海南)有限公司 Artificial intelligence-based endoscope image processing method and apparatus, medium and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5830546B2 (en) * 2011-02-25 2015-12-09 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Determination of model parameters based on model transformation of objects
CA3232181A1 (en) * 2019-09-23 2021-04-01 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a workpiece spatial positioning method based on binocular vision; 王翰; 王西峰; 康运江; 陈炜; 张天宇; Development & Innovation of Machinery & Electrical Products (06); full text *


Similar Documents

Publication Publication Date Title
Mori et al. Tracking of a bronchoscope using epipolar geometry analysis and intensity-based image registration of real and virtual endoscopic images
EP3979892A1 (en) Systems and methods for processing colon images and videos
CN111091562B (en) Method and system for measuring size of digestive tract lesion
CN116958147B (en) Target area determining method, device and equipment based on depth image characteristics
CN104363815B (en) Image processing apparatus and image processing method
WO2019037676A1 (en) Image processing method and device
US20050107691A1 (en) Methods for digital bowel subtraction and polyp detection
CN103945755B (en) Image processing apparatus
CN111091559A (en) Depth learning-based auxiliary diagnosis system for small intestine sub-scope lymphoma
CN113017702B (en) Method and system for identifying extension length of small probe of ultrasonic endoscope and storage medium
CN110265142B (en) Auxiliary diagnosis system for restoration image of lesion area
CN111145200B (en) Blood vessel center line tracking method combining convolutional neural network and cyclic neural network
WO2009102984A2 (en) System and method for virtually augmented endoscopy
Phan et al. Optical flow-based structure-from-motion for the reconstruction of epithelial surfaces
CN108090954A (en) Abdominal cavity environmental map based on characteristics of image rebuilds the method with laparoscope positioning
CN111311626A (en) Skull fracture automatic detection method based on CT image and electronic medium
Yao et al. Motion-based camera localization system in colonoscopy videos
CN116324897A (en) Method and system for reconstructing a three-dimensional surface of a tubular organ
AU2021201735B2 (en) Representing an interior of a volume
CN115994999A (en) Goblet cell semantic segmentation method and system based on boundary gradient attention network
CN112734707B (en) Auxiliary detection method, system and device for 3D endoscope and storage medium
WO2022124315A1 (en) Endoscopic diagnosis assistance method and endoscopic diagnosis assistance system
CN115797348B (en) Endoscopic target structure evaluation system, method, device and storage medium
US20230334658A1 (en) Quantification of barrett's oesophagus
CN117974668A (en) Novel gastric mucosa visibility scoring quantification method, device and equipment based on AI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant