CN116091432A - Quality control method and device for medical endoscopy and computer equipment - Google Patents

Quality control method and device for medical endoscopy and computer equipment

Info

Publication number
CN116091432A
CN116091432A (application CN202211721887.4A)
Authority
CN
China
Prior art keywords
predicted
skeleton image
skeleton
image
endoscopy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211721887.4A
Other languages
Chinese (zh)
Inventor
周奇明
杜立辉
姚卫忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huanuokang Technology Co ltd
Original Assignee
Zhejiang Huanuokang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huanuokang Technology Co ltd
Priority to CN202211721887.4A
Publication of CN116091432A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a quality control method, a quality control device and computer equipment for medical endoscopy. The method comprises the following steps: acquiring video frame images of an endoscopy and extracting a predicted skeleton image of at least one target area from the video frame images; matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of complete predicted skeleton images according to the overlapping rate; obtaining the relative displacement of the predicted skeleton image from continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions; and judging whether the endoscopy procedure is complete according to the number of complete predicted skeleton images and the number of specific motions. With this method, dual statistics of the number of complete predicted skeleton images and the number of specific motions enable quality monitoring of the completeness of an endoscopy that examines moving sites.

Description

Quality control method and device for medical endoscopy and computer equipment
Technical Field
The present disclosure relates to the field of video image processing technologies, and in particular, to a quality control method and apparatus for medical endoscopy, and a computer device.
Background
A medical endoscope is an examination instrument composed of an image sensor, an illumination light source, an optical lens and other physical components. It can enter the human body through orifices such as the nose and mouth to capture images of internal tissues and organs, allowing a doctor to record pathological changes by imaging the various tissues the endoscope passes through. Medical endoscopes therefore play a very important role in pathological diagnosis. However, when a doctor performs an examination with a medical endoscope inserted into the human body, the complexity of the internal structure and the large number of sites to be examined make it easy to miss certain areas or sites during imaging. The resulting incomplete imaging leads to an incomplete diagnosis, so a patient's lesion may go undetected and the patient's treatment may be affected.
Most existing endoscope quality control schemes classify the current frame with an image classifier to determine which site the frame belongs to, and then check whether the set of recognition results covers all of the preset sites to be recognized. Because they recognize whole sites from single frames, they cannot handle scenarios in which it must be judged whether a given site of the patient moves normally.
Disclosure of Invention
In view of the above, it is desirable to provide a quality control method, apparatus and computer device for medical endoscopy that can monitor the examination results of an endoscope examining moving sites.
In a first aspect, the present application provides a quality control method for medical endoscopy, the quality control method for medical endoscopy comprising:
acquiring a video frame image of endoscopy, and extracting a predicted skeleton image of at least one target area from the video frame image;
matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate;
obtaining the relative displacement of a predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions;
and judging whether the endoscopy process is complete or not according to the number of the complete prediction skeleton images and the number of the specific motions.
In one embodiment, the matching the predicted skeleton image with a pre-stored template skeleton image includes:
Carrying out scale level normalization processing on the predicted skeleton image;
and matching the processed predicted skeleton image with a pre-stored template skeleton image.
In one embodiment, the matching the processed predicted skeleton image with a pre-stored template skeleton image includes:
calculating the matching similarity between the processed predicted skeleton image and a pre-stored template skeleton image;
and if the matching similarity is larger than a matching threshold, judging that the matching is successful.
In one embodiment, the calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of complete predicted skeleton images according to the overlapping rate includes:
calculating a weighted Euclidean distance between a predicted point of the predicted skeleton image and a template key point of the template skeleton image;
if the weighted Euclidean distance is smaller than a prediction threshold value, judging that the predicted point is a correct predicted point;
calculating the overlapping rate of the predicted skeleton image and the template skeleton image according to the number of the correct predicted points and the total number of the predicted points;
if the overlapping rate is larger than a complete threshold value, judging that the predicted skeleton image is complete;
And counting the number of the complete prediction skeleton images.
In one embodiment, the obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions includes:
acquiring a current predicted skeleton image and a historical predicted skeleton image from the continuous video frame images;
obtaining the relative displacement of the predicted skeleton image according to the current predicted skeleton image and the historical predicted skeleton image;
comparing the relative displacement with a preset relative displacement template to obtain displacement similarity;
if the displacement similarity is larger than a displacement threshold, judging that the specific motion of the target area is captured;
counting the number of said specific movements.
In one embodiment, the determining whether the endoscopy procedure is complete based on the number of complete predicted skeleton images and the number of specific motions includes:
if the number of the complete predicted skeleton images is equal to the number of the preset skeleton images and the number of the specific motions is equal to the preset number of motions, the endoscopy is complete;
if the number of the complete predicted skeleton images is not equal to the number of the preset skeleton images and the number of the specific motions is not equal to the number of the preset motions, the endoscopy is incomplete.
In one embodiment, after said determining whether the endoscopy procedure is complete based on the number of complete predicted skeleton images and the number of specific motions, the quality control method further comprises:
and generating an inspection prompt according to the judgment result of the endoscopy.
In a second aspect, the present application also provides a quality control device for medical endoscopy, the device comprising:
the extraction module is used for acquiring video frame images of endoscopy and extracting a predicted skeleton image of at least one target area from the video frame images;
the matching calculation module is used for matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate;
the motion recognition module is used for obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template and determining the number of specific motions;
and the integrity judging module is used for judging whether the endoscopy process is complete or not according to the number of the complete prediction skeleton images and the number of the specific motions.
In one embodiment, the apparatus further comprises:
and the generation reminding module is used for generating an inspection reminding according to the judgment result of the endoscopy.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a video frame image of endoscopy, and extracting a predicted skeleton image of at least one target area from the video frame image;
matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate;
obtaining the relative displacement of a predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions;
and judging whether the endoscopy process is complete or not according to the number of the complete prediction skeleton images and the number of the specific motions.
According to the quality control method, device and computer equipment for medical endoscopy described above, a predicted skeleton image of at least one target area is extracted from the video frame images; the examination completeness of the target area is determined based on the degree of overlap between the predicted skeleton image and a pre-stored template skeleton image; specific motions are captured to determine the motion condition of the target area; and quality monitoring of the completeness of an endoscopy that examines moving sites is achieved through dual statistics of the number of complete predicted skeleton images and the number of specific motions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a diagram of an environment in which a quality control method for medical endoscopy is used in one embodiment;
FIG. 2 is a flow diagram of a quality control method for medical endoscopy in an embodiment;
FIG. 3 is a flow chart of a quality control method for medical endoscopy in a preferred embodiment;
FIG. 4 is a block diagram of a quality control device for medical endoscopy in an embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden on the person of ordinary skill in the art based on the embodiments provided herein, are intended to be within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art can apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as going beyond the scope of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar terms herein do not denote a limitation of quantity, but rather denote the singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein refers to two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or similar computing device. For example, running on a terminal, fig. 1 is a block diagram of the hardware configuration of the terminal of the quality control method for medical endoscopy of the present embodiment. As shown in fig. 1, the terminal may include one or more (only one is shown in fig. 1) processors 102 and a memory 104 for storing data, wherein the processors 102 may include, but are not limited to, a microprocessor MCU, a programmable logic device FPGA, or the like. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs of application software and modules, such as those corresponding to the quality control method for medical endoscopy in the present embodiment, and the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, i.e., to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a quality control method for medical endoscopy is provided, fig. 2 is a flowchart of the quality control method for medical endoscopy of this embodiment, and as shown in fig. 2, the flowchart includes the steps of:
step S210, obtaining an endoscopic video frame image, and extracting a predicted skeleton image of at least one target area from the video frame image.
Here, endoscopy refers to the examination of a human organ using a medical endoscope. A medical endoscope includes an image sensor, an illumination light source, an optical lens and other physical components, and can enter the human body through orifices such as the nose and mouth to capture images of internal tissues and organs. When a doctor performs an examination with the endoscope, a video of the human organ is captured, and endoscopic video frame images are obtained from the video frame-by-frame or with frame skipping. Each video frame image contains at least one target area; for example, in an endoscopic video image of the throat, each tissue area of the throat and the connecting areas between tissue areas can serve as target areas. The predicted skeleton image of the target area may be extracted, for example, by identifying the video frame image with a trained skeleton recognition model to obtain the predicted skeleton image, or by identifying key points in the video frame image and determining the predicted skeleton image from the position distribution of those key points. A sketch of this acquisition and extraction step is given below.
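As an illustration of the frame acquisition and skeleton extraction just described, the following Python sketch (using OpenCV and NumPy) samples frames from an endoscopy video frame-by-frame or with frame skipping and passes each frame to a keypoint model. The function names, the frame_step value and the keypoint_model stub are illustrative assumptions, not part of the patented implementation.

import cv2
import numpy as np

def sample_video_frames(video_path: str, frame_step: int = 5):
    """Read an endoscopy video and yield frames frame-by-frame or with frame skipping."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            yield index, frame
        index += 1
    cap.release()

def predict_skeleton(frame: np.ndarray, keypoint_model):
    """Run a trained keypoint/skeleton model on one frame.

    `keypoint_model` is a stand-in for whatever detector is used; it is assumed
    to return an (N, 3) array of (x, y, visibility) predicted points per target area.
    """
    return keypoint_model(frame)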
Step S220, matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate.
The similarity between the predicted points of the predicted skeleton image and the template key points of a template skeleton image pre-stored in a skeleton structure library is calculated; if the similarity is greater than a preset matching threshold, the predicted skeleton image is successfully matched with the template skeleton image. The weighted Euclidean distance between the predicted points of the predicted skeleton image and the template key points of the template skeleton image is then calculated, each predicted point is judged to be a correct predicted point or not according to its weighted Euclidean distance, and the integrity of the predicted skeleton image is judged according to the number of correct predicted points.
When at least two tissue areas exist in a single video frame image, extracting a predicted skeleton image for each tissue area allows the integrity of every tissue area to be checked through its corresponding predicted skeleton image. When a certain tissue area is incomplete in a single video frame image, the predicted skeleton images of that tissue area across multiple video frame images can complement one another to yield a complete predicted skeleton image of the tissue area, so the overlapping rate calculated from the predicted skeleton image makes the subsequent integrity judgment more accurate; one plausible way to perform this complementation is sketched below.
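The multi-frame complementation idea can be sketched as follows: for the same tissue area, each key point is taken from whichever frame observed it best. This is only one plausible merging rule, assumed for illustration; the patent does not prescribe a specific one.

import numpy as np

def merge_predictions(frames_points, frames_vis):
    """Complement an incomplete skeleton by taking, for each key point, the
    prediction from whichever frame observed it with the highest visibility.

    frames_points : list of (N, 2) arrays, one per frame, for the same tissue area
    frames_vis    : list of (N,) visibility/confidence arrays
    """
    pts = np.stack(frames_points)                      # (F, N, 2)
    vis = np.stack(frames_vis)                         # (F, N)
    best = vis.argmax(axis=0)                          # best frame index per key point
    merged = pts[best, np.arange(pts.shape[1])]        # (N, 2) merged key points
    merged_vis = vis.max(axis=0)
    return merged, merged_vis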
Step S230, obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions.
When the endoscopic examination involves a site that has a motion function, the relative displacement of the predicted skeleton image is obtained from continuous video frame images and compared with a preset relative displacement template to evaluate the motion of the tissue area of that site. The method can therefore be applied to scenes containing movable sites during endoscopy and can judge the completeness of the examination results for such movable sites.
Step S240, judging whether the endoscopy process is complete or not according to the number of complete predicted skeleton images and the number of specific motions.
Compared with the prior art, in the present method a predicted skeleton image of at least one target area is extracted from the video frame images; the examination completeness of the target area is determined based on the degree of overlap between the predicted skeleton image and a pre-stored template skeleton image; specific motions are captured to determine the motion condition of the target area; and quality monitoring of the completeness of an endoscopy that examines moving sites is achieved through dual statistics of the number of complete predicted skeleton images and the number of specific motions.
In one embodiment, based on the step S220, the matching between the predicted skeleton image and the pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of complete predicted skeleton images according to the overlapping rate may specifically include the following steps:
step S221, the predicted skeleton image is subjected to scale level normalization processing.
The scale-level normalization aims to eliminate the interference that inconsistent scales of the predicted skeleton image and the template skeleton image would cause during matching, thereby improving matching accuracy; a linear-function (min-max) normalization may be adopted, as sketched below.
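A minimal sketch of such a linear (min-max) normalization of predicted points, assuming each skeleton is stored as an (N, 3) array of (x, y, visibility) rows; the array layout and function name are illustrative assumptions.

import numpy as np

def normalize_keypoints(points: np.ndarray) -> np.ndarray:
    """Linearly rescale predicted point coordinates to a unit range so that the
    predicted and template skeletons can be compared independently of scale."""
    xy = points[:, :2].astype(np.float64)
    mins = xy.min(axis=0)
    span = xy.max(axis=0) - mins
    span[span == 0] = 1.0                  # guard against degenerate (flat) skeletons
    out = points.astype(np.float64).copy()
    out[:, :2] = (xy - mins) / span
    return out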
Step S222, matching the processed predicted skeleton image with a pre-stored template skeleton image.
In one embodiment, the step S222 specifically includes the following steps:
step S2221, calculating the matching similarity between the processed predicted skeleton image and the pre-stored template skeleton image.
Specifically, the predicted points of the predicted skeleton image after scale normalization and the template key points of the pre-stored template skeleton image are extracted, and the matching similarity OKS between the two sets of key points is calculated. The OKS of the p-th predicted skeleton image, OKS_p, is calculated as

OKS_p = \frac{\sum_i \exp\!\left(-\frac{d_{pi}^2}{2 S_p^2 \sigma_i^2}\right) \delta(v_{pi} > 0)}{\sum_i \delta(v_{pi} > 0)}

where:
the number of predicted points equals the number of template key points and they correspond one to one, with i denoting the i-th predicted point and the i-th template key point;
d_{pi} denotes the Euclidean distance between the i-th predicted point of the p-th predicted skeleton image and the i-th template key point of the template skeleton image;
S_p denotes the scale factor of the p-th predicted skeleton image, taken as the square root of the area of the circumscribed rectangle of the predicted skeleton image;
\sigma_i denotes the normalization factor between the i-th predicted point and the i-th template key point; it is the standard deviation between the manually labeled template key points of the template skeleton image and the true values over all sample sets, and a larger \sigma_i means that key points of this type are harder to label;
v_{pi} denotes the visibility of the i-th key point of the p-th skeleton image: for template key points, v_{pi} = 0 means the key point is unlabeled (absent or indeterminate in the image), v_{pi} = 1 means the key point is unoccluded and labeled, and v_{pi} = 2 means the key point is occluded but labeled; for predicted key points, v_{pi} = 0 means no prediction was made and v_{pi} = 1 means a prediction was made;
\delta(\cdot) equals 1 if the condition in parentheses holds and 0 otherwise.
step S2222, if the matching similarity is greater than the matching threshold, determines that the matching is successful.
If the matching similarity OKS is greater than the matching threshold, the structure of the predicted skeleton image in the current video frame image is judged to be consistent with that of the template skeleton image, and the matching is successful.
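The OKS_p formula above can be evaluated in a few lines of NumPy. The sketch below is illustrative only: the array layout, the helper names and the 0.5 matching threshold are assumptions, not values fixed by the patent.

import numpy as np

def oks(pred_xy, tmpl_xy, tmpl_vis, sigmas, scale):
    """OKS-style matching similarity between a predicted skeleton and a template skeleton.

    pred_xy, tmpl_xy : (N, 2) arrays of corresponding points
    tmpl_vis         : (N,) visibility flags of the template key points (0 = unlabeled)
    sigmas           : (N,) per-keypoint normalization factors
    scale            : square root of the area of the skeleton's circumscribed rectangle
    """
    d2 = np.sum((pred_xy - tmpl_xy) ** 2, axis=1)            # squared Euclidean distances
    e = np.exp(-d2 / (2.0 * (scale ** 2) * (sigmas ** 2)))   # per-point similarity
    labeled = tmpl_vis > 0                                   # delta(v_i > 0)
    return float(e[labeled].sum() / max(labeled.sum(), 1))

def skeleton_matches(pred_xy, tmpl_xy, tmpl_vis, sigmas, scale, match_threshold=0.5):
    """Matching is successful when the similarity exceeds the matching threshold."""
    return oks(pred_xy, tmpl_xy, tmpl_vis, sigmas, scale) > match_threshold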
Step S223, calculating a weighted Euclidean distance between the predicted point of the predicted skeleton image and the template key point of the template skeleton image.
The weighted Euclidean distance is calculated as

D_i = \lambda_i \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}

where i indexes the predicted points of the predicted skeleton image, j indexes the corresponding template key points of the template skeleton image, (x, y) are the position coordinates, and \lambda_i is the weight of the i-th predicted point; because different areas differ in prediction difficulty, the distances are weighted so that the prediction performance is more stable.
In step S224, if the weighted euclidean distance is smaller than the prediction threshold, the predicted point is determined to be the correct predicted point.
If the weighted Euclidean distance corresponding to a predicted point is smaller than the preset prediction threshold, the predicted point is a correct predicted point; otherwise, the predicted point is treated as an outlier.
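A short sketch of the weighted per-point Euclidean distance and the correct-point test, under the same assumed (N, 2) coordinate layout; the weights λ_i and the prediction threshold are supplied by the caller and are not fixed by the patent.

import numpy as np

def weighted_point_distances(pred_xy, tmpl_xy, weights):
    """Per-point weighted Euclidean distance between corresponding predicted
    points and template key points."""
    d = np.linalg.norm(pred_xy - tmpl_xy, axis=1)
    return weights * d

def correct_point_mask(pred_xy, tmpl_xy, weights, pred_threshold):
    """A point counts as a correct predicted point when its weighted distance
    falls below the prediction threshold."""
    return weighted_point_distances(pred_xy, tmpl_xy, weights) < pred_threshold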
Step S225, calculating the overlapping rate of the predicted skeleton image and the template skeleton image according to the number of correct predicted points and the total number of predicted points, using the formula

C_i = \frac{T_i}{N_i}

where C_i denotes the overlapping rate of the i-th predicted skeleton image, T_i denotes the number of correct predicted points in the i-th predicted skeleton image, and N_i denotes the total number of predicted points in the i-th predicted skeleton image.
And step S226, if the overlapping rate is larger than the complete threshold value, judging that the predicted skeleton image is complete.
When the overlap ratio is greater than a preset integrity threshold, the predicted skeleton image is considered to be intact.
Step S227, counting the number of complete prediction skeleton images.
For the situation in which multiple tissue areas cannot be completely and accurately identified from a single frame image, in steps S221 to S227 the predicted skeleton image is extracted from the video frame image, matching similarity is calculated between the predicted points of the predicted skeleton image and the template key points of the template skeleton image, and the Euclidean distances are weighted so that correct predicted points are obtained more accurately; the overlapping rate of the predicted skeleton image is then calculated from the number of correct predicted points, which improves the quality control precision of the endoscopy.
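The overlapping rate C_i = T_i / N_i and the count of complete predicted skeleton images can then be computed as below; the 0.9 completeness threshold is an illustrative placeholder, not a value specified by the patent.

import numpy as np

def overlap_rate(correct_mask: np.ndarray) -> float:
    """C_i = T_i / N_i: correct predicted points over all predicted points."""
    return float(correct_mask.sum()) / max(correct_mask.size, 1)

def count_complete_skeletons(correct_masks, complete_threshold=0.9):
    """Count the predicted skeleton images whose overlapping rate exceeds the
    completeness threshold."""
    return sum(1 for m in correct_masks if overlap_rate(m) > complete_threshold)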
In one embodiment, based on step S230, the relative displacement of the predicted skeleton image is obtained from the continuous video frame images, and the relative displacement is compared with a preset relative displacement template to determine the number of specific motions, which specifically includes the following steps:
Step S231, acquiring a current predicted skeleton image and a historical predicted skeleton image from the continuous video frame images.
The historical predicted skeleton image is extracted from video frame images at least two frames before the current predicted skeleton image.
And step S232, obtaining the relative displacement of the predicted skeleton image according to the current predicted skeleton image and the historical predicted skeleton image.
Specifically, the relative displacements between corresponding predicted points in the current predicted skeleton image and the historical predicted skeleton image are calculated, and the relative displacement of the predicted skeleton image is obtained from this combination of point-wise relative displacements.
Step S233, comparing the relative displacement with a preset relative displacement template to obtain a displacement similarity.
Specifically, the relative displacement combination of the predicted points is compared with the relative displacement combination of each predicted point in the relative displacement template, and the magnitude of the displacement similarity is calculated.
In step S234, if the displacement similarity is greater than the displacement threshold, it is determined that the specific motion of the target region is captured.
If the displacement similarity is greater than a preset displacement threshold, the target area corresponding to the tissue area of the examined site is judged to have completed the specific motion.
Step S235, counting the number of specific motions.
Specifically, the number of specific motions is the number of target areas that complete the specific motions.
When the endoscopic examination involves a site that has a motion function, the combination of relative displacements of the predicted points of the predicted skeleton image is obtained from continuous video frame images; computing at the level of predicted points yields a more accurate relative displacement and therefore a more accurate picture of the motion of the tissue area of that site.
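A sketch of this displacement step: point-wise relative displacements between the current and historical predicted skeleton images are compared with a preset relative-displacement template. Cosine similarity is used here only as one plausible displacement-similarity measure, and the 0.8 displacement threshold is an assumption; the patent does not fix either choice.

import numpy as np

def relative_displacement(current_xy: np.ndarray, history_xy: np.ndarray) -> np.ndarray:
    """Per-point displacement vectors between the current predicted skeleton and a
    historical predicted skeleton from an earlier frame."""
    return current_xy - history_xy

def displacement_similarity(disp: np.ndarray, template_disp: np.ndarray) -> float:
    """Similarity between the flattened displacement combination and the preset
    relative-displacement template (cosine similarity, chosen for illustration)."""
    a, b = disp.ravel(), template_disp.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def count_specific_motions(disp_by_area, template_disp, disp_threshold=0.8):
    """Number of target areas whose displacement similarity exceeds the threshold,
    i.e. the number of captured specific motions."""
    return sum(
        1 for d in disp_by_area
        if displacement_similarity(d, template_disp) > disp_threshold
    )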
In one embodiment, based on the step S240, determining whether the endoscopy procedure is complete according to the number of complete predicted skeleton images and the number of specific motions may specifically include the following steps:
in step S241, if the number of the complete predicted skeleton images is equal to the number of the preset skeleton images and the number of the specific motions is equal to the preset number of motions, the endoscopy is complete.
In step S242, if the number of the complete predicted skeleton images is not equal to the number of the preset skeleton images and the number of the specific motions is not equal to the number of the preset motions, the endoscopy is incomplete.
The completeness of the endoscopy is judged from the dual results of the number of complete predicted skeleton images and the number of specific motions, thereby achieving the effect of quality control.
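The dual decision rule of steps S241 and S242 reduces to a simple check; a minimal sketch, with the expected counts taken as preset configuration values:

def endoscopy_complete(num_complete_skeletons: int,
                       num_specific_motions: int,
                       expected_skeletons: int,
                       expected_motions: int) -> bool:
    """The examination is judged complete only when both the count of complete
    predicted skeleton images and the count of specific motions reach their
    preset values."""
    return (num_complete_skeletons == expected_skeletons
            and num_specific_motions == expected_motions)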
In one embodiment, after step S240 of determining whether the endoscopy procedure is complete according to the number of complete predicted skeleton images and the number of specific motions, the quality control method further includes the following step:
step S250, generating an inspection prompt according to the judgment result of the endoscopy.
The judgment result of the endoscopy can be displayed visually, for example by showing the judgment result of the current endoscopy on an app client. Alternatively, based on the number of complete predicted skeleton images, the number of specific motions and the judgment result, voice playback can inform the doctor which areas the examination has covered and which areas remain uncovered.
The present embodiment is described and illustrated below by way of preferred embodiments.
Fig. 3 is a flow chart of a quality control method for medical endoscopy of the present preferred embodiment.
Step S310, obtaining an endoscopic video frame image, and extracting a predicted skeleton image of at least one target region from the video frame image.
Step S320, carrying out scale level normalization processing on the predicted skeleton image, calculating the matching similarity between the processed predicted skeleton image and a pre-stored template skeleton image, and judging that the matching is successful if the matching similarity is larger than a matching threshold value.
Step S330, calculating a weighted Euclidean distance between a predicted point of the predicted skeleton image and a template key point of the template skeleton image, and judging the predicted point as a correct predicted point if the weighted Euclidean distance is smaller than a predicted threshold value; and calculating the overlapping rate of the predicted skeleton image and the template skeleton image according to the number of the correct predicted points and the total number of the predicted points, judging that the predicted skeleton image is complete if the overlapping rate is greater than a complete threshold value, and counting the number of the complete predicted skeleton images.
Step S340, obtaining a current predicted skeleton image and a historical predicted skeleton image from continuous video frame images, and obtaining the relative displacement of the predicted skeleton image according to the current predicted skeleton image and the historical predicted skeleton image; comparing the relative displacement with a preset relative displacement template to obtain displacement similarity, judging that the specific motion of the target area is captured if the displacement similarity is larger than a displacement threshold, and counting the number of the specific motions.
Step S350, if the number of the complete predicted skeleton images is equal to the number of the preset skeleton images and the number of the specific motions is equal to the preset number of the motions, the endoscopy is complete, and an inspection prompt is generated according to the judgment result of the endoscopy.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, this embodiment also provides a quality control device for medical endoscopy. The device is used to implement the foregoing embodiments and preferred embodiments, and what has already been described will not be repeated. The terms "module," "unit," "sub-unit," and the like as used below may refer to a combination of software and/or hardware that performs a predetermined function. Although the device described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
In one embodiment, as shown in fig. 4, there is provided a quality control device 40 for medical endoscopy, comprising: an extraction module 41, a matching calculation module 42, a motion recognition module 43, and an integrity determination module 44, wherein:
an extraction module 41, configured to acquire an endoscopic video frame image, and extract a predicted skeleton image of at least one target region from the video frame image.
Here, endoscopy refers to the examination of a human organ using a medical endoscope. A medical endoscope includes an image sensor, an illumination light source, an optical lens and other physical components, and can enter the human body through orifices such as the nose and mouth to capture images of internal tissues and organs. When a doctor performs an examination with the endoscope, a video of the human organ is captured, and endoscopic video frame images are obtained from the video frame-by-frame or with frame skipping. Each video frame image contains at least one target area; for example, in an endoscopic video image of the throat, each tissue area of the throat and the connecting areas between tissue areas can serve as target areas. The predicted skeleton image of the target area may be extracted, for example, by identifying the video frame image with a trained skeleton recognition model to obtain the predicted skeleton image, or by identifying key points in the video frame image and determining the predicted skeleton image from the position distribution of those key points.
The matching calculation module 42 is configured to match the predicted skeleton image with a pre-stored template skeleton image, calculate an overlapping rate of the predicted skeleton image and the template skeleton image, and determine the number of complete predicted skeleton images according to the overlapping rate.
The similarity between the predicted points of the predicted skeleton image and the template key points of a template skeleton image pre-stored in a skeleton structure library is calculated; if the similarity is greater than a preset matching threshold, the predicted skeleton image is successfully matched with the template skeleton image. The weighted Euclidean distance between the predicted points of the predicted skeleton image and the template key points of the template skeleton image is then calculated, each predicted point is judged to be a correct predicted point or not according to its weighted Euclidean distance, and the integrity of the predicted skeleton image is judged according to the number of correct predicted points.
When at least two tissue areas exist in a single video frame image, extracting a predicted skeleton image for each tissue area allows the integrity of every tissue area to be checked through its corresponding predicted skeleton image. When a certain tissue area is incomplete in a single video frame image, the predicted skeleton images of that tissue area across multiple video frame images can complement one another to yield a complete predicted skeleton image of the tissue area, so the overlapping rate calculated from the predicted skeleton image makes the subsequent integrity judgment more accurate.
The motion recognition module 43 is configured to obtain a relative displacement of the predicted skeleton image from the continuous video frame images, compare the relative displacement with a preset relative displacement template, and determine the number of specific motions.
When the endoscopic examination involves a site that has a motion function, the relative displacement of the predicted skeleton image is obtained from continuous video frame images and compared with a preset relative displacement template to evaluate the motion of the tissue area of that site. The device can therefore be applied to scenes containing movable sites during endoscopy and can judge the completeness of the examination results for such movable sites.
An integrity determination module 44 for determining whether the endoscopic procedure is complete based on the number of complete predicted skeleton images and the number of specific motions. If the number of the complete predicted skeleton images is equal to the number of the preset skeleton images and the number of the specific motions is equal to the preset number of the motions, the endoscopy is complete; if the number of complete predicted skeleton images is not equal to the number of preset skeleton images and the number of specific motions is not equal to the number of preset motions, the endoscopy is incomplete.
According to the above device, a predicted skeleton image of at least one target area is extracted from the video frame images; the examination completeness of the target area is determined based on the degree of overlap between the predicted skeleton image and a pre-stored template skeleton image; specific motions are captured to determine the motion condition of the target area; and quality monitoring of the completeness of an endoscopy that examines moving sites is achieved through dual statistics of the number of complete predicted skeleton images and the number of specific motions.
In one embodiment, the quality control device 40 for medical endoscopy further comprises: a reminder module 45 is generated.
A generating reminder module 45 for generating an inspection reminder according to the judgment result of the endoscopy.
The various modules in the above-described quality control apparatus for medical endoscopy may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a quality control method for medical endoscopy. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
step S210, obtaining an endoscopic video frame image, and extracting a predicted skeleton image of at least one target area from the video frame image.
Step S220, matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate.
Step S230, obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions.
Step S240, judging whether the endoscopy process is complete or not according to the number of complete predicted skeleton images and the number of specific motions.
In one embodiment, the processor when executing the computer program further performs the steps of:
step S250, generating an inspection prompt according to the judgment result of the endoscopy.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
step S210, obtaining an endoscopic video frame image, and extracting a predicted skeleton image of at least one target area from the video frame image.
Step S220, matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate.
Step S230, obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions.
Step S240, judging whether the endoscopy process is complete or not according to the number of complete predicted skeleton images and the number of specific motions.
In one embodiment, the computer program when executed by the processor further performs the steps of:
step S250, generating an inspection prompt according to the judgment result of the endoscopy.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
step S210, obtaining an endoscopic video frame image, and extracting a predicted skeleton image of at least one target area from the video frame image.
Step S220, matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate.
Step S230, obtaining the relative displacement of the predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions.
Step S240, judging whether the endoscopy process is complete or not according to the number of complete predicted skeleton images and the number of specific motions.
In one embodiment, the computer program when executed by the processor further performs the steps of:
step S250, generating an inspection prompt according to the judgment result of the endoscopy.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A quality control method for medical endoscopy, the quality control method comprising:
acquiring a video frame image of endoscopy, and extracting a predicted skeleton image of at least one target area from the video frame image;
matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlapping rate of the predicted skeleton image and the template skeleton image, and determining the number of the complete predicted skeleton images according to the overlapping rate;
Obtaining the relative displacement of a predicted skeleton image from the continuous video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions;
and judging whether the endoscopy process is complete or not according to the number of the complete prediction skeleton images and the number of the specific motions.
2. The quality control method for medical endoscopy of claim 1, wherein the matching the predicted skeleton image with a pre-stored template skeleton image comprises:
carrying out scale level normalization processing on the predicted skeleton image;
and matching the processed predicted skeleton image with a pre-stored template skeleton image.
3. The quality control method for medical endoscopy of claim 2, wherein the matching the processed predicted skeleton image with a pre-stored template skeleton image comprises:
calculating a matching similarity between the processed predicted skeleton image and the pre-stored template skeleton image;
and if the matching similarity is greater than a matching threshold, judging that the matching is successful.
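By way of illustration, and not limitation, the scale normalization of claim 2 and the thresholded matching of claim 3 might be realized as sketched below; the centering-plus-unit-scale normalization, the cosine similarity measure, the threshold value, and the function names are assumptions, not features fixed by the claims.

import numpy as np

def normalize_scale(keypoints):
    # Claim 2, one possible normalization: center the keypoints and rescale to unit size.
    centered = keypoints - keypoints.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-9)

def match_to_template(predicted, template, match_threshold=0.85):
    # Claim 3, sketched: cosine similarity between the normalized skeletons, thresholded.
    p = normalize_scale(predicted).ravel()
    t = normalize_scale(template).ravel()
    similarity = float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-9))
    return similarity > match_threshold, similarity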
4. The quality control method for medical endoscopy of claim 1, wherein the calculating the overlap rate of the predicted skeleton image and the template skeleton image and determining the number of complete predicted skeleton images according to the overlap rate comprises:
calculating a weighted Euclidean distance between a predicted point of the predicted skeleton image and a template key point of the template skeleton image;
if the weighted Euclidean distance is smaller than a prediction threshold, judging that the predicted point is a correct predicted point;
calculating the overlap rate of the predicted skeleton image and the template skeleton image according to the number of correct predicted points and the total number of predicted points;
if the overlap rate is greater than a completeness threshold, judging that the predicted skeleton image is complete;
and counting the number of complete predicted skeleton images.
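By way of illustration, and not limitation, the correct-point test and overlap rate of claim 4 could be computed as sketched below; the per-keypoint weights, the prediction threshold, the completeness threshold, and the function names are illustrative assumptions.

import numpy as np

def overlap_rate(predicted, template, weights, prediction_threshold=8.0):
    # Claim 4, sketched: a predicted point is correct when its weighted Euclidean
    # distance to the template key point is below the prediction threshold; the
    # overlap rate is the number of correct points over the total number of points.
    distances = np.linalg.norm(predicted - template, axis=1) * weights
    correct = distances < prediction_threshold
    return correct.sum() / len(predicted)

def count_complete_skeletons(skeletons, template, weights, completeness_threshold=0.8):
    # Count the predicted skeleton images whose overlap rate exceeds the threshold.
    return sum(overlap_rate(s, template, weights) > completeness_threshold for s in skeletons)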
5. The quality control method for medical endoscopy of claim 1, wherein obtaining a relative displacement of a predicted skeleton image from consecutive video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions comprises:
acquiring a current predicted skeleton image and a historical predicted skeleton image from the consecutive video frame images;
obtaining the relative displacement of the predicted skeleton image according to the current predicted skeleton image and the historical predicted skeleton image;
comparing the relative displacement with the preset relative displacement template to obtain a displacement similarity;
if the displacement similarity is greater than a displacement threshold, judging that a specific motion of the target area is captured;
and counting the number of specific motions.
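By way of illustration, and not limitation, the displacement comparison of claim 5 might be implemented as sketched below; representing the relative displacement as per-keypoint coordinate differences, scoring it against the template with cosine similarity, and the threshold and function names are all assumptions.

import numpy as np

def displacement_similarity(current, historical, displacement_template):
    # Claim 5, sketched: per-keypoint displacement between the current and the
    # historical predicted skeleton, compared to the preset template displacement.
    displacement = (current - historical).ravel()
    reference = displacement_template.ravel()
    denominator = np.linalg.norm(displacement) * np.linalg.norm(reference) + 1e-9
    return float(displacement @ reference / denominator)

def count_specific_motions(skeleton_sequence, displacement_template, displacement_threshold=0.9):
    # Count frame pairs whose relative displacement matches the template.
    count = 0
    for historical, current in zip(skeleton_sequence, skeleton_sequence[1:]):
        if displacement_similarity(current, historical, displacement_template) > displacement_threshold:
            count += 1
    return count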
6. The quality control method for medical endoscopy of claim 1, wherein the judging whether the endoscopy process is complete according to the number of complete predicted skeleton images and the number of specific motions comprises:
if the number of complete predicted skeleton images is equal to a preset number of skeleton images and the number of specific motions is equal to a preset number of motions, the endoscopy is complete;
if the number of complete predicted skeleton images is not equal to the preset number of skeleton images and the number of specific motions is not equal to the preset number of motions, the endoscopy is incomplete.
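Read literally, claim 6 specifies only the two cases in which both counts equal their presets or both differ; a direct, purely illustrative transcription (function and parameter names are assumptions) is:

def procedure_complete(complete_skeleton_count, specific_motion_count,
                       preset_skeleton_count, preset_motion_count):
    # Claim 6 as written: complete when both counts equal their presets,
    # incomplete when both differ; the mixed case is not addressed by the claim.
    if complete_skeleton_count == preset_skeleton_count and specific_motion_count == preset_motion_count:
        return True
    if complete_skeleton_count != preset_skeleton_count and specific_motion_count != preset_motion_count:
        return False
    return None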
7. The quality control method for medical endoscopy of claim 1, wherein after the judging whether the endoscopy process is complete according to the number of complete predicted skeleton images and the number of specific motions, the quality control method further comprises:
and generating an inspection prompt according to the judgment result of the endoscopy.
8. A quality control device for medical endoscopy, the device comprising:
the extraction module is used for acquiring video frame images of an endoscopy and extracting a predicted skeleton image of at least one target area from the video frame images;
the matching calculation module is used for matching the predicted skeleton image with a pre-stored template skeleton image, calculating the overlap rate of the predicted skeleton image and the template skeleton image, and determining the number of complete predicted skeleton images according to the overlap rate;
the motion recognition module is used for obtaining the relative displacement of the predicted skeleton image from consecutive video frame images, comparing the relative displacement with a preset relative displacement template, and determining the number of specific motions;
and the integrity judging module is used for judging whether the endoscopy process is complete according to the number of complete predicted skeleton images and the number of specific motions.
9. The quality control device for medical endoscopy of claim 8, wherein the device further comprises:
and the prompt generation module is used for generating an inspection prompt according to the judgment result of the endoscopy.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the quality control method for medical endoscopy of any of claims 1 to 7.
CN202211721887.4A 2022-12-30 2022-12-30 Quality control method and device for medical endoscopy and computer equipment Pending CN116091432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211721887.4A CN116091432A (en) 2022-12-30 2022-12-30 Quality control method and device for medical endoscopy and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211721887.4A CN116091432A (en) 2022-12-30 2022-12-30 Quality control method and device for medical endoscopy and computer equipment

Publications (1)

Publication Number Publication Date
CN116091432A true CN116091432A (en) 2023-05-09

Family

ID=86213176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211721887.4A Pending CN116091432A (en) 2022-12-30 2022-12-30 Quality control method and device for medical endoscopy and computer equipment

Country Status (1)

Country Link
CN (1) CN116091432A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503913A (en) * 2023-06-25 2023-07-28 浙江华诺康科技有限公司 Medical image recognition method, device, system and storage medium
CN116523907A (en) * 2023-06-28 2023-08-01 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium
CN116523907B (en) * 2023-06-28 2023-10-31 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US10860930B2 (en) Learning method, image recognition device, and computer-readable storage medium
KR101846370B1 (en) Method and program for computing bone age by deep neural network
CN116091432A (en) Quality control method and device for medical endoscopy and computer equipment
US20240169518A1 (en) Method and apparatus for identifying body constitution in traditional chinese medicine, electronic device, storage medium and program
CN107563997B (en) Skin disease diagnosis system, construction method, classification method and diagnosis device
WO2021197015A1 (en) Image analysis method, image analysis device, and image analysis system
JP2022128414A (en) Tracheal intubation positioning method based on deep learning, device, and storage medium
CN112149602B (en) Action counting method and device, electronic equipment and storage medium
CN115861299A (en) Electronic endoscope quality control method and device based on two-dimensional reconstruction
CN115861298B (en) Image processing method and device based on endoscopic visualization
CN111429414B (en) Artificial intelligence-based focus image sample determination method and related device
US20230284968A1 (en) System and method for automatic personalized assessment of human body surface conditions
CN113706536B (en) Sliding mirror risk early warning method and device and computer readable storage medium
CN111227768A (en) Navigation control method and device of endoscope
CN116484916A (en) Chicken health state detection method and method for building detection model thereof
CN115938593A (en) Medical record information processing method, device and equipment and computer readable storage medium
CN113743543B (en) Image classification training method and device, server and storage medium
CN114842972A (en) Method, device, electronic equipment and medium for determining user state
CN110934565B (en) Method and device for measuring pupil diameter and computer readable storage medium
CN114360695A (en) Mammary gland ultrasonic scanning analysis auxiliary system, medium and equipment
Monroy et al. Automated chronic wounds medical assessment and tracking framework based on deep learning
CN112331312A (en) Method, device, equipment and medium for determining labeling quality
CN115620053B (en) Airway type determining system and electronic equipment
CN111310669B (en) Fetal head circumference real-time measurement method and device
JP7148657B2 (en) Information processing device, information processing method and information processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination