CN117576097A - Endoscope image processing method and system based on AI auxiliary image processing information

Info

Publication number: CN117576097A
Authority: CN (China)
Prior art keywords: image, light intensity, reflected light, endoscope, value
Legal status: Granted
Application number: CN202410058307.0A
Other languages: Chinese (zh)
Other versions: CN117576097B
Inventors: 唐永安, 林文晶
Current and original assignee: Hualun Medical Supplies Shenzhen Co., Ltd.
Application filed by Hualun Medical Supplies Shenzhen Co., Ltd.; application granted and published as CN117576097B
Legal status: Active

Classifications

    • G06T 7/0012 Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI] (G06V 10/20 Image preprocessing)
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06V 10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning; classification using rules for classification or partitioning the feature space
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G06T 2207/10068 Endoscopic image (indexing scheme for image analysis; image acquisition modality)


Abstract

The invention discloses an endoscope image processing method and system based on AI auxiliary image processing information, relates to the field of medical images, and solves the problem that an existing endoscope requires the affected area to be positioned manually from the image video stream acquired in real time, resulting in low observation efficiency. The method comprises the following steps. Step S1: acquiring patient endoscope image data. Step S2: acquiring detection point grading data according to the patient endoscope image data. Step S3: analyzing the detection points according to the detection point grading data to obtain abnormal detection point grading data. Step S4: analyzing the reflected light intensity according to the abnormal detection point grading data to obtain an affected area labeling image and synchronizing it to the endoscope display. By marking the affected area directly on the endoscope image, the method improves the observation efficiency of the endoscope.

Description

Endoscope image processing method and system based on AI auxiliary image processing information
Technical Field
The invention belongs to the field of medical images and relates to image processing technology, in particular to an endoscope image processing method and system based on AI auxiliary image processing information.
Background
An endoscope is a medical device mainly used for examining the internal organs and tissues of the human body. It generally consists of a bendable tubular lens assembly and a light source, and can enter the body through natural orifices such as the oral cavity, nasal cavity, esophagus, stomach and intestines, or through small incisions, so that lesion areas can be observed and lesions diagnosed. It helps doctors carry out endoscopic examination, sampling and treatment, and is one of the important tools commonly used in the modern medical field;
in the prior art, the endoscope has the following defects in use:
1. existing endoscope image processing lacks a method for automatically adjusting the focal length of the endoscope in real time, and therefore cannot present the target tissue at the optimal image duty ratio in the endoscope display;
2. the existing endoscope requires the affected area to be positioned manually from the image video stream acquired in real time, which leads to low observation efficiency.
For this reason, we propose an endoscope image processing method and system based on AI auxiliary image processing information.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide an endoscope image processing method and system based on AI auxiliary image processing information.
The invention achieves this aim through the following technical proposal. The endoscope image processing method based on AI auxiliary image processing information specifically comprises the following steps:
step S1: acquiring an image video stream through the endoscope probe, acquiring the target tissue image duty ratio according to the image video stream, adjusting the target tissue image duty ratio so that the target tissue image lies in the first image duty ratio interval, establishing an image classification model, intercepting the image video stream, and carrying out definition division to obtain the patient endoscope image data;
step S2: establishing an endoscope gray level histogram according to the patient endoscope image data, and classifying abnormal detection points according to the endoscope gray level histogram and the patient endoscope image data to obtain detection point grading data;
step S3: acquiring the detection point grading data, transmitting a spectrum to the detection points, acquiring the reflected light intensity value of each abnormal detection point and the normal reflected light intensity value, obtaining the reflected light intensity difference value of each abnormal detection point from these two values, and performing threshold judgment on the reflected light intensity difference values to acquire abnormal detection point grading data;
step S4: obtaining the number of abnormal detection points in the first abnormal interval according to the abnormal detection point grading data, obtaining the reflected light intensity dispersion coefficient, obtaining the affected area judgment coefficient from the abnormal detection point number value and the reflected light intensity dispersion coefficient, marking the affected area on the patient endoscope image according to the affected area judgment coefficient, obtaining the affected area labeling image, and synchronizing the affected area labeling image to the endoscope display.
Further, the step S1: acquiring the patient endoscope image data specifically comprises the following steps:
step S11: slowly inserting an endoscope probe into an area to be inspected in a patient, starting an imaging function of the endoscope, and obtaining a continuous image video stream;
step S12: calibrating a target tissue in an image video stream, obtaining the duty ratio of the target tissue in an endoscope image video stream, marking the duty ratio as a target tissue image duty ratio, and adjusting the target tissue image duty ratio;
step S13: establishing an image classification model, and intercepting an image video stream frame by frame to obtain image intercepting data;
step S14: carrying out definition division on the image interception data to obtain image data of the endoscope of the patient;
Step S15: calculating Laplacian variance of each frame of image as a definition score of each frame of image by using a Python programming language in combination with OpenCV;
step S16: setting a definition score threshold and comparing it with the definition score of each frame; when the definition score is greater than the threshold, the frame image is stored, and when the definition score is smaller than the threshold, the frame image is deleted;
step S17: defining the images retained by the image classification model as the patient endoscope image data.
Further, the step S12: adjusting the target tissue image duty ratio specifically comprises the following steps:
step S121: determining a target tissue according to the image video stream, acquiring an HSV value interval of the target tissue by using an AI image model, and marking the HSV value interval as a sample HSV value interval;
step S122: refining the endoscope image video stream into i image units through the image display, where i is the total number of image units; acquiring the HSV value of each image unit in real time through the HSB color model, and judging an image unit to be a target tissue image unit when its HSV value falls within the sample HSV value interval;
step S123: counting the number of image units occupied by the target tissue image area, marking it as the target image unit number, and taking the ratio of the target image unit number to the total number of image units as the target tissue image duty ratio;
Step S124: acquiring a first calibration image duty ratio and a second calibration image duty ratio, and respectively carrying out numerical comparison on the first calibration image duty ratio, the second calibration image duty ratio and the target tissue image duty ratio;
step S125: when the target tissue image duty ratio is greater than or equal to the first calibration image duty ratio and less than or equal to the second calibration image duty ratio, judging that it lies in the first image duty ratio interval;
step S126: when the target tissue image duty ratio is smaller than the first calibration image duty ratio, judging that it lies in the second image duty ratio interval; the focal length of the endoscope is then automatically adjusted to increase the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval;
step S127: when the target tissue image duty ratio is greater than the second calibration image duty ratio, judging that it lies in the third image duty ratio interval; the focal length of the endoscope is then automatically adjusted to reduce the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval.
Further, the step S2: acquiring the detection point grading data specifically comprises the following steps:
step S21: an endoscope gray level histogram is established according to the image data of the endoscope of a patient, and the method comprises the following specific steps:
step S211: acquiring endoscopic image data of a patient;
Step S212: drawing software is used for respectively acquiring RGB intensity values of the image data of the endoscope of the patient, and dividing the RGB intensity values into red intensity values, green intensity values and blue intensity values;
step S213: converting the red intensity value, the green intensity value and the blue intensity value of the patient endoscope image data into gray intensity values by calculation;
step S214: setting the red, green and blue intensity values of the endoscope image data to the gray intensity value through the drawing software, obtaining the grayed patient endoscope image data;
step S215: initializing an array of length 256, marked as pixel_counts, to record the number of pixels of each gray level in the grayed patient endoscope image data, wherein each array index corresponds to one gray level (0-255);
step S216: traversing each pixel in the grayed patient endoscope image data, reading the gray level of each pixel and incrementing the corresponding count in pixel_counts, thereby acquiring the pixel count of each gray level; then, taking the gray level as the abscissa and the number of pixels of that gray level as the ordinate, establishing a gray level histogram, marked as the endoscope gray level histogram;
Step S22: and classifying the abnormal detection points according to the endoscope gray level histogram and the patient endoscope image data.
Further, the step S22: the abnormal detection points are classified, and the specific steps are as follows:
step S221: setting a first characteristic pixel number threshold line and a second characteristic pixel number threshold line in the endoscope gray level histogram, marking the gray level corresponding to a gray square column higher than the first characteristic pixel number threshold line as a first height interval, marking the gray level corresponding to a gray square column lower than the second characteristic pixel number threshold line as a second height interval, and marking the gray level corresponding to a gray square column between the first characteristic pixel number threshold line and the second characteristic pixel number threshold line as a third height interval;
step S222: judging the gray level corresponding to the first height interval and the second height interval as abnormal gray level, and judging the gray level corresponding to the third height interval as normal gray level;
step S223: marking the pixel points corresponding to the abnormal gray level as abnormal detection points in the patient endoscope image data;
step S224: marking the pixel points corresponding to the normal gray level as normal detection points in the endoscope image data of the patient;
Step S225: and defining a judgment result of the detection point according to the gray level as detection point grading data.
Further, the step S3: analyzing the detection points according to the detection point grading data specifically comprises the following steps:
step S31: acquiring the reflected light intensity values, with the following specific steps:
step S311: transmitting a spectrum to a detection point through the spectrum transmitting end, acquiring the spectrum reflected by the abnormal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this operation to acquire the reflected light intensity value of each abnormal detection point;
step S312: transmitting a spectrum to a normal detection point through the spectrum transmitting end, acquiring the spectrum reflected by the normal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this process to acquire the reflected light intensity values of j normal detection points, calculating the average of these j values, and marking it as the normal reflected light intensity value;
step S313: randomly selecting the reflected light intensity value of one abnormal detection point, marking it as the first reflected light intensity value, and calculating with the normal reflected light intensity value to obtain the reflected light intensity difference value of that abnormal detection point;
step S314: repeating step S313 to calculate the reflected light intensity difference value of each abnormal detection point;
step S315: grading the abnormal detection points according to their reflected light intensity difference values to obtain the abnormal detection point grading data.
Further, the step S315: grading the abnormal detection points specifically comprises the following steps:
step S3151: obtaining a reflected light intensity difference value threshold, and numerically comparing the reflected light intensity difference value with the threshold;
step S3152: when the reflected light intensity difference value is greater than or equal to the threshold, judging that it belongs to the first abnormal interval;
step S3153: when the reflected light intensity difference value is smaller than the threshold, judging that it belongs to the second abnormal interval;
step S3154: defining the abnormal detection points corresponding to the first abnormal interval and the second abnormal interval as the abnormal detection point grading data;
step S3155: marking the abnormal detection points corresponding to the first abnormal interval as affected area detection points, and marking the abnormal detection points corresponding to the second abnormal interval as conventional detection points.
Further, the step S4: analyzing the reflected light intensity according to the abnormal detection point grading data to obtain the affected area image data specifically comprises the following steps:
step S41: obtaining the average reflected light intensity value of the abnormal detection points corresponding to the second abnormal interval, and marking it as the reflected light intensity normal value;
step S42: obtaining the reflected light intensity dispersibility coefficient of the first endoscope image sample;
step S43: counting the number of abnormal detection points in the first abnormal interval within the first endoscope image sample;
step S44: calculating with the reflected light intensity dispersibility coefficient and the number of abnormal detection points to obtain the affected area judgment coefficient corresponding to the first endoscope image sample;
step S45: obtaining an affected area judgment coefficient threshold, and numerically comparing the affected area judgment coefficient of the first endoscope image sample with the threshold to obtain affected area judgment data for the first endoscope image sample, with the following specific steps:
step S451: when the affected area judgment coefficient is greater than or equal to the threshold, judging that the area corresponding to the first endoscope image sample is an affected area;
step S452: when the affected area judgment coefficient is smaller than the threshold, judging that the area corresponding to the first endoscope image sample is a normal area;
step S46: repeating steps S42-S45 to perform affected area analysis on the second endoscope image sample, the third endoscope image sample, ..., the m-th endoscope image sample, obtaining the affected area judgment data corresponding to each endoscope image sample;
step S47: acquiring the patient endoscope image, performing layer coverage on the patient endoscope image data using an AI auxiliary image processing tool, marking the affected areas above the covering layer, defining the marked patient endoscope image as the affected area labeling image, and synchronizing it to the endoscope display.
Further, the step S42: obtaining the reflected light intensity dispersibility coefficient specifically comprises the following steps:
step S421: acquiring the grayed patient endoscope image, dividing it by unit area into m grayed endoscope image samples, marked respectively as the first endoscope image sample, the second endoscope image sample, the third endoscope image sample, ..., the m-th endoscope image sample;
step S422: counting the reflected light intensity difference value corresponding to each pixel point in the first endoscope image sample, marked respectively as the first reflected light intensity difference value, the second reflected light intensity difference value, the third reflected light intensity difference value, ..., the n-th reflected light intensity difference value;
step S423: calculating with the reflected light intensity normal value and the first, second, ..., n-th reflected light intensity difference values to obtain the reflected light intensity dispersibility coefficient of the first endoscope image sample; a sketch of these three steps is given below.
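As an illustrative sketch of steps S421-S423 (the tile size, the per-pixel difference map and the root-mean-square dispersion form are assumptions made for illustration, not values fixed by the invention), the grayed image can be tiled into unit-area samples and a reflected light intensity dispersibility coefficient computed for each sample:
import numpy as np

def per_sample_dispersion(diff_map, fsz, tile=64):
    # diff_map: H x W array of per-pixel reflected light intensity difference values;
    # fsz: the reflected light intensity normal value; tile: edge length of one unit-area sample
    h, w = diff_map.shape
    coeffs = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            sample = diff_map[r:r + tile, c:c + tile]
            coeffs[(r, c)] = float(np.sqrt(np.mean((sample - fsz) ** 2)))
    return coeffs  # one dispersibility coefficient per endoscope image sample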
The endoscope image processing system based on AI auxiliary image processing information comprises an image acquisition module, an abnormality detection module, a reflected light module, an affected area detection module and a server, specifically as follows:
the image acquisition module: acquires the patient endoscope image data;
the abnormality detection module: acquires the detection point grading data according to the patient endoscope image data;
the reflected light module: analyzes the detection points according to the detection point grading data to obtain the abnormal detection point grading data;
the affected area detection module: analyzes the reflected light intensity according to the abnormal detection point grading data to obtain the affected area labeling image, and synchronizes the affected area labeling image to the endoscope display.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1. according to the invention, the image classification model grades the patient endoscope image data, and only highly graded images are stored and subjected to image analysis, ensuring a high-quality source of endoscope image data;
2. the invention presents the target tissue at the optimal image duty ratio in the endoscope display by automatically adjusting the focal length of the endoscope in real time;
3. according to the invention, abnormal detection points are obtained by graying and analyzing the patient endoscope image; the abnormal detection points are further analyzed by acquiring their reflected light intensity to obtain the affected area image, and the affected area is marked on the endoscope image, so that the affected area can be marked directly in the endoscope image.
Drawings
The present invention is further described below with reference to the accompanying drawings for the convenience of understanding by those skilled in the art.
FIG. 1 is a diagram of the steps in the practice of the present invention;
FIG. 2 is an overall system block diagram of the present invention;
fig. 3 is an endoscopic gray level histogram of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1 and 2, the present invention provides a technical solution: the endoscope image processing system based on AI auxiliary image processing information comprises an image acquisition module, an abnormality detection module, a reflected light module and an affected area detection module, which are respectively connected with a server;
The image acquisition module acquires image data of a patient endoscope;
the image acquisition module comprises an endoscope probe;
slowly inserting an endoscope probe into an area to be inspected in a patient, starting an imaging function of the endoscope, and obtaining a continuous image video stream;
calibrating a target tissue in an image video stream, obtaining the duty ratio of the target tissue in an endoscope image video stream, marking the duty ratio as a target tissue image duty ratio, and adjusting the target tissue image duty ratio;
the target tissue image duty ratio is adjusted, and the method specifically comprises the following steps:
acquiring the target tissue observed through the endoscope in the patient, acquiring the HSV value interval of the corresponding target tissue through an AI image model, and marking it as the sample HSV value interval;
the endoscope image video stream is refined into i image units through the image display, where i is the total number of image units; the HSV value of each image unit is acquired in real time through the HSB color model, and an image unit is judged to be a target tissue image unit when its HSV value falls within the sample HSV value interval;
the number of image units occupied by the target tissue image area is counted and marked as the target image unit number, and the ratio of the target image unit number to the total number of image units is taken as the target tissue image duty ratio;
What needs to be explained here is: the HSV value interval comprises three specific indexes, namely a hue value interval, a saturation interval and a brightness interval; when the hue value, saturation and brightness of an image unit are all within the corresponding intervals, the image unit can be judged to be a target tissue image unit;
acquiring a first calibration image duty ratio and a second calibration image duty ratio, and respectively carrying out numerical comparison on the first calibration image duty ratio, the second calibration image duty ratio and the target tissue image duty ratio;
when the target tissue image duty ratio is greater than or equal to the first calibration image duty ratio and less than or equal to the second calibration image duty ratio, judging that it lies in the first image duty ratio interval;
when the target tissue image duty ratio is smaller than the first calibration image duty ratio, judging that it lies in the second image duty ratio interval; the focal length of the endoscope is then automatically adjusted to increase the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval;
when the target tissue image duty ratio is greater than the second calibration image duty ratio, judging that it lies in the third image duty ratio interval; the focal length of the endoscope is then automatically adjusted to reduce the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval;
What needs to be explained here is:
the first calibration image duty ratio and the second calibration image duty ratio differ according to the observed target tissue; for example, in gastroscopy the first calibration image duty ratio is 50% and the second calibration image duty ratio is 75%;
the target tissue referred to in this embodiment is the specific tissue that a doctor wants to observe, diagnose or treat when performing an endoscopic examination; a sketch of the duty ratio adjustment is given below;
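As a minimal sketch of the duty ratio adjustment above (the HSV interval bounds and the 50%/75% calibration duty ratios are assumed example values; OpenCV is used, consistent with the tools named later in this embodiment), the target tissue image duty ratio can be computed per frame and the focal length adjustment direction decided:
import cv2
import numpy as np

HSV_LOW = np.array([0, 80, 60])      # assumed lower bounds of the sample HSV value interval
HSV_HIGH = np.array([15, 255, 255])  # assumed upper bounds of the sample HSV value interval
FIRST_CAL_RATIO = 0.50               # first calibration image duty ratio (gastroscopy example)
SECOND_CAL_RATIO = 0.75              # second calibration image duty ratio (gastroscopy example)

def duty_ratio_and_action(frame_bgr):
    # each pixel acts as one image unit; units inside the sample HSV interval
    # are counted as target tissue image units
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    ratio = cv2.countNonZero(mask) / mask.size
    if ratio < FIRST_CAL_RATIO:      # second image duty ratio interval
        action = 'increase the duty ratio (adjust focal length inward)'
    elif ratio > SECOND_CAL_RATIO:   # third image duty ratio interval
        action = 'reduce the duty ratio (adjust focal length outward)'
    else:                            # first image duty ratio interval
        action = 'keep the current focal length'
    return ratio, action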
establishing an image classification model, and intercepting an image video stream frame by frame to obtain image intercepting data;
carrying out definition division on the image interception data to obtain patient endoscope image data;
calculating Laplacian variance of each frame of image as a definition score of each frame of image by using a Python programming language in combination with OpenCV;
setting a definition score threshold and comparing it with the definition score of each frame; when the definition score is greater than the threshold, the frame image is stored, and when the definition score is smaller than the threshold, the frame image is deleted;
defining the images retained by the image classification model as the patient endoscope image data;
the specific code implementation process of the image classification model is as follows:
# pip install opencv-python
import time

import cv2

# load the endoscopy video;
video_path = 'path_to_your_endoscopy_video.mp4'
cap = cv2.VideoCapture(video_path)

# ensure that the video file can be opened;
if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

focus_threshold = 100  # sets the definition score threshold;

while True:
    ret, frame = cap.read()
    if not ret:
        break  # the video ends;

    # convert the frame into a gray scale map;
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # calculate the definition score, e.g. using the Laplacian variance;
    fm = cv2.Laplacian(gray, cv2.CV_64F).var()

    # show the definition score;
    cv2.putText(frame, f'Focus Measure: {fm:.2f}', (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

    # display the image;
    cv2.imshow('Frame', frame)

    # if the definition score exceeds the definition score threshold, save the frame image;
    # a millisecond timestamp is used to ensure file name uniqueness;
    if fm > focus_threshold:
        timestamp = int(time.time() * 1000)
        cv2.imwrite(f'clear_frame_{timestamp}.png', frame)

    # exit on 'q';
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
what needs to be explained here is: openCV is a cross-platform, open-source computer vision and image processing library that provides rich functions and tools that can process image and video data quickly and efficiently;
laplacian variance refers to calculating gradients of an image using Laplacian operators, and calculating variances of the gradient values, the sharpness or edge sharpness of the image can be estimated by calculating the variances of the Laplacian gradients;
the abnormality detection module acquires detection point grading data according to the image data of the endoscope of the patient;
the abnormality detection module comprises a gray level image unit and an affected area positioning unit;
the gray level image unit establishes an endoscope gray level histogram according to the image data of the endoscope of the patient;
Acquiring endoscopic image data of a patient;
drawing software is used for respectively acquiring RGB intensity values of the image data of the endoscope of the patient, and dividing the RGB intensity values into red intensity values, green intensity values and blue intensity values;
converting the red intensity value, the green intensity value and the blue intensity value of the patient endoscope image data into gray intensity values through a gray value calculation formula;
the gray value calculation formula is specifically configured in the standard weighted-sum (luma) form:
Y = 0.299 × R + 0.587 × G + 0.114 × B
wherein Y is the gray intensity value, R is the red intensity value, G is the green intensity value, and B is the blue intensity value; the original formula figure is not reproduced here, so the common BT.601 weights, consistent with the OpenCV grayscale conversion used above, are shown;
setting the red, green and blue intensity values of the endoscope image data to the gray intensity value through the drawing software, obtaining the grayed patient endoscope image data;
initializing an array of length 256, marked as pixel_counts, to record the number of pixels of each gray level in the grayed patient endoscope image data, wherein each array index corresponds to one gray level (0-255);
traversing each pixel in the grayed patient endoscope image data, reading the gray level of each pixel and incrementing the corresponding count in pixel_counts, thereby acquiring the pixel count of each gray level; then, taking the gray level as the abscissa and the number of pixels of that gray level as the ordinate, establishing a gray level histogram, marked as the endoscope gray level histogram, please refer to fig. 3;
The specific code implementation process for counting the pixel number corresponding to each gray level is as follows:
pixel_counts = [0] * 256  # creates a 256-long list with an initial value of 0
# image is the grayed patient endoscope image as a 2-D array of gray values
for i in range(image_height):
    for j in range(image_width):
        gray_value = image[i][j]  # acquires the gray value of the pixel
        pixel_counts[gray_value] += 1
What needs to be explained here is: the drawing software referred to herein is specifically defined as Adobe Photoshop;
the affected area positioning unit classifies abnormal detection points according to the endoscope gray level histogram and the endoscope image data of the patient, and specifically comprises the following steps:
setting a first characteristic pixel number threshold line and a second characteristic pixel number threshold line in the endoscope gray level histogram, marking the gray level corresponding to a gray square column higher than the first characteristic pixel number threshold line as a first height interval, marking the gray level corresponding to a gray square column lower than the second characteristic pixel number threshold line as a second height interval, and marking the gray level corresponding to a gray square column between the first characteristic pixel number threshold line and the second characteristic pixel number threshold line as a third height interval;
judging the gray level corresponding to the first height interval and the second height interval as an abnormal gray level;
judging the gray level corresponding to the third height interval as a normal gray level;
the numbers of pixels corresponding to the first characteristic pixel number threshold line and the second characteristic pixel number threshold line are both greater than 0, and the first characteristic pixel number threshold line is higher than the second characteristic pixel number threshold line;
marking the pixel points corresponding to the abnormal gray level as abnormal detection points in the patient endoscope image data;
marking the pixel points corresponding to the normal gray level as normal detection points in the endoscope image data of the patient;
defining a judgment result of the detection point according to the gray level as detection point grading data;
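For illustration, the sketch below classifies detection points from the histogram; the two characteristic pixel number threshold values are hypothetical examples, not values fixed by the invention:
import numpy as np

def classify_detection_points(image, pixel_counts,
                              first_threshold=5000, second_threshold=50):
    # image: 2-D uint8 grayed patient endoscope image; pixel_counts: the 256 gray level counts;
    # gray levels whose column lies above the first threshold line (first height interval)
    # or below the second threshold line (second height interval) are abnormal gray levels,
    # and columns in between (third height interval) are normal gray levels
    counts = np.asarray(pixel_counts)
    abnormal_levels = (counts > first_threshold) | (counts < second_threshold)
    # map every pixel to the judgment of its gray level:
    # True marks an abnormal detection point, False a normal detection point
    return abnormal_levels[image]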
the reflected light module analyzes the detection points according to the detection point grading data to obtain abnormal detection point grading data;
the reflected light module comprises a spectrum sensor, and the spectrum sensor comprises a spectrum transmitting end, a spectrum receiving end and a detection end;
the reflected light intensity value and the normal reflected light intensity value of each abnormal detection point are obtained through the spectrum sensor;
the reflected light intensity value of a detection point is acquired as follows:
transmitting a spectrum to a detection point through the spectrum transmitting end, acquiring the spectrum reflected by the abnormal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end, thereby acquiring the reflected light intensity value of each abnormal detection point;
transmitting a spectrum to a normal detection point through the spectrum transmitting end, acquiring the spectrum reflected by the normal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this process to acquire the reflected light intensity values of j normal detection points, calculating the average of these j values, and marking it as the normal reflected light intensity value;
randomly selecting the reflected light intensity value of one abnormal detection point, marking it as the first reflected light intensity value, and calculating with the normal reflected light intensity value through the reflected light intensity difference calculation formula to obtain the reflected light intensity difference value of that abnormal detection point;
the reflected light intensity difference calculation formula is specifically configured as:
Fs = |Fsg - Fsz|
wherein Fs is the reflected light intensity difference value of the abnormal detection point, Fsg is the first reflected light intensity value, and Fsz is the normal reflected light intensity value; the formula figure is not reproduced here, and the absolute-difference form above is reconstructed from the worked example below (85 - 70 = 15);
repeating the above process, the reflected light intensity difference value of each abnormal detection point is calculated;
what needs to be explained here is: the spectrum sensor may differ in spectrum wave band and spectrum color according to the clinical symptoms of the patient; for example, a shorter spectrum wave band is used for patients with vasodilation, and a longer spectrum wave band for patients with inflammation;
the abnormal detection points are graded according to their reflected light intensity difference values to obtain the abnormal detection point grading data, specifically as follows:
obtaining a reflected light intensity difference value threshold, and numerically comparing the reflected light intensity difference value with the threshold;
when the reflected light intensity difference value is greater than or equal to the threshold, judging that it belongs to the first abnormal interval;
when the reflected light intensity difference value is smaller than the threshold, judging that it belongs to the second abnormal interval;
defining the abnormal detection points corresponding to the first abnormal interval and the second abnormal interval as the abnormal detection point grading data;
wherein the abnormal detection points corresponding to the first abnormal interval are affected area detection points, and the abnormal detection points corresponding to the second abnormal interval are conventional detection points;
what needs to be explained here is:
the reflected light intensity difference value threshold involved in this embodiment is based on the difference between the reflected light intensity value of a normal detection point and that of an abnormal detection point, and is specifically set to 12.5 units here;
assuming the patient suffers from a gastric ulcer, the threshold expresses the difference in reflected light intensity values between normal and abnormal tissue, specifically set to 12.5 units here;
specifically, when a spectrum with a wavelength of 650 nanometers irradiates an abnormal detection point, the first reflected light intensity value is 70 units and the normal reflected light intensity value is 85 units, so the reflected light intensity difference value of the abnormal detection point is 15 units; with the reflected light intensity difference value threshold set to 12.5 units, since 15 is greater than 12.5, the abnormal detection point falls in the first abnormal interval; 'units' here denote abstract values of the reflected light intensity;
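The following sketch (with hypothetical values mirroring the 650 nanometer example above) reproduces the difference calculation and the two-interval grading:
REFLECTED_DIFF_THRESHOLD = 12.5  # reflected light intensity difference value threshold, in units

def grade_abnormal_points(abnormal_intensities, normal_intensities):
    # normal reflected light intensity value: average over the j normal detection points
    fsz = sum(normal_intensities) / len(normal_intensities)
    grading = []
    for fsg in abnormal_intensities:
        fs = abs(fsg - fsz)  # reflected light intensity difference value
        if fs >= REFLECTED_DIFF_THRESHOLD:
            interval = 'first abnormal interval (affected area detection point)'
        else:
            interval = 'second abnormal interval (conventional detection point)'
        grading.append((fsg, fs, interval))
    return grading

# worked example from the text: first reflected light intensity 70 units,
# normal reflected light intensity 85 units, difference 15 > 12.5,
# so the point falls in the first abnormal interval
print(grade_abnormal_points([70], [85]))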
the reflected light module acquires the abnormal detection point grading data and transmits it to the affected area detection module;
the affected area detection module analyzes the reflected light intensity according to the abnormal detection point grading data to obtain the affected area labeling image, and synchronizes the affected area labeling image to the endoscope display;
obtaining the average reflected light intensity value of the abnormal detection points corresponding to the second abnormal interval, and marking it as the reflected light intensity normal value;
acquiring the grayed patient endoscope image, dividing it by unit area into m grayed endoscope image samples, marked respectively as the first endoscope image sample, the second endoscope image sample, the third endoscope image sample, ..., the m-th endoscope image sample;
counting the reflected light intensity difference value corresponding to each pixel point in the first endoscope image sample, marked respectively as the first reflected light intensity difference value, the second reflected light intensity difference value, the third reflected light intensity difference value, ..., the n-th reflected light intensity difference value;
calculating with the reflected light intensity normal value and the first, second, ..., n-th reflected light intensity difference values through the reflected light intensity dispersion coefficient calculation formula to obtain the reflected light intensity dispersion coefficient of the first endoscope image sample;
the reflected light intensity dispersion coefficient calculation formula is specifically configured as:
Fsx = sqrt( ( (Fs1 - Fsz)^2 + (Fs2 - Fsz)^2 + ... + (Fsn - Fsz)^2 ) / n )
wherein Fsx is the reflected light intensity dispersion coefficient, Fs1 is the first reflected light intensity difference value, Fs2 is the second reflected light intensity difference value, Fsn is the n-th reflected light intensity difference value, and Fsz is the normal reflected light intensity difference value; the formula figure is not reproduced here, and a root-mean-square deviation form consistent with the stated variables is shown as a reconstruction;
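A short numeric sketch of this coefficient under the reconstructed root-mean-square form noted above (the sample values are hypothetical):
import math

def dispersion_coefficient(diffs, fsz):
    # diffs: the reflected light intensity difference values Fs1..Fsn of one image sample;
    # fsz: the normal reflected light intensity difference value
    n = len(diffs)
    return math.sqrt(sum((fs - fsz) ** 2 for fs in diffs) / n)

# hypothetical sample: per-pixel difference values scattered around a normal value of 2.0
print(dispersion_coefficient([15.0, 14.2, 1.8, 2.3, 16.1], 2.0))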
counting the number of abnormal detection points in the first abnormal interval within the first endoscope image sample;
calculating with the number of abnormal detection points and the reflected light intensity dispersion coefficient through the affected area judgment coefficient calculation formula to obtain the affected area judgment coefficient corresponding to the first endoscope image sample;
the affected area judgment coefficient calculation formula is specifically configured as:
Hp = a1 × Fsx × Yjs
wherein Hp is the affected area judgment coefficient, Fsx is the reflected light intensity dispersion coefficient, Yjs is the abnormal detection point number value, and a1 is a set proportionality coefficient greater than 0; the formula figure is not reproduced here, and the product form above is reconstructed from the worked example below (0.000011 × 11 × 20000 = 2.42);
what needs to be explained here is: the larger the reflected light intensity dispersion coefficient, the more uniformly the reflected light is distributed across multiple angles, which indicates a relatively rough surface; in the same way, the more dispersed the reflected light inside the patient, the more complex or rough the surface texture at the corresponding position inside the patient;
acquiring an affected area judgment coefficient threshold value, and comparing the value of an affected area judgment coefficient corresponding to the first endoscope image sample with the value of the affected area judgment coefficient threshold value to obtain affected area judgment data corresponding to the first endoscope image sample;
threshold judgment is carried out on the affected area judgment coefficient corresponding to the first endoscope image sample, and the method specifically comprises the following steps:
when the affected area judgment coefficient is greater than or equal to the affected area judgment coefficient threshold value, judging that the area corresponding to the first endoscope image sample is an affected area;
when the judging coefficient of the affected area is smaller than the judging coefficient threshold value of the affected area, judging that the area corresponding to the first endoscope image sample is a normal area;
what needs to be explained here is:
the affected area judgment coefficient threshold is a preset value used to determine the affected area in the endoscope image from the reflected light intensity dispersion coefficient and the abnormal detection point number value; here the affected area judgment coefficient threshold is set to 2.1;
when the reflected light intensity dispersion coefficient corresponding to the first endoscope image sample is 11 units, the number of abnormal detection points corresponding to the first endoscope image sample is 20000, and the proportionality coefficient a1 is 0.000011, the affected area judgment coefficient is 2.42; since 2.42 is greater than 2.1, the first endoscope image sample is judged to be an affected area;
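This worked example can be checked directly with the reconstructed product form (the function and variable names are illustrative only):
def affected_area_judgment(fsx, yjs, a1=0.000011, threshold=2.1):
    # Hp = a1 * Fsx * Yjs, compared against the affected area judgment coefficient threshold
    hp = a1 * fsx * yjs
    return hp, ('affected area' if hp >= threshold else 'normal area')

# values from the text: Fsx = 11 units, Yjs = 20000 abnormal detection points
print(affected_area_judgment(11, 20000))  # approximately (2.42, 'affected area')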
repeating the above process, affected area analysis is carried out on the second endoscope image sample, the third endoscope image sample, ..., the m-th endoscope image sample, obtaining the affected area judgment data corresponding to each endoscope image sample;
synthesizing the affected areas in the endoscope image samples to obtain the affected area image;
acquiring the patient endoscope image, performing layer coverage on the patient endoscope image data using an AI auxiliary image processing tool, marking the affected areas above the covering layer, defining the marked patient endoscope image as the affected area labeling image, and synchronizing it to the endoscope display;
What needs to be explained here is: the AI-assisted image processing tool referred to herein is OpenCV;
in the present application, where calculation formulas appear, they are all dimensionless numerical calculations; the sizes of the weight coefficients, proportionality coefficients and other coefficients in the formulas are result values obtained by quantizing each parameter, which is valid as long as the proportional relation between the parameters and the result values is not affected.
Example two
Based on the same inventive concept, an endoscope image processing method based on AI auxiliary image processing information is now proposed; referring to fig. 1, the method comprises the following steps:
step S1: acquiring endoscopic image data of a patient;
step S11: slowly inserting an endoscope probe into an area to be inspected in a patient, starting an imaging function of the endoscope, and obtaining a continuous image video stream;
step S12: calibrating a target tissue in an image video stream, obtaining the duty ratio of the target tissue in an endoscope image video stream, marking the duty ratio as a target tissue image duty ratio, and adjusting the target tissue image duty ratio;
step S121: determining a target tissue according to the image video stream, acquiring an HSV value interval of the target tissue by using an AI image model, and marking the HSV value interval as a sample HSV value interval;
step S122: refining the endoscope image video stream into i image units through the image display, where i is the total number of image units; acquiring the HSV value of each image unit in real time through the HSB color model, and judging an image unit to be a target tissue image unit when its HSV value falls within the sample HSV value interval;
step S123: counting the number of image units occupied by the target tissue image area, marking it as the target image unit number, and taking the ratio of the target image unit number to the total number of image units as the target tissue image duty ratio;
step S124: acquiring a first calibration image duty ratio and a second calibration image duty ratio, and respectively carrying out numerical comparison on the first calibration image duty ratio, the second calibration image duty ratio and the target tissue image duty ratio;
step S125: when the target tissue image duty ratio is greater than or equal to the first calibration image duty ratio and less than or equal to the second calibration image duty ratio, judging that it lies in the first image duty ratio interval;
step S126: when the target tissue image duty ratio is smaller than the first calibration image duty ratio, judging that it lies in the second image duty ratio interval; the focal length of the endoscope is then automatically adjusted to increase the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval;
step S127: when the target tissue image duty ratio is greater than the second calibration image duty ratio, judging that it lies in the third image duty ratio interval; the focal length of the endoscope is then automatically adjusted to reduce the target tissue image duty ratio until the target tissue image lies in the first image duty ratio interval;
step S13: establishing an image classification model, and intercepting an image video stream frame by frame to obtain image intercepting data;
step S14: carrying out definition division on the image interception data to obtain patient endoscope image data;
step S15: calculating Laplacian variance of each frame of image as a definition score of each frame of image by using a Python programming language in combination with OpenCV;
step S16: setting a definition score threshold and comparing it with the definition score of each frame; when the definition score is greater than the threshold, the frame image is stored, and when the definition score is smaller than the threshold, the frame image is deleted;
step S17: defining the images retained by the image classification model as the patient endoscope image data;
step S2: acquiring detection point grading data according to the image data of the endoscope of the patient;
step S21: an endoscope gray level histogram is established according to the image data of the endoscope of a patient, and the method comprises the following specific steps:
Step S211: acquiring endoscopic image data of a patient;
step S212: using drawing software to acquire the RGB intensity values of the patient endoscope image data, divided into a red intensity value, a green intensity value and a blue intensity value;
step S213: converting the red intensity value, the green intensity value and the blue intensity value of the patient endoscope image data into gray intensity values by calculation;
step S214: setting the red, green and blue intensity values of the endoscope image data to the gray intensity value through the drawing software, obtaining the grayed patient endoscope image data;
step S215: initializing an array of length 256, marked as pixel_counts, to record the number of pixels of each gray level in the grayed patient endoscope image data, wherein each array index corresponds to one gray level (0-255);
step S216: traversing each pixel in the grayed patient endoscope image data, reading the gray level of each pixel and incrementing the corresponding count in pixel_counts, thereby acquiring the pixel count of each gray level; then, taking the gray level as the abscissa and the number of pixels of that gray level as the ordinate, establishing a gray level histogram, marked as the endoscope gray level histogram (see fig. 3);
step S22: classifying the abnormal detection points according to the endoscope gray level histogram and the patient endoscope image data, with the following specific steps:
step S221: setting a first characteristic pixel number threshold line and a second characteristic pixel number threshold line in the endoscope gray level histogram, marking the gray level corresponding to a gray square column higher than the first characteristic pixel number threshold line as a first height interval, marking the gray level corresponding to a gray square column lower than the second characteristic pixel number threshold line as a second height interval, and marking the gray level corresponding to a gray square column between the first characteristic pixel number threshold line and the second characteristic pixel number threshold line as a third height interval;
step S222: judging the gray level corresponding to the first height interval and the second height interval as abnormal gray level, and judging the gray level corresponding to the third height interval as normal gray level;
step S223: marking the pixel points corresponding to the abnormal gray level as abnormal detection points in the patient endoscope image data;
step S224: marking the pixel points corresponding to the normal gray level as normal detection points in the endoscope image data of the patient;
Step S225: defining a judgment result of the detection point according to the gray level as detection point grading data;
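A sketch of steps S221-S224 is given below; both characteristic pixel number threshold values are illustrative assumptions, not values fixed by the method:

import numpy as np

def mark_abnormal_points(gray_image, pixel_counts,
                         first_threshold=5000, second_threshold=50):
    """Steps S221-S224: classify gray levels by the two threshold lines and
    return a boolean mask of abnormal detection points."""
    levels = np.arange(256)
    # First height interval: histogram bars above the first threshold line.
    # Second height interval: histogram bars below the second threshold line.
    abnormal_levels = levels[(pixel_counts > first_threshold) |
                             (pixel_counts < second_threshold)]
    # Pixels at an abnormal gray level are abnormal detection points (S223);
    # all remaining pixels are normal detection points (S224).
    return np.isin(gray_image, abnormal_levels)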
step S3: analyzing the endoscope image to obtain abnormal detection point grading data;
step S31: obtaining a reflected light intensity value and a normal reflected light intensity value of each abnormal detection point through a spectrum sensor, which comprises the following specific steps:
step S311: transmitting a spectrum to a detection point through the spectrum transmitting end, acquiring the spectrum reflected by the abnormal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this operation to acquire the reflected light intensity value of each abnormal detection point;
step S312: transmitting a spectrum to a normal detection point through the spectrum transmitting end, acquiring the spectrum reflected by the normal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this process to acquire the reflected light intensity values of j normal detection points respectively, calculating the average of the reflected light intensity values of the j normal detection points, and marking the average as the normal reflected light intensity value;
step S313: randomly selecting the reflected light intensity value of one abnormal detection point, marking the value as a first reflected light intensity value, and calculating the first reflected light intensity value against the normal reflected light intensity value to obtain the reflected light intensity difference value of that abnormal detection point;
step S314: repeating step S313 to calculate the reflected light intensity difference value of each abnormal detection point;
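The calculation of steps S312-S314 can be sketched as below; treating the difference as an absolute difference is an assumption, since the method only states that the two values are "calculated" against each other:

import numpy as np

def intensity_differences(abnormal_values, normal_values):
    """Steps S312-S314: one reflected light intensity difference value per
    abnormal detection point, relative to the normal reflected light intensity."""
    normal_value = float(np.mean(normal_values))   # average over j normal points (S312)
    return [abs(v - normal_value) for v in abnormal_values]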
step S315: grading the abnormal detection points according to the reflected light intensity difference values of the abnormal detection points to obtain abnormal detection point grading data, which comprises the following specific steps:
step S3151: obtaining a reflected light intensity difference value threshold, and carrying out numerical comparison between the reflected light intensity difference value and the threshold;
step S3152: when the reflected light intensity difference value is larger than or equal to the reflected light intensity difference value threshold, judging it to be a first abnormal interval;
step S3153: when the reflected light intensity difference value is smaller than the reflected light intensity difference value threshold, judging it to be a second abnormal interval;
step S3154: defining the abnormal detection points respectively corresponding to the first abnormal interval and the second abnormal interval as abnormal detection point grading data;
step S3155: marking the abnormal detection points corresponding to the first abnormal interval as affected area detection points, and marking the abnormal detection points corresponding to the second abnormal interval as conventional detection points;
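Steps S3151-S3155 reduce to a single threshold comparison; the threshold value used here is an illustrative assumption:

def grade_abnormal_points(differences, diff_threshold=30.0):
    """Steps S3151-S3155: split abnormal detection points into affected area
    detection points (first abnormal interval) and conventional detection
    points (second abnormal interval)."""
    affected, conventional = [], []
    for index, diff in enumerate(differences):
        if diff >= diff_threshold:
            affected.append(index)      # first abnormal interval
        else:
            conventional.append(index)  # second abnormal interval
    return affected, conventional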
step S4: analyzing the reflected light intensity according to the abnormal detection point grading data to obtain an affected area labeling image, and synchronizing the affected area labeling image to the endoscope display;
step S41: obtaining the average reflected light intensity value of the abnormal detection points corresponding to the second abnormal interval, and marking the average as the reflected light intensity normal value;
step S42: obtaining a reflected light intensity dispersibility coefficient of the first endoscope image sample, which comprises the following specific steps:
step S421: acquiring the grayed patient endoscope image, dividing it into m grayed endoscope image samples by unit area, and marking them respectively as a first endoscope image sample, a second endoscope image sample, a third endoscope image sample, ..., an mth endoscope image sample;
step S422: counting the reflected light intensity difference value corresponding to each pixel point in the first endoscope image sample, and marking the difference values respectively as a first reflected light intensity difference value, a second reflected light intensity difference value, a third reflected light intensity difference value, ..., an nth reflected light intensity difference value;
step S423: calculating the reflected light intensity normal value together with the first, second, third, ..., nth reflected light intensity difference values to obtain the reflected light intensity dispersibility coefficient of the first endoscope image sample;
step S43: counting the number of abnormal detection points in the first abnormal interval within the first endoscope image sample;
step S44: calculating the reflected light intensity dispersibility coefficient together with the number of abnormal detection points to obtain the affected area judgment coefficient corresponding to the first endoscope image sample;
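The method does not give explicit formulas for steps S423 and S44, so the sketch below models the reflected light intensity dispersibility coefficient as the root-mean-square deviation of a sample's difference values from the reflected light intensity normal value, and the affected area judgment coefficient as a weighted sum with the abnormal point count; both modelling choices and the weights w1, w2 are assumptions:

import math

def dispersibility_coefficient(sample_differences, normal_value):
    """Step S423 (assumed form): RMS deviation of the sample's reflected
    light intensity difference values from the normal value."""
    n = len(sample_differences)
    return math.sqrt(sum((d - normal_value) ** 2 for d in sample_differences) / n)

def judgment_coefficient(dispersibility, abnormal_point_count, w1=1.0, w2=0.5):
    """Step S44 (assumed form): combine dispersibility and the number of
    abnormal detection points into an affected area judgment coefficient."""
    return w1 * dispersibility + w2 * abnormal_point_count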
step S45: obtaining an affected area judgment coefficient threshold, and carrying out numerical comparison between the affected area judgment coefficient corresponding to the first endoscope image sample and the threshold to obtain affected area judgment data corresponding to the first endoscope image sample, which comprises the following specific steps:
step S451: when the affected area judgment coefficient is larger than or equal to the affected area judgment coefficient threshold, judging the area corresponding to the first endoscope image sample to be an affected area;
step S452: when the affected area judgment coefficient is smaller than the affected area judgment coefficient threshold, judging the area corresponding to the first endoscope image sample to be a normal area;
step S46: repeating the above process to carry out affected area analysis on the second, third, ..., mth endoscope image samples respectively, and obtaining the affected area judgment data corresponding to each endoscope image sample;
step S47: acquiring the patient endoscope image, covering the patient endoscope image data with an image layer using an AI auxiliary image processing tool, marking the affected areas on the covering layer of the patient endoscope image, defining the marked patient endoscope image as the affected area labeling image, and synchronizing the affected area labeling image to the endoscope display.
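Step S47 can be sketched with OpenCV as the AI auxiliary image processing tool; treating the samples as square tiles of fixed size and using a red, semi-transparent covering layer are illustrative assumptions:

import cv2

def mark_affected_areas(patient_image, affected_tiles, tile_size=64, alpha=0.4):
    """Step S47: cover the patient endoscope image with an image layer and
    mark the affected area samples on it."""
    overlay = patient_image.copy()
    for row, col in affected_tiles:            # (row, col) indices of affected samples
        x0, y0 = col * tile_size, row * tile_size
        cv2.rectangle(overlay, (x0, y0), (x0 + tile_size, y0 + tile_size),
                      (0, 0, 255), -1)         # filled red patch on the layer
    # Blend the covering layer with the original image; the result is the
    # affected area labeling image to be synchronized to the endoscope display.
    return cv2.addWeighted(overlay, alpha, patient_image, 1 - alpha, 0)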
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. An endoscopic image processing method based on AI-assisted image processing information, comprising:
step S1: acquiring an image video stream through an endoscope probe, acquiring a target tissue image duty ratio according to the image video stream, adjusting the target tissue image duty ratio, establishing an image classification model, intercepting the image video stream, and carrying out definition division to obtain patient endoscope image data;
step S2: establishing an endoscope gray level histogram according to the patient endoscope image data, and classifying abnormal detection points according to the endoscope gray level histogram and the patient endoscope image data to obtain detection point classification data;
step S3: acquiring the detection point grading data, transmitting a spectrum to the detection points, acquiring a reflected light intensity value and a normal reflected light intensity value of each abnormal detection point, obtaining an abnormal detection point reflected light intensity difference value through the reflected light intensity value and the normal reflected light intensity value of the abnormal detection point, and performing threshold judgment on the abnormal detection point reflected light intensity difference value to obtain abnormal detection point grading data;
step S4: obtaining a reflected light intensity dispersibility coefficient according to the abnormal detection point grading data, obtaining an affected area judgment coefficient according to the number of abnormal detection points and the reflected light intensity dispersibility coefficient, marking the affected area according to the affected area judgment coefficient and the patient endoscope image to obtain an affected area labeling image, and synchronizing the affected area labeling image to the endoscope display.
2. The endoscopic image processing method based on AI-assisted image processing information of claim 1, wherein said step S1 comprises the following specific steps:
step S11: slowly inserting an endoscope probe into an area to be inspected in a patient, starting an imaging function of the endoscope, and obtaining a continuous image video stream;
step S12: calibrating a target tissue in the image video stream, obtaining the duty ratio of the target tissue in the endoscope image video stream, marking the duty ratio as the target tissue image duty ratio, and adjusting the target tissue image duty ratio;
step S13: establishing an image classification model, and intercepting the image video stream frame by frame to obtain image interception data;
step S14: carrying out definition division on the image interception data to obtain patient endoscope image data;
step S15: calculating the Laplacian variance of each frame image as the definition score of that frame, using the Python programming language in combination with OpenCV;
step S16: setting a definition score threshold and comparing it with the definition score of each frame; when the definition score is larger than the threshold, the frame image is stored, and when the definition score is smaller than the threshold, the frame image is deleted;
step S17: the images retained by the image classification model are defined as patient endoscope image data.
3. The endoscopic image processing method based on AI-assisted image processing information of claim 2, wherein said step S12: adjusting the target tissue image duty ratio comprises the following specific steps:
Step S121: determining a target tissue according to the image video stream, acquiring an HSV value interval of the target tissue by using an AI image model, and marking the HSV value interval as a sample HSV value interval;
step S122: refining the endoscope image video stream into i image units through the image display, acquiring the HSV value of each image unit in real time through the HSB color model, and judging an image unit to be a target tissue image unit when its HSV value is in the sample HSV value interval;
step S123: counting the number of image units occupied by the target tissue image area, marking it as the target image unit number, and taking the ratio of the target image unit number to the total number of image units as the target tissue image duty ratio;
step S124: acquiring a first calibration image duty ratio and a second calibration image duty ratio, and respectively carrying out numerical comparison on the first calibration image duty ratio, the second calibration image duty ratio and the target tissue image duty ratio;
step S125: when the target tissue image duty ratio is larger than or equal to the first calibration image duty ratio and smaller than or equal to the second calibration image duty ratio, judging it to be in a first image duty ratio interval;
step S126: when the target tissue image duty ratio is smaller than the first calibration image duty ratio, judging it to be in a second image duty ratio interval; at this moment the focal length of the endoscope is automatically adjusted to increase the target tissue image duty ratio until the target tissue image is in the first image duty ratio interval;
step S127: when the target tissue image duty ratio is larger than the second calibration image duty ratio, judging it to be in a third image duty ratio interval; the focal length of the endoscope is then automatically adjusted to reduce the target tissue image duty ratio until the target tissue image is in the first image duty ratio interval.
4. The endoscopic image processing method based on AI-assisted image processing information of claim 1, wherein said step S2 comprises the following specific steps:
step S21: establishing an endoscope gray level histogram according to the endoscope image data of the patient;
step S22: classifying abnormal detection points according to the endoscope gray level histogram and the patient endoscope image data;
the method for establishing the endoscope gray level histogram comprises the following specific steps:
step S211: acquiring the patient endoscope image data;
step S212: acquiring the RGB intensity values of the patient endoscope image data through drawing software, and dividing the RGB intensity values into red intensity values, green intensity values and blue intensity values;
step S213: converting the red, green and blue intensity values of the patient endoscope image data into gray intensity values by calculation;
step S214: setting the red, green and blue intensity values of the endoscope image data to the gray intensity values through the drawing software to obtain grayed patient endoscope image data;
step S215: initializing an array with a length of 256, marking the array as pixel_counts, and using it to record the pixel number of each gray level in the grayed patient endoscope image data, wherein each element of the array corresponds to one gray level (0-255);
step S216: traversing each pixel in the grayed patient endoscope image data and reading the gray level of that pixel; each time a gray level is read, increasing the count at the corresponding position of pixel_counts, thereby obtaining the pixel count of each gray level in the grayed patient endoscope image data; taking the gray level as the abscissa and the corresponding pixel number as the ordinate, establishing a gray level histogram and marking it as the endoscope gray level histogram.
5. The endoscopic image processing method based on AI-assisted image processing information of claim 4, wherein said step S22: classifying the abnormal detection points comprises the following specific steps:
step S221: setting a first characteristic pixel number threshold line and a second characteristic pixel number threshold line in the endoscope gray level histogram; marking the gray levels whose histogram bars are higher than the first characteristic pixel number threshold line as a first height interval, marking the gray levels whose histogram bars are lower than the second characteristic pixel number threshold line as a second height interval, and marking the gray levels whose histogram bars lie between the two threshold lines as a third height interval;
step S222: judging the gray levels corresponding to the first height interval and the second height interval to be abnormal gray levels, and judging the gray levels corresponding to the third height interval to be normal gray levels;
step S223: marking the pixel points corresponding to the abnormal gray levels in the patient endoscope image data as abnormal detection points;
step S224: marking the pixel points corresponding to the normal gray levels in the patient endoscope image data as normal detection points;
step S225: defining the judgment results of the detection points according to gray level as detection point grading data.
6. The endoscopic image processing method based on AI-assisted image processing information of claim 1, wherein said step S3 comprises the following specific steps:
step S31: obtaining a reflected light intensity value and a normal reflected light intensity value of each abnormal detection point through a spectrum sensor;
the method for obtaining the reflected light intensity values comprises the following specific steps:
step S311: transmitting a spectrum to a detection point through the spectrum transmitting end, acquiring the spectrum reflected by the abnormal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this operation to acquire the reflected light intensity value of each abnormal detection point;
step S312: transmitting a spectrum to a normal detection point through the spectrum transmitting end, acquiring the spectrum reflected by the normal detection point through the spectrum receiving end, and acquiring the reflected light intensity value of the reflected spectrum through the detection end; repeating this process to acquire the reflected light intensity values of j normal detection points respectively, calculating the average of the reflected light intensity values of the j normal detection points, and marking the average as the normal reflected light intensity value;
step S313: randomly selecting the reflected light intensity value of one abnormal detection point, marking the value as a first reflected light intensity value, and calculating the first reflected light intensity value against the normal reflected light intensity value to obtain the reflected light intensity difference value of that abnormal detection point;
step S314: repeating step S313 to calculate the reflected light intensity difference value of each abnormal detection point;
step S315: grading the abnormal detection points according to the reflected light intensity difference values of the abnormal detection points to obtain abnormal detection point grading data.
7. The endoscopic image processing method based on AI-assisted image processing information of claim 6, wherein said step S315 comprises the following specific steps:
step S3151: obtaining a reflected light intensity difference value threshold, and carrying out numerical comparison between the reflected light intensity difference value and the threshold;
step S3152: when the reflected light intensity difference value is larger than or equal to the reflected light intensity difference value threshold, judging it to be a first abnormal interval;
step S3153: when the reflected light intensity difference value is smaller than the reflected light intensity difference value threshold, judging it to be a second abnormal interval;
step S3154: defining the abnormal detection points respectively corresponding to the first abnormal interval and the second abnormal interval as abnormal detection point grading data;
step S3155: marking the abnormal detection points corresponding to the first abnormal interval as affected area detection points, and marking the abnormal detection points corresponding to the second abnormal interval as conventional detection points.
8. The endoscopic image processing method based on AI-assisted image processing information of claim 1, wherein said step S4: analyzing the reflected light intensity according to the abnormal detection point grading data to obtain the affected area image data comprises the following specific steps:
step S41: obtaining the average reflected light intensity value of the abnormal detection points corresponding to the second abnormal interval, and marking the average as the reflected light intensity normal value;
step S42: obtaining a reflected light intensity dispersibility coefficient of the first endoscope image sample;
step S43: counting the number of abnormal detection points in the first abnormal interval within the first endoscope image sample;
step S44: calculating the reflected light intensity dispersibility coefficient together with the number of abnormal detection points to obtain the affected area judgment coefficient corresponding to the first endoscope image sample;
step S45: obtaining an affected area judgment coefficient threshold, and carrying out numerical comparison between the affected area judgment coefficient corresponding to the first endoscope image sample and the threshold to obtain affected area judgment data corresponding to the first endoscope image sample;
the method for obtaining the affected area judgment data comprises the following specific steps:
step S451: when the affected area judgment coefficient is larger than or equal to the affected area judgment coefficient threshold, judging the area corresponding to the first endoscope image sample to be an affected area;
step S452: when the affected area judgment coefficient is smaller than the affected area judgment coefficient threshold, judging the area corresponding to the first endoscope image sample to be a normal area;
step S46: repeating steps S42-S45 to carry out affected area analysis on the second, third, ..., mth endoscope image samples respectively, and obtaining the affected area judgment data corresponding to each endoscope image sample;
step S47: acquiring the patient endoscope image, covering the patient endoscope image data with an image layer using an AI auxiliary image processing tool, marking the affected areas on the covering layer of the patient endoscope image, defining the marked patient endoscope image as the affected area labeling image, and synchronizing the affected area labeling image to the endoscope display.
9. The endoscopic image processing method based on AI-assisted image processing information of claim 8, wherein said step S42 comprises the following specific steps:
step S421: acquiring the grayed patient endoscope image, dividing it into m grayed endoscope image samples by unit area, and marking them respectively as a first endoscope image sample, a second endoscope image sample, a third endoscope image sample, ..., an mth endoscope image sample;
step S422: counting the reflected light intensity difference value corresponding to each pixel point in the first endoscope image sample, and marking the difference values respectively as a first reflected light intensity difference value, a second reflected light intensity difference value, a third reflected light intensity difference value, ..., an nth reflected light intensity difference value;
step S423: calculating the reflected light intensity normal value together with the first, second, third, ..., nth reflected light intensity difference values to obtain the reflected light intensity dispersibility coefficient of the first endoscope image sample.
10. An endoscope image processing system based on AI auxiliary image processing information, applied to the endoscope image processing method based on AI auxiliary image processing information according to any one of claims 1-9, characterized in that the image processing system comprises an image acquisition module, an abnormality detection module, a reflected light module, an affected area detection module and a server, specifically as follows:
the image acquisition module: acquiring the patient endoscope image data;
the abnormality detection module: acquiring detection point grading data according to the patient endoscope image data;
the reflected light module: analyzing the detection points according to the detection point grading data to obtain abnormal detection point grading data;
the affected area detection module: analyzing the reflected light intensity according to the abnormal detection point grading data to obtain the affected area image data.
CN202410058307.0A 2024-01-16 2024-01-16 Endoscope image processing method and system based on AI auxiliary image processing information Active CN117576097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410058307.0A CN117576097B (en) 2024-01-16 2024-01-16 Endoscope image processing method and system based on AI auxiliary image processing information

Publications (2)

Publication Number Publication Date
CN117576097A true CN117576097A (en) 2024-02-20
CN117576097B CN117576097B (en) 2024-03-22

Family

ID=89862832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410058307.0A Active CN117576097B (en) 2024-01-16 2024-01-16 Endoscope image processing method and system based on AI auxiliary image processing information

Country Status (1)

Country Link
CN (1) CN117576097B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant