CN116758058B - Data processing method, device, computer and storage medium - Google Patents
- Publication number: CN116758058B
- Application number: CN202311000492.XA
- Authority
- CN
- China
- Prior art keywords
- image
- key frame
- gastrointestinal
- calculating
- key
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0012—Biomedical image inspection
- G06T5/00—Image enhancement or restoration; G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10068—Endoscopic image
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30004—Biomedical image processing; G06T2207/30028—Colon; Small intestine; G06T2207/30092—Stomach; Gastric
Abstract
The invention provides a data processing method, a device, a computer and a storage medium, and relates to the technical field of data processing, wherein the method comprises the following steps: calculating the feature vector distance between the feature points on the two gastrointestinal mirror images according to the feature points, and marking the two gastrointestinal mirror images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value; processing the gastrointestinal mirror images between adjacent key frames to obtain supplemental key frames; inserting the supplemental key frames between the two marked key frames to obtain a key frame domain; identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing the gastrointestinal mirror image corresponding to the light spot area to obtain a repaired image; and carrying out convolution processing on the repaired image to obtain an equilibrium image. The invention has higher accuracy and reliability and can provide better gastrointestinal mirror image analysis and processing effects.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method, a data processing device, a computer, and a storage medium.
Background
Currently, gastroenteroscopy has become a clinically common screening and diagnostic method. As one of the main outputs of gastrointestinal endoscopy, gastrointestinal images are important for doctors to accurately judge and diagnose diseases. However, since the quality of gastrointestinal images is affected by various factors, such as light and noise, the images may suffer from artifacts, blurring, flare and other problems, which can hinder the doctor's diagnosis.
In response to these problems, some gastrointestinal image processing methods have been proposed to improve image quality and accuracy. These include image-enhancement-based methods, such as contrast enhancement and histogram equalization, which improve the visibility and detail of the image. However, such methods often handle fine structures such as light spots poorly and cannot adapt well to the variation among different gastrointestinal images.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a data processing method, a device, a computer and a storage medium, which can identify key frames according to the feature points and feature vector distances of the gastrointestinal mirror images and improve image quality through supplemental key frames and light spot area repair, and which have higher accuracy and reliability and can provide better gastrointestinal mirror image analysis and processing effects.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, a data processing method, the method comprising:
obtaining gastrointestinal mirror images, and sorting according to priority to obtain sorting data;
processing the sequencing data to obtain feature points on the gastrointestinal mirror image;
calculating the feature vector distance between the feature points on the two gastrointestinal mirror images according to the feature points, and marking the two gastrointestinal mirror images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
processing the gastrointestinal mirror images between adjacent key frames to obtain supplementary key frames;
inserting the supplemental keyframes between the marked two keyframes to obtain a keyframe domain;
identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing a gastrointestinal mirror image corresponding to the light spot area to obtain a repaired image;
and carrying out convolution processing on the repair image to obtain an equilibrium image.
Further, obtaining gastrointestinal images and sorting according to priority to obtain sorting data, including:
calculating a time weight by $w_t = e^{-(\alpha t/T + \beta S + \gamma C)}$, where $w_t$ is the time weight; $t$ is the difference between the acquisition time of the gastrointestinal mirror image and the current time; $T$ is the expected processing time of the gastrointestinal mirror image; $S$ is the size of the gastrointestinal mirror image; $C$ is the type of the gastrointestinal mirror image; $\alpha$, $\beta$ and $\gamma$ are attenuation parameters; and $e$ is the base of the natural logarithm;
calculating an urgency weight by $w_u = \lambda f(I) + u$, where $f(I)$ is an urgency function based on the gastrointestinal mirror image features; $u$ is an additional urgency factor; $\lambda$ is the urgency weight parameter; and $w_u$ is the urgency weight;
according to the time weight and the urgency weight, calculating the priority of the gastrointestinal mirror image by $P = \mu_1 w_t + \mu_2 w_u$, where $\mu_1$ and $\mu_2$ are both weight parameters and $P$ is the priority of the gastrointestinal mirror image;
according to the difference $\Delta A = P - A_0$, where $A_0$ is a limit value, sorting the gastrointestinal mirror images corresponding to the difference $\Delta A$ to obtain the sorting data.
Further, processing the ranking data to obtain feature points on the gastrointestinal mirror image includes:
preprocessing the gastrointestinal mirror image to obtain a smooth image;
carrying out Gaussian filtering on the smooth image for a plurality of times according to the smooth image so as to construct a scale space;
positioning each potential key point according to the scale space to obtain positioning key points;
assigning one or more directions to each location key point;
a feature descriptor is generated for each keypoint.
Further, calculating a feature vector distance between feature points on the two gastrointestinal mirror images according to the feature points, including:
according to $d_n = \sqrt{(\mathbf{v}_n - \mathbf{u}_n)^{N}\, W_n S_n^{-1} K_n\, (\mathbf{v}_n - \mathbf{u}_n)}$, calculating the distance between two feature points, where $\mathbf{v}_n$ and $\mathbf{u}_n$ are the feature vectors on the two gastrointestinal mirror images respectively; $W_n$ is a weight matrix; $S_n$ is a covariance matrix used for capturing statistical information in the feature vectors; $K_n$ is a spatial covariance matrix used for capturing the spatial relationship between the feature points; the superscript $N$ denotes the matrix transpose; $n$ denotes the time step; and $d_n$ is the distance between feature vector $\mathbf{v}_n$ and feature vector $\mathbf{u}_n$.
Further, processing the gastrointestinal mirror image between adjacent key frames to obtain a supplemental key frame includes:
respectively acquiring a first pixel $Z_1(x, y)$ from a first key frame $Z_1$ and a second pixel $Z_2(x, y)$ from a second key frame $Z_2$;
calculating the pixel values in the supplemental key frame by $Z(x, y) = (1 - \alpha) Z_1(x, y) + \alpha Z_2(x, y)$, where $\alpha$ is an interpolation factor with value range $0 \le \alpha \le 1$ and $Z(x, y)$ is the pixel value at coordinates $(x, y)$ in the supplemental key frame;
and calculating the supplemental key frame according to the pixel values in the supplemental key frame.
Further, inserting the supplemental key frame between the two marked key frames to obtain a key frame field, comprising:
determining the position of the supplemental key frame to be inserted;
calculating a key frame time interval between the marked two key frames;
calculating a time stamp of the supplementary key frame according to the key frame time interval;
a supplemental key frame is determined based on the timestamp of the supplemental key frame and inserted between the two marked key frames.
Further, identifying and determining the spot area in the key frame domain by a preset dynamic factor includes:
traversing each pixel in the key frame domain in turn;
calculating the difference between the brightness value of each pixel and a preset dynamic factor;
determining a light spot area according to the difference value;
and processing and confirming the light spot area to obtain a final light spot area.
In a second aspect, a data processing apparatus includes:
the acquisition module is used for acquiring the gastrointestinal mirror images and sequencing the gastrointestinal mirror images according to the priority to obtain sequencing data; processing the sequencing data to obtain feature points on the gastrointestinal mirror image; calculating the feature vector distance between the feature points on the two gastrointestinal mirror images according to the feature points, and marking the two gastrointestinal mirror images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
the processing module is used for processing the gastrointestinal mirror images between the adjacent key frames to obtain supplementary key frames; inserting the supplemental keyframes between the marked two keyframes to obtain a keyframe domain; identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing a gastrointestinal mirror image corresponding to the light spot area to obtain a repaired image; and carrying out convolution processing on the repair image to obtain an equilibrium image.
In a third aspect, a computer comprises:
one or more processors;
and a storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method.
In a fourth aspect, a computer readable storage medium has a program stored therein, which when executed by a processor, implements the method.
The scheme of the invention at least comprises the following beneficial effects:
according to the scheme, the key frames can be identified according to the feature points and the feature vector distances of the gastrointestinal mirror images, the image quality is improved by supplementing the key frames and the spot area repair, the accuracy and the reliability are higher, and better gastrointestinal mirror image analysis and processing effects can be provided.
Drawings
Fig. 1 is a flow chart of a data processing method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention proposes a data processing method, which includes the steps of:
step 11, obtaining gastrointestinal mirror images, and sorting according to priority to obtain sorting data;
step 12, processing the sequencing data to obtain feature points on the gastrointestinal mirror image;
step 13, calculating the feature vector distance between the feature points on the two gastrointestinal mirror images according to the feature points, and marking the two gastrointestinal mirror images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
step 14, processing the gastrointestinal mirror images between the adjacent key frames to obtain supplementary key frames;
step 15, inserting the supplementary key frame between the marked two key frames to obtain a key frame domain;
step 16, identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing a gastrointestinal mirror image corresponding to the light spot area to obtain a repaired image;
and step 17, carrying out convolution processing on the repair image to obtain an equilibrium image.
In step 11, gastrointestinal mirror image data sorted according to priority is obtained, which facilitates subsequent processing. In step 12, the sorted data is processed to extract the feature points in the gastrointestinal mirror images for further feature analysis. In step 13, the importance of each image is judged from the feature vector distance between feature points, and key frames are marked so that important image frames are distinguished from non-important ones. In step 14, the images between adjacent key frames are processed to extract supplemental key frames, which increases the integrity and accuracy of the image data by filling in the image sequence between key frames. In step 15, the supplemental key frames are inserted between the marked key frames to form a key frame domain, i.e. a continuous image sequence containing all the important information. In step 16, the light spot areas in the key frame domain are identified through the dynamic factor, and the images corresponding to these areas are repaired to remove the interference caused by flare, improving the quality and visual effect of the images. In step 17, the repaired image is convolved to improve its quality and smoothness, yielding an equilibrium image that is clearer, smoother and easier to analyze.
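A minimal sketch of the convolution processing of step 17, which turns the repaired image into the equilibrium image. The 3×3 averaging kernel and the edge-replication padding are assumptions; the patent does not specify a kernel:

```python
import numpy as np

def equalize(repaired, kernel=None):
    """Step 17: convolve the repaired image to obtain the equilibrium image."""
    if kernel is None:
        kernel = np.full((3, 3), 1.0 / 9.0)  # assumed smoothing kernel
    img = np.asarray(repaired, dtype=float)
    r = kernel.shape[0] // 2
    padded = np.pad(img, r, mode="edge")     # replicate edges so size is kept
    out = np.zeros_like(img)
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(kernel * padded[x:x + 2 * r + 1,
                                               y:y + 2 * r + 1])
    return out
```

Any normalized low-pass kernel could be substituted; the averaging kernel simply makes the smoothing effect of the convolution explicit.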
In a preferred embodiment of the present invention, the step 11 may include:
step 111, calculating a time weight by $w_t = e^{-(\alpha t/T + \beta S + \gamma C)}$, where $w_t$ is the time weight; $t$ is the difference between the acquisition time of the gastrointestinal mirror image and the current time; $T$ is the expected processing time of the gastrointestinal mirror image; $S$ is the size of the gastrointestinal mirror image; $C$ is the type of the gastrointestinal mirror image; $\alpha$, $\beta$ and $\gamma$ are attenuation parameters; and $e$ is the base of the natural logarithm;
step 112, calculating an urgency weight by $w_u = \lambda f(I) + u$, where $f(I)$ is an urgency function based on the gastrointestinal mirror image features; $u$ is an additional urgency factor; $\lambda$ is the urgency weight parameter; and $w_u$ is the urgency weight;
step 113, according to the time weight and the urgency weight, calculating the priority of the gastrointestinal mirror image by $P = \mu_1 w_t + \mu_2 w_u$, where $\mu_1$ and $\mu_2$ are both weight parameters and $P$ is the priority of the gastrointestinal mirror image;
step 114, according to the difference $\Delta A = P - A_0$, where $A_0$ is a limit value, sorting the gastrointestinal mirror images corresponding to the difference $\Delta A$ to obtain the sorting data.
In step 111, the time factor of the gastrointestinal mirror image acquisition is used to adjust the weight of the images acquired at different time points to reflect the timeliness and the emergency degree of the processing of the images, so as to obtain the weight considering the time factor for subsequent sorting and priority calculation. In step 112, the urgency is adjusted to reflect the processing priority of the image based on the features of the gastrointestinal images and additional factors of urgency, resulting in a weight that accounts for the urgency factor for subsequent ranking and priority calculation. In step 113, the priority of each gastrointestinal mirror image is calculated for subsequent sorting and processing according to the time factor and the emergency factor, and the priority of each gastrointestinal mirror image is obtained for subsequent sorting and processing. In step 114, the gastroscope images are ranked according to the computed priorities, and the images with high priorities are ranked in front for preparation for subsequent processing, so that the gastroscope image data ranked according to the priorities is obtained, and the subsequent processing is facilitated.
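The priority computation of steps 111 to 114 can be sketched as follows. This is a minimal Python sketch under stated assumptions: the exact combination of the terms, the default parameter values and the function names are illustrative, since the patent does not fix them:

```python
import math

def priority(t, T, S, C, f_urgency, u,
             lam=1.0, mu1=0.5, mu2=0.5,
             alpha=0.1, beta=0.01, gamma=0.05):
    """Combine a time weight and an urgency weight into one priority score."""
    w_t = math.exp(-(alpha * t / T + beta * S + gamma * C))  # time weight
    w_u = lam * f_urgency + u                                # urgency weight
    return mu1 * w_t + mu2 * w_u

def sort_images(images, limit=0.5):
    """Order images by how far their priority exceeds the limit value A0."""
    return sorted(images, key=lambda im: priority(**im) - limit, reverse=True)
```

With this shape, recently acquired, small, urgent images bubble to the front of the queue, which matches the intent described above.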
In a preferred embodiment of the present invention, the step 12 may include:
step 121, preprocessing the gastrointestinal mirror image by $L(x, y) = \sum_{i=-r}^{r} \sum_{j=-r}^{r} G(i, j)\, I(x + i, y + j)$ to obtain a smooth image, where $G(i, j) = \frac{1}{2\pi\sigma^2} e^{-(i^2 + j^2)/(2\sigma^2)}$ is the value of the Gaussian filter at discrete coordinates $(i, j)$; $\sigma$ is the standard deviation of the Gaussian filter; $I(x, y)$ is the intensity of the original image at coordinates $(x, y)$; $L(x, y)$ is the intensity of the smooth image at coordinates $(x, y)$; and $r$ is the radius of the Gaussian filter, over which $i$ and $j$ range;
step 122, performing gaussian filtering on the smoothed image for a plurality of times according to the smoothed image to construct a scale space;
step 123, positioning each potential key point according to the scale space to obtain a positioning key point;
step 124, assigning one or more directions to each positioning key point;
in step 125, a feature descriptor is generated for each keypoint.
In step 121, the noise and details in the image are reduced by smoothing, the interference of subsequent processing is reduced, more accurate features are extracted, and the smoothed image is obtained, so that the subsequent feature extraction and processing are facilitated. In step 122, feature extraction can be performed on the images on different scales by constructing a scale space, so as to improve the capability of detecting features in the images, and a series of scale space images are obtained for subsequent key point positioning and feature description. In step 123, the keypoints are located in order to extract local features of the image, which have invariance at different scales and rotations, resulting in keypoints with stability and repeatability for subsequent feature description and matching. In step 124, local direction information is provided for the keypoints, enhancing the descriptive power and robustness of the feature, and one or more primary directions are assigned to each keypoint for subsequent feature description and matching. In step 125, the local features of the keypoints are converted into numerical representations by generating feature descriptors, so that subsequent matching and recognition are facilitated, and feature descriptors of each keypoint are obtained for subsequent feature matching and image recognition.
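The smoothing of step 121 can be sketched directly from the Gaussian filter defined there (a pure-NumPy sketch; the edge-replication padding and the default radius and sigma are assumptions):

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    # G(i, j) = exp(-(i^2 + j^2) / (2*sigma^2)) / (2*pi*sigma^2)
    ax = np.arange(-radius, radius + 1)
    i, j = np.meshgrid(ax, ax, indexing="ij")
    g = np.exp(-(i**2 + j**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so flat regions keep their intensity

def smooth(image, radius=2, sigma=1.0):
    """L(x, y) = sum_{i,j} G(i, j) * I(x + i, y + j), with edges replicated."""
    g = gaussian_kernel(radius, sigma)
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.sum(g * padded[x:x + 2 * radius + 1,
                                          y:y + 2 * radius + 1])
    return out
```

Calling `smooth` repeatedly with increasing `sigma` produces the series of images that step 122 stacks into a scale space.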
In a preferred embodiment of the present invention, the step 13 may include:
step 131, according to $d_n = \sqrt{(\mathbf{v}_n - \mathbf{u}_n)^{N}\, W_n S_n^{-1} K_n\, (\mathbf{v}_n - \mathbf{u}_n)}$, calculating the distance between two feature points, where $\mathbf{v}_n$ and $\mathbf{u}_n$ are the feature vectors on the two gastrointestinal mirror images respectively; $W_n$ is a weight matrix; $S_n$ is a covariance matrix used for capturing statistical information in the feature vectors; $K_n$ is a spatial covariance matrix used for capturing the spatial relationship between the feature points; the superscript $N$ denotes the matrix transpose; $n$ denotes the time step; and $d_n$ is the distance between feature vector $\mathbf{v}_n$ and feature vector $\mathbf{u}_n$.
In step 131, the similarity or difference between the feature points is quantified by calculating the distance between them. In this step, the weight matrix $W_n$, the covariance matrix $S_n$ and the spatial covariance matrix $K_n$ are used so that the statistical information and the spatial relationships of the feature vectors are taken into account and the distance is calculated more accurately. Through this step, distances can be computed between the feature points in the gastrointestinal mirror images for further processing and analysis.
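A sketch of the distance of step 131. The composition of the three matrices into a single metric is an assumption; the point of the sketch is that with $W_n = S_n = K_n = I$ the measure reduces to the ordinary Euclidean distance, and with $W_n = K_n = I$ to the Mahalanobis distance:

```python
import numpy as np

def feature_distance(v, u, W, S, K):
    """Weighted distance between feature vectors v_n and u_n.

    W: weight matrix; S: covariance matrix (statistical information);
    K: spatial covariance matrix (spatial relationship).
    """
    d = np.asarray(v, dtype=float) - np.asarray(u, dtype=float)
    metric = W @ np.linalg.inv(S) @ K  # assumed composition of the matrices
    return float(np.sqrt(d @ metric @ d))
```

In practice `metric` should be positive definite, which holds when the three matrices are chosen as (inverse) covariances of the feature data.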
In a preferred embodiment of the present invention, the step 14 may include:
step 141, respectively acquiring a first pixel $Z_1(x, y)$ from a first key frame $Z_1$ and a second pixel $Z_2(x, y)$ from a second key frame $Z_2$;
step 142, calculating the pixel values in the supplemental key frame by $Z(x, y) = (1 - \alpha) Z_1(x, y) + \alpha Z_2(x, y)$, where $\alpha$ is an interpolation factor with value range $0 \le \alpha \le 1$ and $Z(x, y)$ is the pixel value at coordinates $(x, y)$ in the supplemental key frame;
step 143, calculating to obtain the supplemental key frame according to the pixel value in the supplemental key frame.
In step 141, pixel values of corresponding positions are obtained from the two key frames, and subsequent calculation and processing can be performed by obtaining the values of the pixels of the specific positions. In step 142, the value of the pixel at the particular location in the supplemental key frame is calculated based on the interpolation factor, and the pixel value in the supplemental frame may be generated by interpolation of the pixel value between the first key frame and the second key frame. In step 143, the pixel values of the complementary frames are combined to form a complementary key frame, and by generating a new frame of image data, i.e., the complementary key frame, based on the pixel values obtained by the interpolation, a smooth transition between the two key frames can be achieved and the blank area between them can be filled. Through these steps, supplemental frames can be generated to better handle and demonstrate image transitions between two keyframes.
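The interpolation of steps 141 to 143 is plain per-pixel linear blending and can be sketched in a few lines (vectorized over the whole frame; the function name is illustrative):

```python
import numpy as np

def supplemental_frame(z1, z2, alpha):
    """Z(x, y) = (1 - alpha) * Z1(x, y) + alpha * Z2(x, y) for every pixel,
    with the interpolation factor alpha in [0, 1]."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("interpolation factor must lie in [0, 1]")
    return (1.0 - alpha) * np.asarray(z1, dtype=float) \
        + alpha * np.asarray(z2, dtype=float)
```

Sweeping `alpha` from 0 to 1 generates a sequence of supplemental frames that fades smoothly from the first key frame to the second.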
In a preferred embodiment of the present invention, the step 15 may include:
step 151, determining the position of the complementary key frame to be inserted;
step 152, calculating a key frame time interval between the two marked key frames;
step 153, calculating the time stamp of the complementary key frame according to the key frame time interval;
step 154, determining a supplemental key frame according to the timestamp of the supplemental key frame, and inserting the supplemental key frame between the two marked key frames.
In step 151, by determining where the supplemental key frame should be inserted, the specific location of the supplemental key frame between the two key frames that have been marked can be determined, depending on the application requirements or user specifications. In step 152, the time interval between the two marked key frames is calculated, and by calculating the time difference between the key frames, the play order and time lapse of the key frames can be known. In step 153, the time stamp of the supplemental key frame is calculated based on the time interval between key frames, the time stamp referring to the time position of a given frame throughout the video sequence, by which it can be determined when the supplemental key frame should occur in the video sequence. In step 154, supplemental key frames are determined from the calculated time stamps and inserted between the two key frames that have been marked, by which a smooth transition effect can be achieved in the video sequence to better show and render the image or video changes. Through the steps, the position and time of the complementary key frames can be effectively processed and controlled, and the insertion and transition of the middle section of the video or image sequence can be realized, so that the visual experience and the expression effect are improved.
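The timestamp calculation of steps 152 and 153 can be sketched as follows. Even spacing of the supplemental frames is an assumption; the patent only requires that the timestamps be derived from the key-frame time interval:

```python
def supplemental_timestamps(t1, t2, count):
    """Timestamps for `count` supplemental key frames inserted between
    two marked key frames at times t1 and t2 (t1 < t2)."""
    interval = (t2 - t1) / (count + 1)  # split the key-frame time interval
    return [t1 + interval * (k + 1) for k in range(count)]
```

Each returned timestamp tells the player where in the sequence the corresponding supplemental frame should appear, which is what step 154 uses to place the frames.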
In a preferred embodiment of the present invention, the step 16 may include:
step 161, traversing each pixel in the key frame domain in turn;
step 162, calculating the difference between the brightness value of each pixel and the preset dynamic factor;
step 163, determining a light spot area according to the difference value;
and step 164, processing and confirming the light spot area to obtain a final light spot area.
In step 161, each pixel in the keyframe is traversed and the information for each pixel may be processed one by pixel traversal. In step 162, a difference between the brightness value of each pixel and a preset dynamic factor is calculated, where the dynamic factor is a preset threshold or rule for determining whether the pixel belongs to the spot area, and by calculating the difference, the degree of difference between the brightness of the pixel and the dynamic factor can be estimated. In step 163, a spot area is determined based on the calculated difference, and if the difference between the luminance of the pixel and the dynamic factor exceeds a preset threshold or rule, the pixel may belong to the spot area. In step 164, the determined spot area is processed and validated to obtain a final spot area, and further image processing techniques, such as filtering, morphological processing, etc., may be performed to extract, enhance, and validate the accuracy and connectivity of the spot area. Through the steps, the light spot area can be determined according to the brightness difference and a preset dynamic factor, and the final light spot area is obtained through processing and confirmation. This is very useful for spot detection, tracking and analysis applications.
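The spot detection of steps 161 to 164 can be sketched as a brightness threshold followed by a confirmation pass. The confirmation rule used here (keep a flagged pixel only if at least one flagged neighbour supports it) is a stand-in for the filtering/morphological processing the patent leaves unspecified:

```python
import numpy as np

def spot_mask(frame, dynamic_factor):
    """Flag pixels whose brightness exceeds the preset dynamic factor,
    then confirm the flagged pixels against their neighbourhood."""
    diff = np.asarray(frame, dtype=float) - dynamic_factor  # step 162
    candidate = diff > 0                                    # step 163
    confirmed = np.zeros_like(candidate)
    h, w = candidate.shape
    for x in range(h):                                      # step 164
        for y in range(w):
            if candidate[x, y]:
                nb = candidate[max(0, x - 1):x + 2, max(0, y - 1):y + 2]
                confirmed[x, y] = nb.sum() >= 2  # the pixel plus a neighbour
    return confirmed
```

The confirmation pass suppresses isolated bright pixels (typically noise) while keeping connected bright regions, which are the light spot areas to be repaired.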
As shown in fig. 2, an embodiment of the present invention further provides a data processing apparatus 20, including:
an acquisition module 21, configured to acquire gastrointestinal images and sort the gastrointestinal images according to priorities to obtain sorted data; processing the sequencing data to obtain feature points on the gastrointestinal mirror image; calculating the feature vector distance between the feature points on the two gastrointestinal mirror images according to the feature points, and marking the two gastrointestinal mirror images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
a processing module 22, configured to process the gastroscope images between adjacent key frames to obtain supplemental key frames; inserting the supplemental keyframes between the marked two keyframes to obtain a keyframe domain; identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing a gastrointestinal mirror image corresponding to the light spot area to obtain a repaired image; and carrying out convolution processing on the repair image to obtain an equilibrium image.
Optionally, obtaining the gastrointestinal endoscopy images and sorting them by priority to obtain the sorted data includes:
calculating a time weight W_t through a preset time-weight formula, wherein W_t is the time weight; t is the difference between the acquisition time of the gastrointestinal endoscopy image and the current time; T is the expected processing time of the image; S is the size of the image; C is the type of the image; α, β and γ are all attenuation parameters; and e is the base of the natural logarithm;
calculating an urgency weight W_u through a preset urgency formula, wherein f(·) is an urgency function based on the gastrointestinal image features; δ is an additional urgency factor; λ is a weight parameter of the urgency degree; and W_u is the urgency weight;
calculating a priority P of the gastrointestinal endoscopy image from the time weight and the urgency weight, wherein w₁ and w₂ are both weight parameters and P is the priority of the image;
calculating a difference ΔA = P − A_lim against a preset limit value A_lim, and sorting the gastrointestinal endoscopy images by the difference ΔA to obtain the sorted data.
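The published weight formulas are rendered as images and are not recoverable here, so the sketch below only illustrates the described structure: an exponentially decaying time weight over t, T and S; an urgency weight built from an urgency function f(·) plus an additional factor; a weighted combination into a priority; and sorting by the difference against a limit value. Every concrete functional form and parameter value is an assumption.

```python
import math

def time_weight(t, expected_time, size, alpha=0.1, beta=0.05, gamma=0.01):
    # Assumed exponential-decay form using the attenuation parameters.
    return math.exp(-(alpha * t + beta * expected_time + gamma * size))

def urgency_weight(features, extra=0.0, lam=1.0):
    # f(.) is stood in for by a mean of urgency-related feature scores.
    return lam * (sum(features) / len(features)) + extra

def priority(w_time, w_urgency, w1=0.5, w2=0.5):
    # Weighted combination of the two weights.
    return w1 * w_time + w2 * w_urgency

def sort_by_priority(images, limit=0.5):
    # Delta A = priority - limit value; sort descending by the difference.
    return sorted(images, key=lambda im: priority(
        time_weight(im["t"], im["T"], im["S"]),
        urgency_weight(im["features"])) - limit, reverse=True)
```

The dictionary keys `t`, `T`, `S` and `features` are hypothetical names chosen for this sketch.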
Optionally, processing the sorted data to obtain feature points on the gastrointestinal endoscopy image includes:
preprocessing the gastrointestinal endoscopy image to obtain a smoothed image;
applying Gaussian filtering to the smoothed image a plurality of times to construct a scale space;
locating each potential key point in the scale space to obtain located key points;
assigning one or more orientations to each located key point;
and generating a feature descriptor for each key point.
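The five steps mirror a SIFT-style feature pipeline. Of these, the scale-space construction is concrete enough to sketch: the smoothed image is Gaussian-filtered repeatedly at increasing scales (the σ schedule and number of scales below are assumptions).

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian filtering (rows, then columns)."""
    k = gaussian_kernel1d(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def build_scale_space(img, n_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Repeatedly Gaussian-filter the smoothed image to build a scale space.
    Note: the image should be larger than the widest kernel used."""
    return [blur(img.astype(np.float64), sigma0 * k ** i) for i in range(n_scales)]
```

Key-point localization, orientation assignment, and descriptor generation would then operate on this stack, as in standard SIFT implementations.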
Optionally, calculating the feature vector distance between the feature points on two gastrointestinal endoscopy images according to the feature points includes:
calculating the distance D_n between the two feature points through a weighted distance formula, wherein x_n and y_n are the feature vectors on the two gastrointestinal images respectively; W_n is a weight matrix; S_n is a covariance matrix used for capturing statistical information in the feature vectors; K_n is a spatial covariance matrix used for capturing the spatial relationship between the feature points; the superscript N denotes a matrix transpose; n denotes a time step; and D_n is the distance between the feature vectors x_n and y_n.
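The exact way the published formula combines the weight matrix W_n, the covariance matrix S_n, and the spatial covariance matrix K_n is not recoverable from the text, so the sketch below assumes one plausible weighted Mahalanobis-style combination, M = W·(S + K)⁻¹; with W = I and K = 0 it reduces to the classic Mahalanobis distance.

```python
import numpy as np

def feature_distance(x, y, W, S, K):
    """Weighted Mahalanobis-style distance between feature vectors x and y.
    W: weight matrix; S: covariance matrix (statistical information);
    K: spatial covariance matrix (spatial relationships). The combination
    M = W @ inv(S + K) is an assumption for illustration."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    M = W @ np.linalg.inv(S + K)
    return float(np.sqrt(d @ M @ d))
```

In practice S + K must be invertible and the resulting quadratic form positive semi-definite for the square root to be real.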
Optionally, processing the gastrointestinal endoscopy images between adjacent key frames to obtain supplemental key frames includes:
respectively acquiring a first key frame Z1 and a second key frame Z2, and their corresponding first pixel Z1(x, y) and second pixel Z2(x, y);
calculating the pixel values in the supplemental key frame through Z(x, y) = (1 − α)·Z1(x, y) + α·Z2(x, y), wherein α is an interpolation factor with value range 0 ≤ α ≤ 1, and Z(x, y) is the pixel value at coordinates (x, y) in the supplemental key frame;
and obtaining the supplemental key frame from the calculated pixel values.
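The interpolation step reads as standard linear blending of corresponding pixels in the two key frames, which can be sketched as:

```python
import numpy as np

def supplemental_frame(z1, z2, alpha):
    """Linear interpolation between two key frames:
    Z(x, y) = (1 - alpha) * Z1(x, y) + alpha * Z2(x, y), 0 <= alpha <= 1."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("interpolation factor must lie in [0, 1]")
    return (1.0 - alpha) * z1.astype(np.float64) + alpha * z2.astype(np.float64)
```

With α = 0 the supplemental frame equals the first key frame, and with α = 1 it equals the second; intermediate values blend the two.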
Optionally, inserting the supplemental key frame between the two marked key frames to obtain a key frame field, including:
determining the position at which the supplemental key frame is to be inserted;
calculating a key frame time interval between the marked two key frames;
calculating a time stamp of the supplementary key frame according to the key frame time interval;
a supplemental key frame is determined based on the timestamp of the supplemental key frame and inserted between the two marked key frames.
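A minimal sketch of the insertion step, assuming the supplemental frame's timestamp is taken at the midpoint of the key-frame interval (the patent only states that it is computed from that interval):

```python
def insert_supplemental(frames, supplemental_image):
    """Insert a supplemental key frame between the last two marked key
    frames, timestamping it at the midpoint of their time interval.
    `frames` is a list of (timestamp, image) pairs in temporal order."""
    (t1, _img1), (t2, _img2) = frames[-2], frames[-1]
    interval = t2 - t1                      # key-frame time interval
    ts = t1 + interval / 2.0                # assumed: midpoint timestamp
    frames.insert(len(frames) - 1, (ts, supplemental_image))
    return frames
```

The resulting list of frames, including the inserted supplemental frame, corresponds to the "key frame domain" described above.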
Optionally, identifying and determining the spot area in the key frame domain by a preset dynamic factor includes:
traversing each pixel in the key frame domain in turn;
calculating the difference between the brightness value of each pixel and a preset dynamic factor;
determining a light spot area according to the difference value;
and processing and confirming the light spot area to obtain a final light spot area.
It should be noted that the apparatus is an apparatus corresponding to the above method, and all implementation manners in the above method embodiment are applicable to this embodiment, so that the same technical effects can be achieved.
Embodiments of the present invention also provide a computer including: a processor, a memory storing a computer program which, when executed by the processor, performs the method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
Furthermore, it should be noted that in the apparatus and method of the present invention, it is apparent that the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present invention. Also, the steps of performing the series of processes described above may naturally be performed in chronological order in the order of description, but are not necessarily performed in chronological order, and some steps may be performed in parallel or independently of each other. It will be appreciated by those of ordinary skill in the art that all or any of the steps or components of the methods and apparatus of the present invention may be implemented in hardware, firmware, software, or a combination thereof in any computing device (including processors, storage media, etc.) or network of computing devices, as would be apparent to one of ordinary skill in the art after reading this description of the invention.
The object of the invention can thus also be achieved by running a program or a set of programs on any computing device, which may be a well-known general-purpose device. The object of the invention can likewise be achieved merely by providing a program product containing program code for implementing the method or apparatus; that is, such a program product also constitutes the present invention, as does a storage medium storing it. The storage medium may be any known storage medium or any storage medium developed in the future.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.
Claims (7)
1. A method of data processing, the method comprising:
obtaining gastrointestinal endoscopy images and sorting them by priority to obtain sorted data, including: calculating a time weight W_t through a preset time-weight formula, wherein W_t is the time weight; t is the difference between the acquisition time of the gastrointestinal endoscopy image and the current time; T is the expected processing time of the image; S is the size of the image; C is the type of the image; α, β and γ are all attenuation parameters; and e is the base of the natural logarithm; calculating an urgency weight W_u through a preset urgency formula, wherein f(·) is an urgency function based on the gastrointestinal image features; δ is an additional urgency factor; λ is a weight parameter of the urgency degree; and W_u is the urgency weight; calculating a priority P of the gastrointestinal endoscopy image from the time weight and the urgency weight, wherein w₁ and w₂ are both weight parameters and P is the priority of the image; and calculating a difference ΔA = P − A_lim against a preset limit value A_lim, and sorting the gastrointestinal endoscopy images by the difference ΔA to obtain the sorted data;
processing the sorted data to obtain feature points on the gastrointestinal endoscopy images, including: preprocessing the gastrointestinal endoscopy image to obtain a smoothed image; applying Gaussian filtering to the smoothed image a plurality of times to construct a scale space; locating each potential key point in the scale space to obtain located key points; assigning one or more orientations to each located key point; and generating a feature descriptor for each key point;
calculating a feature vector distance between the feature points on two gastrointestinal endoscopy images according to the feature points, and marking the two images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
processing the gastrointestinal endoscopy images between adjacent key frames to obtain supplemental key frames, including: respectively acquiring a first key frame Z1 and a second key frame Z2, and their corresponding first pixel Z1(x, y) and second pixel Z2(x, y); calculating the pixel values in the supplemental key frame through Z(x, y) = (1 − α)·Z1(x, y) + α·Z2(x, y), wherein α is an interpolation factor with value range 0 ≤ α ≤ 1, and Z(x, y) is the pixel value at coordinates (x, y) in the supplemental key frame; and obtaining the supplemental key frame from the calculated pixel values;
inserting the supplemental keyframes between the marked two keyframes to obtain a keyframe domain;
identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing the gastrointestinal endoscopy image corresponding to the light spot area to obtain a repaired image;
and carrying out convolution processing on the repaired image to obtain an equalized image.
2. The data processing method according to claim 1, wherein calculating a feature vector distance between feature points on two gastrointestinal images from the feature points comprises:
calculating the distance D_n between the two feature points through a weighted distance formula, wherein x_n and y_n are the feature vectors on the two gastrointestinal images respectively; W_n is a weight matrix; S_n is a covariance matrix used for capturing statistical information in the feature vectors; K_n is a spatial covariance matrix used for capturing the spatial relationship between the feature points; the superscript N denotes a matrix transpose; n denotes a time step; and D_n is the distance between the feature vectors x_n and y_n.
3. The data processing method of claim 2, wherein inserting the supplemental key frame between the two key frames that have been marked to obtain a key frame field comprises:
determining the position at which the supplemental key frame is to be inserted;
calculating a key frame time interval between the marked two key frames;
calculating a time stamp of the supplementary key frame according to the key frame time interval;
a supplemental key frame is determined based on the timestamp of the supplemental key frame and inserted between the two marked key frames.
4. A data processing method according to claim 3, wherein identifying and determining the spot area in the key frame domain by a predetermined dynamic factor comprises:
traversing each pixel in the key frame domain in turn;
calculating the difference between the brightness value of each pixel and a preset dynamic factor;
determining a light spot area according to the difference value;
and processing and confirming the light spot area to obtain a final light spot area.
5. A data processing apparatus, comprising:
the acquisition module is used for acquiring gastrointestinal endoscopy images and sorting them by priority to obtain sorted data, including: calculating a time weight W_t through a preset time-weight formula, wherein W_t is the time weight; t is the difference between the acquisition time of the gastrointestinal endoscopy image and the current time; T is the expected processing time of the image; S is the size of the image; C is the type of the image; α, β and γ are all attenuation parameters; and e is the base of the natural logarithm; calculating an urgency weight W_u through a preset urgency formula, wherein f(·) is an urgency function based on the gastrointestinal image features; δ is an additional urgency factor; λ is a weight parameter of the urgency degree; and W_u is the urgency weight; calculating a priority P of the gastrointestinal endoscopy image from the time weight and the urgency weight, wherein w₁ and w₂ are both weight parameters and P is the priority of the image; and calculating a difference ΔA = P − A_lim against a preset limit value A_lim, and sorting the gastrointestinal endoscopy images by the difference ΔA to obtain the sorted data; processing the sorted data to obtain feature points on the gastrointestinal endoscopy images, including: preprocessing the gastrointestinal endoscopy image to obtain a smoothed image; applying Gaussian filtering to the smoothed image a plurality of times to construct a scale space; locating each potential key point in the scale space to obtain located key points; assigning one or more orientations to each located key point; and generating a feature descriptor for each key point; calculating a feature vector distance between the feature points on two gastrointestinal endoscopy images according to the feature points, and marking the two images as key frames when the feature vector distance is greater than or equal to a preset fixed threshold value;
the processing module is used for processing the gastrointestinal endoscopy images between adjacent key frames to obtain supplemental key frames, including: respectively acquiring a first key frame Z1 and a second key frame Z2, and their corresponding first pixel Z1(x, y) and second pixel Z2(x, y); calculating the pixel values in the supplemental key frame through Z(x, y) = (1 − α)·Z1(x, y) + α·Z2(x, y), wherein α is an interpolation factor with value range 0 ≤ α ≤ 1, and Z(x, y) is the pixel value at coordinates (x, y) in the supplemental key frame; obtaining the supplemental key frame from the calculated pixel values; inserting the supplemental key frame between the two marked key frames to obtain a key frame domain; and identifying and determining a light spot area in the key frame domain through a preset dynamic factor, and repairing the gastrointestinal endoscopy image corresponding to the light spot area to obtain a repaired image;
and carrying out convolution processing on the repaired image to obtain an equalized image.
6. A computer, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1-4.
7. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program which, when executed by a processor, implements the method according to any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311000492.XA CN116758058B (en) | 2023-08-10 | 2023-08-10 | Data processing method, device, computer and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311000492.XA CN116758058B (en) | 2023-08-10 | 2023-08-10 | Data processing method, device, computer and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116758058A CN116758058A (en) | 2023-09-15 |
CN116758058B true CN116758058B (en) | 2023-11-03 |
Family
ID=87959300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311000492.XA Active CN116758058B (en) | 2023-08-10 | 2023-08-10 | Data processing method, device, computer and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116758058B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258325A (en) * | 2013-04-15 | 2013-08-21 | 哈尔滨工业大学 | Image feature detection method based on ellipse salient region covariance matrix |
CN103294813A (en) * | 2013-06-07 | 2013-09-11 | 北京捷成世纪科技股份有限公司 | Sensitive image search method and device |
CN107341841A (en) * | 2017-07-26 | 2017-11-10 | 厦门美图之家科技有限公司 | The generation method and computing device of a kind of gradual-change animation |
CN108765337A (en) * | 2018-05-28 | 2018-11-06 | 青岛大学 | A kind of single width color image defogging processing method based on dark primary priori Yu non local MTV models |
CN109035334A (en) * | 2018-06-27 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Determination method and apparatus, storage medium and the electronic device of pose |
CN109556596A (en) * | 2018-10-19 | 2019-04-02 | 北京极智嘉科技有限公司 | Air navigation aid, device, equipment and storage medium based on ground texture image |
CN110084802A (en) * | 2019-04-29 | 2019-08-02 | 江苏理工学院 | A kind of high-accuracy PCB chip pin center positioning method |
CN110992392A (en) * | 2019-11-20 | 2020-04-10 | 北京影谱科技股份有限公司 | Key frame selection method and device based on motion state |
CN111629262A (en) * | 2020-05-08 | 2020-09-04 | Oppo广东移动通信有限公司 | Video image processing method and device, electronic equipment and storage medium |
CN114418897A (en) * | 2022-03-10 | 2022-04-29 | 深圳市一心视觉科技有限公司 | Eye spot image restoration method and device, terminal equipment and storage medium |
CN114581375A (en) * | 2022-01-27 | 2022-06-03 | 大连东软教育科技集团有限公司 | Method, device and storage medium for automatically detecting focus of wireless capsule endoscope |
CN114693598A (en) * | 2022-02-21 | 2022-07-01 | 浙江爱达科技有限公司 | Capsule endoscope gastrointestinal tract organ image automatic identification method |
CN114820334A (en) * | 2021-01-29 | 2022-07-29 | 深圳市万普拉斯科技有限公司 | Image restoration method and device, terminal equipment and readable storage medium |
CN116091321A (en) * | 2023-04-11 | 2023-05-09 | 苏州浪潮智能科技有限公司 | Image scaling method, device, equipment and storage medium |
CN116206741A (en) * | 2023-05-05 | 2023-06-02 | 泰安市中心医院(青岛大学附属泰安市中心医院、泰山医养中心) | Gastroenterology medical information processing system and method |
CN116391203A (en) * | 2020-10-28 | 2023-07-04 | 莱卡微系统Cms有限责任公司 | Method for improving signal-to-noise ratio of image frame sequence and image processing device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030026529A (en) * | 2001-09-26 | 2003-04-03 | 엘지전자 주식회사 | Keyframe Based Video Summary System |
US9613288B2 (en) * | 2014-11-14 | 2017-04-04 | Adobe Systems Incorporated | Automatically identifying and healing spots in images |
US20180005015A1 (en) * | 2016-07-01 | 2018-01-04 | Vangogh Imaging, Inc. | Sparse simultaneous localization and matching with unified tracking |
US10977767B2 (en) * | 2018-11-28 | 2021-04-13 | Adobe Inc. | Propagation of spot healing edits from one image to multiple images |
- 2023-08-10 CN CN202311000492.XA patent/CN116758058B/en active Active
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258325A (en) * | 2013-04-15 | 2013-08-21 | 哈尔滨工业大学 | Image feature detection method based on ellipse salient region covariance matrix |
CN103294813A (en) * | 2013-06-07 | 2013-09-11 | 北京捷成世纪科技股份有限公司 | Sensitive image search method and device |
CN107341841A (en) * | 2017-07-26 | 2017-11-10 | 厦门美图之家科技有限公司 | The generation method and computing device of a kind of gradual-change animation |
CN108765337A (en) * | 2018-05-28 | 2018-11-06 | 青岛大学 | A kind of single width color image defogging processing method based on dark primary priori Yu non local MTV models |
CN109035334A (en) * | 2018-06-27 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Determination method and apparatus, storage medium and the electronic device of pose |
WO2020078064A1 (en) * | 2018-10-19 | 2020-04-23 | 北京极智嘉科技有限公司 | Ground texture image-based navigation method and device, apparatus, and storage medium |
CN109556596A (en) * | 2018-10-19 | 2019-04-02 | 北京极智嘉科技有限公司 | Air navigation aid, device, equipment and storage medium based on ground texture image |
CN110084802A (en) * | 2019-04-29 | 2019-08-02 | 江苏理工学院 | A kind of high-accuracy PCB chip pin center positioning method |
CN110992392A (en) * | 2019-11-20 | 2020-04-10 | 北京影谱科技股份有限公司 | Key frame selection method and device based on motion state |
CN111629262A (en) * | 2020-05-08 | 2020-09-04 | Oppo广东移动通信有限公司 | Video image processing method and device, electronic equipment and storage medium |
CN116391203A (en) * | 2020-10-28 | 2023-07-04 | 莱卡微系统Cms有限责任公司 | Method for improving signal-to-noise ratio of image frame sequence and image processing device |
CN114820334A (en) * | 2021-01-29 | 2022-07-29 | 深圳市万普拉斯科技有限公司 | Image restoration method and device, terminal equipment and readable storage medium |
CN114581375A (en) * | 2022-01-27 | 2022-06-03 | 大连东软教育科技集团有限公司 | Method, device and storage medium for automatically detecting focus of wireless capsule endoscope |
CN114693598A (en) * | 2022-02-21 | 2022-07-01 | 浙江爱达科技有限公司 | Capsule endoscope gastrointestinal tract organ image automatic identification method |
CN114418897A (en) * | 2022-03-10 | 2022-04-29 | 深圳市一心视觉科技有限公司 | Eye spot image restoration method and device, terminal equipment and storage medium |
CN116091321A (en) * | 2023-04-11 | 2023-05-09 | 苏州浪潮智能科技有限公司 | Image scaling method, device, equipment and storage medium |
CN116206741A (en) * | 2023-05-05 | 2023-06-02 | 泰安市中心医院(青岛大学附属泰安市中心医院、泰山医养中心) | Gastroenterology medical information processing system and method |
Non-Patent Citations (1)
Title |
---|
Gesture feature extraction algorithm based on key frames and local extrema; Liu Yangjunwu; Cheng Chunling; Computer Technology and Development (Issue 03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116758058A (en) | 2023-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7458328B2 (en) | Multi-sample whole-slide image processing via multi-resolution registration | |
US7715596B2 (en) | Method for controlling photographs of people | |
RU2659745C1 (en) | Reconstruction of the document from document image series | |
CN112686812B (en) | Bank card inclination correction detection method and device, readable storage medium and terminal | |
JP2010045613A (en) | Image identifying method and imaging device | |
CN107767358B (en) | Method and device for determining ambiguity of object in image | |
CN111192241B (en) | Quality evaluation method and device for face image and computer storage medium | |
CN111784675A (en) | Method and device for processing article texture information, storage medium and electronic equipment | |
CN105678778A (en) | Image matching method and device | |
KR20180092455A (en) | Card number recognition method using deep learnig | |
CN112633221A (en) | Face direction detection method and related device | |
CN114049499A (en) | Target object detection method, apparatus and storage medium for continuous contour | |
CN112528939A (en) | Quality evaluation method and device for face image | |
CN111199197B (en) | Image extraction method and processing equipment for face recognition | |
CN115272647A (en) | Lung image recognition processing method and system | |
JP3814353B2 (en) | Image segmentation method and image segmentation apparatus | |
CN110910497B (en) | Method and system for realizing augmented reality map | |
CN116758058B (en) | Data processing method, device, computer and storage medium | |
Thilagavathy et al. | Fuzzy based edge enhanced text detection algorithm using MSER | |
CN111753722B (en) | Fingerprint identification method and device based on feature point type | |
CN111382703B (en) | Finger vein recognition method based on secondary screening and score fusion | |
CN116109543A (en) | Method and device for quickly identifying and reading data and computer readable storage medium | |
CN110796645A (en) | Certificate photo quality evaluation method, storage medium and processor | |
CN113516020B (en) | Age change related identity verification method and system | |
Barbosa et al. | Automatic analogue gauge reading using smartphones for industrial scenarios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||