CN109886243A - Image processing method, apparatus, storage medium, device, and system - Google Patents
- Publication number
- CN109886243A (application number CN201910156660.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- frame image
- lesion
- current frame
- center point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/70 (Image enhancement or restoration): denoising; smoothing
- G06T7/0012 (Image analysis; inspection of images): biomedical image inspection
- G06T7/11 (Segmentation; edge detection): region-based segmentation
- G06T7/136 (Segmentation; edge detection): involving thresholding
- G06T7/194 (Segmentation; edge detection): involving foreground-background segmentation
- G06V20/41 (Scenes; scene-specific elements in video content): higher-level, semantic clustering, classification or understanding of video scenes
Abstract
This application discloses an image processing method, apparatus, storage medium, device, and system, belonging to the field of machine learning. The method includes: acquiring a video image stream of a body region to be examined; performing lesion detection on each frame of the video image stream in turn; and, for the current frame, classifying the current frame according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame, where the preceding frames are at least one frame located before the current frame in time. During image processing, the application takes the prediction results of preceding frames into account in the prediction for the current frame. This not only preserves the efficiency and absence of accumulated error of single-frame detection, but also, by fusing relevant information from other frames, significantly improves the accuracy of image classification and ensures the continuity of the prediction results.
Description
Technical field
This application relates to the field of machine learning, and in particular to an image processing method, apparatus, storage medium, device, and system.
Background
As the core of artificial intelligence, machine learning is now applied across virtually every field, medicine among them. In the medical field, processing medical images with machine learning makes it possible to identify whether a patient suffers from a given disease. Taking colorectal cancer as an example, colonoscopy is now widely used for colorectal cancer screening: after medical images of the patient's colon and rectum are acquired, computer-aided detection processes those images to detect whether polyps are present on the intestinal wall, and the presence or absence of polyps in turn assists the physician in identifying whether the patient has colorectal cancer.
Continuing with colorectal cancer, when the related art performs polyp detection by image processing, the input to the polyp detection model is a single frame even though what is acquired is a video image stream; that is, on receiving a single frame, the polyp detection model first extracts features from it and then judges, from the extracted features, whether a polyp is present in that frame.
This image processing approach has at least the following problems in the prior art:
First, it places high demands on the accuracy of the polyp detection model, and given the complexity of real scenes, that accuracy hits a bottleneck. During colonoscopy, for instance, occlusion, over- or under-exposure, motion blur, and defocus blur can all occur; moreover, the size, shape, and color of polyps vary with the patient, the distance between the camera and the polyp, the camera itself, and the terminal model. All of these factors can affect the detection accuracy of the model.
Second, the prediction results lack temporal continuity. A video image stream is often noisy during acquisition, and even when the camera does not move, intestinal motion generally introduces subtle differences between adjacent frames. These differences sometimes cause adjacent frames to produce completely different predictions; that is, for visually identical regions, the polyp detection model gives different, discontinuous, inconsistent predictions.
Summary of the invention
The embodiments of the present application provide an image processing method, apparatus, storage medium, device, and system, which solve the problem of insufficient detection accuracy in the related art. The technical solution is as follows:
In one aspect, an image processing method is provided, the method comprising:
acquiring a video image stream of a body region to be examined;
performing lesion detection on each frame of the video image stream in turn;
for a current frame, classifying the current frame according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame;
wherein the preceding frames are at least one frame located before the current frame in time.
In another aspect, an image processing apparatus is provided, the apparatus comprising:
an acquisition module for acquiring a video image stream of a body region to be examined;
a detection module for performing lesion detection on each frame of the video image stream in turn;
a processing module for classifying, for a current frame, the current frame according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame;
wherein the preceding frames are at least one frame located before the current frame in time.
In one possible implementation, the detection module is further configured to: input the current frame into a detection model and obtain a first segmentation image output by the detection model, each pixel of which expresses the probability that the pixel at the corresponding position in the current frame is a lesion; adjust the first segmentation image, and post-process the adjusted first segmentation image; compute the connected components of at least one foreground region in the post-processed first segmentation image; sort the at least one connected component by size, and sort the at least one connected component by similarity to a target shape; and, when the largest connected component coincides with the connected component closest to the target shape, determine the foreground region indicated by the largest connected component as the lesion center point of the current frame.
In one possible implementation, the detection module is further configured to: obtain the adjusted second segmentation image matching the previous frame; take the average of the first segmentation image and the adjusted second segmentation image to obtain the adjusted first segmentation image; binarize the adjusted first segmentation image with a specified value as threshold; and remove the noise points in the binarized first segmentation image and smooth its foreground edges.
In one possible implementation, the processing module is further configured to: when a predicted position coordinate falls outside the image range of the current frame, stop tracking the predicted lesion center point corresponding to that coordinate in the next frame; or, when the classifier judges the predicted lesion center point corresponding to that coordinate to be background, stop tracking it in the next frame.
In one possible implementation, the processing module is further configured to: stop tracking a lesion center point when the number of frames over which it has been tracked exceeds a first quantity; or stop tracking a lesion center point when tracking it has failed in a second quantity of consecutive frames.
In one possible implementation, the processing module is further configured to: when the second prediction result and the third prediction result yield at least two lesion center points, connect adjacent lesion center points whose Euclidean distance is below a target threshold; compute the connected components of the at least two lesion center points; and determine the lesion center point corresponding to the largest connected component as the final lesion center point of the current frame.
In another aspect, a storage medium is provided, the storage medium storing at least one instruction that is loaded and executed by a processor to implement the image processing method described above.
In another aspect, an image processing device is provided, the device comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the image processing method described above.
In another aspect, an image processing system is provided, the system comprising an image acquisition device, an image processing device, and a display device;
the image acquisition device captures images of the body region to be examined and obtains a video image stream of that region;
the image processing device comprises a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to: acquire the video image stream of the body region to be examined; perform lesion detection on each frame of the video image stream in turn; and, for a current frame, classify the current frame according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame, the preceding frames being at least one frame located before the current frame in time;
the display device displays the result output by the image processing device.
The technical solution provided by the embodiments of the present application brings the following benefit:
when processing images, the embodiments take the prediction results of preceding frames into account in the prediction for the current frame; that is, they combine the predictions of preceding frames with the image information of the current frame to complete the final prediction for a single frame. This not only preserves the efficiency and absence of accumulated error of single-frame detection, but also, by fusing relevant information from other frames, significantly improves the accuracy of image classification and ensures the continuity of the prediction results.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment involved in an image processing method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the lesion detection flow of an image processing method provided by an embodiment of the present application;
Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a U-net network provided by an embodiment of the present application;
Fig. 5 is a method flowchart for single-frame polyp detection provided by an embodiment of the present application;
Fig. 6 is a schematic flow diagram of online training of a CNN classifier provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image processing device 800 provided by an embodiment of the present application.
Detailed description
To make the purpose, technical solution, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the drawings.
Before the embodiments are explained in detail, some of the terms involved in the present application are first defined.
CNN: Convolutional Neural Network. In short, a CNN is a computational network composed of multiple convolution operations and is chiefly used in deep learning, where deep learning is a technique for machine learning with deep neural network systems.
CAD: Computer Aided Diagnosis. CAD assists in discovering lesions and improves diagnostic accuracy through imaging, medical image processing, and other possible physiological and biochemical means, combined with computer analysis and calculation.
Video image stream: the video stream formed by an image acquisition device capturing images of a body region (a target organ of the human body). For example, with the colon and rectum as the target organ, the video image stream is the video, comprising multiple frames of bowel images, formed by a medical instrument imaging the colon and rectum.
Colonoscope: an endoscope used medically to examine the bowel.
Polyp: a growth on the surface of human tissue. Modern medicine generally calls growths on human mucosal surfaces polyps, including hyperplastic, inflammatory, hamartomatous, adenomatous, and other tumorous types. Note that polyps are regarded as a kind of carcinoid.
Lesion: a lesion generally refers to the part of the body where pathological change occurs. Put another way, localized pathological tissue containing pathogenic microorganisms can be called a lesion. For example, if a lobe of the lung is destroyed by tubercle bacilli, that part is a pulmonary tuberculosis lesion.
In the embodiments of the present application, a lesion refers to a polyp; in one possible implementation, the lesion here refers specifically to an intestinal polyp.
Image category: the category of the content contained in an image, determined by image classification. In the embodiments of the present application, classifying medical images makes it clear whether a polyp is present on the patient's target organ. For example, the image processing method provided by the embodiments can identify whether intestinal polyps are present in the patient's bowel.
Optical flow: pixels move between two adjacent frames; that is, a pixel in the previous frame occupies a slightly different position in the next frame. That change, which is a motion vector, is the optical flow of the pixel.
As is well known, colorectal cancer is currently one of the leading causes of cancer death worldwide, and the standard way to reduce colorectal cancer mortality is to find polyps through colorectal screening. Colonoscopy, now common practice, is widely used in colorectal cancer screening. During colonoscopy, the clinician images the intestinal wall through the image acquisition device of the medical instrument, so that polyp detection can be performed with the assistance of the acquired medical images. Once the clinician misses a polyp, however, the patient loses the chance of early detection and treatment and faces a serious hidden health risk. Therefore, to reduce the risk of misdiagnosis and lighten the clinician's burden, the embodiments of the present application use computer-aided diagnosis to detect polyps automatically, by means of the image processing method, during the patient's colonoscopy.
The implementation environment involved in the image processing method provided by the embodiments of the present application is introduced first.
Fig. 1 is a schematic diagram of the implementation environment involved in an image processing method provided by an embodiment of the present application. Referring to Fig. 1, the implementation environment includes an image processing device 101, a display device 102, and an image acquisition device 103, which together constitute an image processing system. The display device 102 may be a monitor, and the image processing device 101 includes, but is not limited to, fixed terminals and mobile terminals; the embodiments place no specific limit on this.
The image acquisition device 103 captures images of the body region to be examined and obtains a video image stream of that region. The image processing device 101 includes a processor and a memory storing at least one instruction that is loaded and executed by the processor to: acquire the video image stream; perform lesion detection on each frame of the video image stream in turn; and, for the current frame, classify the current frame according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame, the preceding frames being at least one frame located before the current frame in time. The display device 102 displays the result output by the image processing device.
Taking the lesion as a polyp and polyp detection on the colon and rectum as an example, in the embodiments of the present application the clinician observes the patient's colon and rectum through an enteroscope. The medical instrument performing the colonoscopy, i.e. the image acquisition device 103, which is a camera, can reach deep into the bowel to capture images of the intestinal wall, and passes the acquired video image stream to the image processing device 101.
The image processing device 101 is responsible for judging, by the image processing method provided by the embodiments, whether intestinal polyps are present in the currently acquired video image stream. If they are, the image processing device 101 drives the display device 102 to show the result and alert the clinician.
Alert modes include, but are not limited to: voice prompts, special warnings on the display device 102 or an indicator light, and highlighting the detected polyp region in the video shown on the display device 102; the embodiments place no specific limit on this.
Based on the description of the implementation environment above, at the architectural level the image processing method provided by the embodiments, on the one hand, completes polyp prediction for single frames with an end-to-end deep learning network; on the other hand, it adds tracking to polyp detection, combining the predictions of preceding frames with the image information of the current frame to complete the final polyp prediction for the current frame.
Continuing with the lesion as a polyp and polyp detection on the colon and rectum as the example, and referring to Fig. 2, the detailed implementation steps of the image processing method provided by the embodiments include, but are not limited to:
1) The colonoscope acquires a video image stream of the human colon and rectum.
2) For each frame of the video image stream, an end-to-end deep learning network detects whether a polyp is present in the current frame.
That is, the embodiments first detect and segment the polyps in the image with a deep learning network. For any frame, as soon as a polyp is detected in it, the center point coordinates of that polyp are computed; in other words, if a polyp is present in the current frame, the deep learning network also gives the spatial position of the polyp. This deep learning network is also referred to herein as the detection model.
3) For a polyp detected by the single-frame method of step 2), track the position where it appears in the next frame.
For example, optical flow backtracking can be tried first to track where the polyp appears in the next frame; in complex situations where optical flow backtracking fails, an optical-flow-tracking convolutional neural network takes over and continues tracking the polyp's position in subsequent frames.
In the embodiments of the present application, once a polyp is detected in a frame, it can be tracked continuously in subsequent frames until a stopping rule is met. During tracking, optical flow backtracking handles the easier cases, and the optical-flow-tracking convolutional neural network handles the harder ones.
4) For each frame, combine the polyps obtained by tracking with the polyps predicted for the current frame to obtain the final prediction of whether a polyp is present in the current frame and, if so, where it appears. The overall control flow of these four steps is sketched in code after this list.
Note that if a frame contains no polyp, it is treated as a negative frame. If a frame contains multiple polyp center points (some of them inherited by tracking from preceding frames), the embodiments apply a spatially weighted voting algorithm, keep the polyp center point with the highest confidence as the final polyp center point, and delete the other polyp center points.
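A minimal sketch of that control flow follows, assuming Python. Every name here (process_video_stream, detector.detect, tracker.track, spatial_vote) is a hypothetical placeholder for a component described in this document, not an API from the original implementation:

```python
# Hypothetical control-flow sketch of steps 1)-4); all helper names are
# placeholders for the components described in this document.

def process_video_stream(frames, detector, tracker):
    """Classify each colonoscopy frame as negative (None) or a polyp center."""
    tracked = []                                 # centers inherited from earlier frames
    results = []
    for frame in frames:
        detected = detector.detect(frame)        # step 2): single-frame U-net detection
        tracked = tracker.track(frame, tracked)  # step 3): carry earlier polyps forward
        candidates = detected + tracked          # step 4): merge the two sources
        if candidates:
            results.append(spatial_vote(candidates))  # keep the most confident center
        else:
            results.append(None)                 # negative frame: no polyp found
        tracked = tracked + detected             # new detections start being tracked
    return results
```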
In conclusion the feature relatively uniform based on polyp feature in the same patient, a same to enteroscopy, the application
Embodiment proposes a kind of image point of the means such as combination single frame detection, the property inheritance of before and after frames and tracking moving object
Class mode, this programme not only absorb the efficient of the single frame detection method provided in the related technology and the advantages of without accumulated errors,
And the accuracy of polyp detection is significantly improved by fusion video information and ensures the continuity of prediction result.
Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present application. The method is executed by the image processing device 101 shown in Fig. 1. Taking polyp detection on the colon and rectum as an example, and referring to Fig. 3, the method flow provided by the embodiments includes:
301. Acquire a video image stream of the body region to be examined.
The body region refers to a human organ; in the embodiments of the present application, it refers to the colon and rectum. The video image stream is usually obtained by a medical-instrument camera that reaches into the body region and captures images, and the camera can transfer the images directly to the image processing device as they are acquired.
302. Perform lesion detection on each frame of the video image stream in turn; for the current frame, classify it according to a first lesion detection result of preceding frames and a second lesion detection result of the current frame, the preceding frames being at least one frame located before the current frame in time.
Single-frame lesion detection
Since the body region in the embodiments refers to the colon and rectum, this step performs polyp detection on single frames. The embodiments adopt an end-to-end deep learning approach for single-frame polyp detection.
For example, referring to Fig. 4, the embodiments segment each frame with a fully convolutional neural network named U-net. The U-net network is an end-to-end CNN (Convolutional Neural Network) whose input is an image and whose output is the segmentation result for the objects of interest in that image.
Put another way, both the input and the output of the U-net network are images; the network contains no fully connected layers, and image segmentation is used to delineate the exact outline of an object of interest.
As shown in Fig. 4, the left half of the U-net network performs feature extraction and comprises convolutional layers and pooling layers; after the image is input into the U-net network, the cooperation of the convolutional and pooling layers completes layer-by-layer feature extraction from the input image.
The convolutional layers perform convolution operations with convolution kernels and thereby extract features from the input image. Note that the output of one convolutional layer also serves as the input of the next, and the extracted feature information is generally represented as feature maps. Furthermore, because the features learned by one convolutional layer are often local, and the features become more global as the number of convolutional layers grows, the U-net network generally contains multiple convolutional layers, each with multiple convolution kernels, in order to extract global features of the input image.
The pooling layers are used for dimensionality reduction, cutting the amount of computation and avoiding overfitting; for example, a pooling layer can shrink a large image while retaining the important information in it.
Further, the right half of the U-net network performs deconvolution and comprises deconvolution layers, convolutional layers, and concatenation steps. As shown in Fig. 4, the network is called U-net because its structure is shaped like a U. Note that each deconvolution is followed by one feature fusion with the feature extraction half, i.e. one feature concatenation.
Deconvolution is also known as transposed convolution: the forward propagation of a convolutional layer is the backward propagation of a deconvolution layer, and the backward propagation of a convolutional layer is the forward propagation of a deconvolution layer. Because deconvolution proceeds from small sizes to large ones, the input image and the output segmentation image are of the same size.
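An encoder-decoder of this shape can be sketched compactly in PyTorch. The following is a minimal illustrative sketch, with fewer levels and smaller channel counts than a full U-net; none of the layer sizes are taken from the patent:

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3x3 convolutions with ReLU: the basic U-net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """U-shaped fully convolutional network: image in, per-pixel polyp probability out."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(3, 32), double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)                          # pooling: downsample by 2
        self.bottom = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # deconvolution (transposed conv)
        self.dec2 = double_conv(128, 64)                     # 128 = 64 upsampled + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, 1, 1)                      # one channel: polyp probability

    def forward(self, x):
        e1 = self.enc1(x)                                    # feature extraction (left half)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # deconvolution + feature fusion
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                  # same H x W as the input image

# seg = TinyUNet()(torch.randn(1, 3, 288, 384))  # output shape (1, 1, 288, 384)
```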
In one possible implementation, referring to Fig. 5, and taking the currently processed frame (the current frame) as an example, single-frame polyp detection includes, but is not limited to, the following steps:
302a. Input the current frame into the trained detection model and obtain the first segmentation image of the current frame output by the detection model.
Here the detection model refers to the U-net network mentioned above, and the current frame and the first segmentation image are of the same size. Put another way, the input image yields a segmentation image after passing through the U-net network, and that segmentation image is the same size as the input image.
Note that 'first' above, and 'second' later, serve only to distinguish different segmentation images and impose no other restriction.
In the first segmentation image, each pixel expresses the probability that the pixel at the corresponding position in the current frame is a polyp; put another way, each pixel in the segmentation image gives the probability that the region at the corresponding position in the original image is a polyp. For example, 1 denotes polyp and 0 denotes non-polyp; the embodiments place no specific limit on this.
302b. Adjust the first segmentation image, and post-process the adjusted first segmentation image.
To reduce jitter, the embodiments can take a weighted average of the first segmentation image and the segmentation images of preceding frames, and use the weighted average as the final segmentation image of the current frame. For example, the weight can be 0.5^(d+1), where d is the distance (in frames) from the frame matching a segmentation image to the current frame.
Taking the segmentation image of frame t as $S_t$, the adjusted segmentation image $S_t^*$ is:

$$S_t^* = \frac{\sum_{d=0}^{t-1} 0.5^{\,d+1}\, S_{t-d}}{\sum_{d=0}^{t-1} 0.5^{\,d+1}}$$

Since the denominator approaches 1 when t is large, it can be dropped, and merging the terms of the numerator gives:

$$S_t^* = \frac{S_t + S_{t-1}^*}{2}$$
That is, the embodiments use the segmentation image computed from the current frame together with the adjusted segmentation image of the previous frame to compute the final segmentation image of the current frame, the final segmentation image being the average of the two. Put another way, adjusting the first segmentation image includes, but is not limited to: obtaining the adjusted second segmentation image matching the previous frame, and taking the average of the first segmentation image and the adjusted second segmentation image to obtain the adjusted first segmentation image.
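Under the simplified form derived above, the adjustment reduces to a running average that carries one state array from frame to frame. A minimal sketch, assuming the segmentation maps are numpy arrays of per-pixel probabilities:

```python
import numpy as np

class SegmentationSmoother:
    """Keeps S*_{t-1} and applies S*_t = (S_t + S*_{t-1}) / 2 per frame."""
    def __init__(self):
        self.prev_adjusted = None                    # S*_{t-1}, None before the first frame

    def adjust(self, seg: np.ndarray) -> np.ndarray:
        if self.prev_adjusted is None:
            self.prev_adjusted = seg.astype(np.float64)   # first frame: nothing to average with
        else:
            self.prev_adjusted = (seg + self.prev_adjusted) / 2.0
        return self.prev_adjusted
```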
In one possible implementation, to reduce the risk of false-positive predictions, the embodiments can further post-process the adjusted first segmentation image. Post-processing the adjusted first segmentation image includes, but is not limited to: binarizing the adjusted first segmentation image with a specified value as threshold, then removing the noise points in the binarized first segmentation image and smoothing the foreground edges.
The specified value can be 0.5 or 0.6, for example; the embodiments place no specific limit on this. Taking 0.5 as the example, post-processing first binarizes the adjusted first segmentation image with 0.5 as threshold; an erosion operation can then be applied to the binarized segmentation image to remove small noise points and smooth the foreground edges.
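This binarize-then-erode step maps directly onto OpenCV primitives. A minimal sketch; the 3 x 3 kernel size is an illustrative assumption, while the 0.5 threshold is one of the example values named above:

```python
import cv2
import numpy as np

def postprocess(seg: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a probability map, then erode away small noise points."""
    binary = (seg >= threshold).astype(np.uint8)    # binarization with the given threshold
    kernel = np.ones((3, 3), np.uint8)              # assumed kernel size
    return cv2.erode(binary, kernel, iterations=1)  # removes specks, smooths foreground edges
```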
302c. Compute the connected components of at least one foreground region in the post-processed first segmentation image, and sort the at least one connected component separately by size and by similarity to a target shape.
Because polyps usually appear round or elliptical in the segmentation image, the target shape can be a circle or an ellipse; the embodiments place no specific limit on this.
For example, the computed connected components can be sorted in descending order of connected-component size, and sorted again in descending order of ellipticity.
302d. When the largest connected component coincides with the connected component closest to the target shape, determine the foreground region indicated by the largest connected component as the lesion center point of the current frame.
Because polyps usually appear round or elliptical in the segmentation image and cover a relatively large region, the foreground region indicated by a connected component is determined to be the polyp center point only when that component is at once the largest one and the one most like the target shape. Continuing the example above, given the two sorted lists, if the connected component ranked first in both lists is the same component, the foreground region corresponding to that component is determined as the polyp center point of the current frame.
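Steps 302c and 302d can be sketched with OpenCV's connected-component analysis. Circularity, 4πA/P², is used below as the measure of similarity to a circle; the patent does not fix the measure, so that choice is an assumption:

```python
import cv2
import numpy as np

def find_polyp_center(binary: np.ndarray):
    """Return the centroid of the component that is both largest and most
    circular, or None when the two rankings disagree (step 302d)."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n <= 1:                                    # label 0 is background: no foreground
        return None
    area, roundness = {}, {}
    for lab in range(1, n):
        mask = (labels == lab).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        perim = cv2.arcLength(contours[0], True)
        area[lab] = stats[lab, cv2.CC_STAT_AREA]
        # 4*pi*A / P^2 equals 1 for a perfect circle (assumed shape measure).
        roundness[lab] = 4 * np.pi * area[lab] / max(perim ** 2, 1e-6)
    largest = max(area, key=area.get)
    most_round = max(roundness, key=roundness.get)
    if largest != most_round:                     # rankings disagree: no confident polyp
        return None
    return tuple(centroids[largest])              # (x, y) polyp center point
```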
The above describes the lesion detection process for a single frame, taking the currently processed frame as the example. Continuing with the current frame, in addition to the foregoing, the image processing device can also track lesion center points in the current frame based on the lesion detection results of preceding frames.
Lesion tracking
After a polyp is detected in a frame, the embodiments can track that polyp in subsequent frames until a stop-tracking rule is met. Note that during tracking the optical flow method is used for the easier cases, and the optical-flow-tracking convolutional neural network handles the harder ones.
The optical flow method captures the apparent motion pattern of image objects between two consecutive frames; it is a 2D vector field in which each vector is a displacement vector indicating the flow of a point from the first frame to the second.
In general, the optical flow method rests on the following two assumptions:
1. the pixel intensity of the same object does not change between consecutive frames;
2. neighboring pixels have similar motion.
In the embodiments, given the polyp center point coordinates (x, y) of frame t, the optical flow method can be used to track the position where it appears in the next frame.
Based on the above, the principle of polyp tracking with the optical flow method is as follows: for each frame of the video image stream, detect the foreground target, i.e. the polyp, that may appear; if a polyp center point appears in a frame, then for any pair of adjacent frames thereafter, find the position in the current frame at which the polyp center point of the previous frame appears, obtaining the position coordinates of the foreground target in the current frame; iterating in this way realizes polyp tracking.
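Frame-to-frame point tracking of this kind is commonly done with OpenCV's pyramidal Lucas-Kanade tracker. A minimal sketch; the forward-backward consistency check is one standard way to flag the tracking failures discussed next, assumed here rather than taken from the patent:

```python
import cv2
import numpy as np

def track_point(prev_gray, curr_gray, point, fb_tol=2.0):
    """Track one (x, y) point from the previous frame to the current one;
    return the new position, or None when optical flow tracking fails."""
    p0 = np.array([[point]], dtype=np.float32)                  # shape (1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    if status[0, 0] == 0:
        return None                                             # LK reported failure
    # Forward-backward check (assumption): track back and compare positions.
    p0_back, status_back, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, p1, None)
    if status_back[0, 0] == 0 or np.linalg.norm(p0_back - p0) > fb_tol:
        return None
    return tuple(p1[0, 0])                                      # new (x, y)
```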
For blurred frames or image artifacts, however, the two assumptions above may fail to hold, and tracking the polyp's appearance with the optical flow method then fails. In the embodiments of the present application, in order to decide whether polyp tracking should continue, a more robust motion regression model is used to assess whether to continue the tracking, and tracking is carried further when the optical flow method stops.
Motion regression model
In the embodiments of the present application, the motion of the polyp center point in the current frame is predicted with a motion regression model from the motion of the polyp center point in preceding frames.
For example, let ΔP_t = P_t - P_{t-1} denote the motion vector of the polyp center point at frame t, where P_t is the position of the polyp center point in frame t and P_{t-1} is its position in frame t-1. The embodiments then use linear fitting over the motion vectors of the polyp center point in preceding frames to predict the motion vector ΔP_t of the polyp center point in the current frame.
For example, the embodiments use the motion vectors of the polyp center point in the previous three frames, [ΔP_{t-3}, ΔP_{t-2}, ΔP_{t-1}], to predict the motion vector ΔP_t of the current frame, and thereby obtain the position P_t of the polyp center point in the current frame through the formula P_t = ΔP_t + P_{t-1}. The position P_t is the prediction of the polyp position in the current frame from preceding frames, i.e. the polyp center point is tracked frame by frame.
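The linear fit can be sketched with numpy. Fitting each coordinate of the motion vectors against the frame index and extrapolating one step ahead is one natural reading of 'linear fitting' here, stated as an assumption:

```python
import numpy as np

def predict_next_position(last_positions):
    """Predict P_t from the last four positions [P_{t-4}, ..., P_{t-1}] by
    linearly extrapolating the motion vectors dP_k = P_k - P_{k-1}."""
    pts = np.asarray(last_positions, dtype=np.float64)  # shape (4, 2)
    deltas = np.diff(pts, axis=0)                       # [dP_{t-3}, dP_{t-2}, dP_{t-1}]
    idx = np.arange(len(deltas))
    # One straight line per coordinate, evaluated one step ahead (assumed form).
    next_delta = np.array([
        np.polyval(np.polyfit(idx, deltas[:, c], 1), len(deltas))
        for c in range(2)
    ])
    return pts[-1] + next_delta                         # P_t = dP_t + P_{t-1}
```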
In one possible implementation, if the position P_t lies within the image range of the current frame, the classifier introduced below is used to further determine whether the predicted position P_t is an actual polyp center point; if it is, tracking continues in subsequent frames, otherwise tracking stops. The classifier can be a CNN classifier; the embodiments place no specific limit on this.
Based on the above, the aforementioned classifying of the current frame according to the first polyp detection result of preceding frames and the second polyp detection result of the current frame includes, but is not limited to:
tracking the polyp in the current frame: predicting, by linear fitting, the motion vector of the polyp center point of the current frame from the motion vectors of the polyp center point in at least one preceding frame; then, based on the predicted motion vector of the polyp center point of the current frame, tracking the predicted position coordinates of the polyp center point in the current frame; and, when the tracked predicted position coordinates lie within the image range of the current frame, judging the predicted position coordinates with the classifier to obtain a third polyp detection result.
The polyp center point of the at least one preceding frame is obtained from the first polyp detection result; note that 'first polyp detection result' is the collective name for the polyp detection results of all images among the preceding frames.
In the embodiments of the present application, because a polyp center point obtained by the single-frame detection above can continue to be tracked in subsequent frames, one frame may contain multiple polyp center points, inherited respectively from single-frame detection results and from tracking. So, for example, the polyp center points of the at least one preceding frame can include both single-frame detection results and tracking inheritance results. As for the tracking inheritance process, the tracking of a polyp center point starts from single-frame detection; put another way, once single-frame detection yields a polyp center point, prediction of whether it appears in subsequent frames begins.
The at least one preceding frame can be some or all of the preceding frames; the embodiments place no specific limit on this. For example, the at least one preceding frame can be the three frames located immediately before the current frame in time.
Then, the current frame is classified based on the second polyp detection result from single-frame detection on the current frame and the third polyp detection result from polyp tracking.
As described above, after the prediction for the current frame based on the polyp predictions of preceding frames is complete, the online-trained classifier is still needed to pass actual judgment on the predicted polyp center point. Before that judgment process is explained, the online training of the classifier is described first.
Online training of the classifier
In real scenes, empirical observation shows that the appearance of a polyp stays consistent from frame to frame. The embodiments therefore propose an online-trained optical-flow-tracking CNN framework to determine whether the polyp center point predicted by the motion regression model is a real polyp, and further to decide whether the motion regression model should stop tracking.
Because the intermediate feature maps extracted during the U-net computation already contain the polyp features that the optical-flow-tracking CNN needs, and because every frame in the embodiments first passes through the U-net network, the intermediate feature maps produced during the U-net computation can be used directly as the input of the optical-flow-tracking CNN, reducing computational complexity and improving efficiency. Referring to Fig. 6, for the tracking of the current frame, the embodiments can use the feature maps extracted after the current frame and the previous frame are input into the U-net network shown in Fig. 4 as shared features.
In addition, to optimize the classifier for judging whether the tracked polyp also appears in the current frame, the embodiments can collect a target number of positive samples from the region near the polyp detected in the previous frame and a target number of negative samples from regions far from the polyp, then pool the shared feature maps over the sample regions and standardize the shared feature length, and finally train the classifier online on the target number of positive samples and the target number of negative samples. Put another way, the embodiments fine-tune the classifier with a certain number of positive and negative samples taken from the previous frame, so as to complete the polyp classification.
The target number can be 4; the embodiments place no specific limit on this.
In one possible implementation, generating the target number of positive samples and the target number of negative samples based on the shared feature maps includes, but is not limited to: in the final segmentation image of the previous frame, cropping image regions whose overlap with the polyp region exceeds a first value to obtain the target number of positive samples, and cropping image regions whose overlap with the polyp region is below a second value to obtain the target number of negative samples. The first value can be 0.7 and the second value 0.1; the embodiments place no specific limit on this.
Note that the final segmentation image of the previous frame is referred to herein as the third segmentation image, and the third segmentation image is a segmentation image that has undergone an adjustment similar to that shown in step 302b.
For example, as shown in Fig. 6, four positive samples and four negative samples can be generated from the frame preceding the current frame: a positive sample has Jaccard overlap with the polyp region greater than 0.7, and a negative sample has Jaccard overlap with the polyp region less than 0.1. The classifier is fine-tuned (trained online) with these eight samples, and the online-trained classifier then classifies the polyp that the motion regression model predicts for the current frame.
Jaccard overlap is used to compare the similarity and diversity of finite sample sets; the larger the Jaccard coefficient, the more similar the samples.
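The Jaccard-based sample selection can be sketched as follows. The helper names and the idea of filtering candidate boxes are illustrative assumptions; only the thresholds 0.7 and 0.1 and the count of four samples each come from the text above:

```python
def jaccard(a, b):
    """Jaccard (intersection-over-union) overlap of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def split_samples(candidate_boxes, polyp_box, n=4):
    """Keep up to n positives (overlap > 0.7) and n negatives (overlap < 0.1)."""
    positives = [b for b in candidate_boxes if jaccard(b, polyp_box) > 0.7][:n]
    negatives = [b for b in candidate_boxes if jaccard(b, polyp_box) < 0.1][:n]
    return positives, negatives
```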
Note that the position coordinates of the polyp center point in the current frame have already been predicted by the motion regression model; this step classifies that prediction with the online-trained classifier, i.e. determines whether it is an actual polyp. If the classifier judges non-polyp, i.e. classifies the point as background rather than foreground, tracking stops and is not continued in subsequent frames.
In one possible implementation, with computation speed in mind, as shown in Fig. 6 the embodiments use the segmentation output of the U-net network as the input of the classifier. In Fig. 6, the input image fed into the U-net network has size 288 × 384 × 3, and the shared features produced by the U-net network have dimension 18 × 24 × 512, 16 times smaller than the original input image. Therefore, when the feature map of an ROI is extracted, the size of the ROI is directly reduced by a factor of 16.
The embodiments can also crop the ROI feature map directly to the corresponding size. For example, for ease of computation, the embodiments fix the length and width of an ROI at 48 × 48, so the ROI feature map is cropped to a 3 × 3 region, completing the positive- and negative-sample crops. A 1 × 1 × 256 convolutional layer plus a nonlinear layer is then attached, followed by two fully connected layers and a softmax layer; finally, polyp classification is completed with a cross-entropy loss, yielding the classification result of whether the output of the earlier motion regression model is an actual polyp.
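That classifier head can be sketched in PyTorch. The 512 input channels and the 3 × 3 crop follow the dimensions quoted above; the width of the hidden fully connected layer is an assumption:

```python
import torch
import torch.nn as nn

class PolypROIClassifier(nn.Module):
    """Polyp / background head over 3 x 3 crops of the 512-channel shared features."""
    def __init__(self, hidden=128):                    # hidden width is an assumption
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1),        # the 1 x 1 x 256 convolution
            nn.ReLU(inplace=True),                     # nonlinear layer
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 3 * 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),                      # two logits: background / polyp
        )

    def forward(self, roi_feats):                      # roi_feats: (N, 512, 3, 3)
        return self.fc(self.conv(roi_feats))

# Online fine-tuning on the 4 positive + 4 negative crops from the previous frame;
# nn.CrossEntropyLoss applies the softmax mentioned in the text.
# clf = PolypROIClassifier()
# loss = nn.CrossEntropyLoss()(clf(torch.randn(8, 512, 3, 3)), torch.randint(0, 2, (8,)))
```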
Stop-tracking rules
In the embodiments of the present application, if the predicted position coordinates of the polyp center point of the current frame fall outside the image range of the current frame, or if the online-trained classifier classifies the polyp center point predicted for the current frame as background, the embodiments stop tracking that predicted polyp center point.
Put another way, when the predicted position coordinates of the polyp center point of the current frame fall outside the image range of the current frame, tracking of the predicted polyp center point corresponding to those coordinates stops in the next frame; or, when the classifier judges the predicted polyp center point corresponding to those coordinates to be background, tracking of that predicted polyp center point stops in the next frame.
In one possible implementation, since the same polyp may be tracked through multiple polyp center points generated from different frame images, in order to save computation time and reduce unnecessary tracking, when the number of frames over which any one polyp center point has been tracked exceeds a first quantity, tracking of that polyp center point is stopped; the value of the first quantity may be 10, which the embodiment of the present application does not specifically limit.
In addition, to reduce errors caused by the online-trained classifier, when tracking of a polyp center point fails in a second quantity of consecutive images, tracking of that polyp center point is stopped. The value of the second quantity may be 3, which the embodiment of the present application likewise does not specifically limit.
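A minimal sketch of the two stop rules follows, assuming a per-track record of how many frames the point has been tracked and how many consecutive classifier failures it has accumulated; the limits 10 and 3 mirror the example values above.

```python
MAX_TRACK_FRAMES = 10        # "first quantity"
MAX_CONSECUTIVE_FAILS = 3    # "second quantity"

class Track:
    def __init__(self):
        self.frames_tracked = 0
        self.consecutive_fails = 0

def should_stop(track, classified_as_background, out_of_image):
    track.frames_tracked += 1
    if out_of_image:                     # prediction left the image range
        return True
    if classified_as_background:         # online classifier says background
        track.consecutive_fails += 1
    else:
        track.consecutive_fails = 0
    return (track.frames_tracked > MAX_TRACK_FRAMES
            or track.consecutive_fails >= MAX_CONSECUTIVE_FAILS)
```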
In the embodiment of the present application, the current frame image is classified based on the aforementioned second lesion detection result and third lesion detection result.
Spatial voting algorithm
In the embodiment of the present application, after tracking is performed, a frame image may contain multiple polyp center points. However, some of these polyp center points may be outliers, and such outliers can seriously affect the classification result. Empirically, the true polyp center points are concentrated in a small region. Based on this, the embodiment of the present application proposes a spatial voting algorithm to eliminate the outliers. In short, adjacent polyp center points whose Euclidean distance is less than a target threshold are first connected, the connected components are then computed, and the center of the largest connected component is taken as the final polyp center point of the current frame image.
In other words, classifying the current frame image based on the second lesion detection result and the third lesion detection result includes, but is not limited to: when the second lesion detection result and the third lesion detection result provide at least two polyp center points, connecting adjacent polyp center points whose Euclidean distance is less than the target threshold; computing the connected components of the at least two polyp center points; and determining the polyp center point corresponding to the largest connected component as the final polyp center point of the current frame image.
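A minimal sketch of this spatial voting step is given below: center points within the threshold are linked, connected components are found with a simple union-find, and the centroid of the largest component is kept. Taking the centroid as the component's center is an assumption, since the embodiment does not specify how the center of the component is computed.

```python
import numpy as np

def spatial_vote(points, threshold):
    pts = np.asarray(points, dtype=float)     # (N, 2) candidate centers
    n = len(pts)
    parent = list(range(n))

    def find(i):                              # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Connect adjacent points whose Euclidean distance is below the threshold
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < threshold:
                parent[find(i)] = find(j)

    # Group points by component root and pick the largest component
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    largest = max(groups.values(), key=len)
    return pts[largest].mean(axis=0)          # final polyp center point
```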
In conclusion method provided by the embodiments of the present application, in addition to not inheriting the calculating of single frame detection method efficiently and not
Except the advantages of there are cumulative errors, while also at least having the following beneficial effects:
(1) The recall rate of lesions can be significantly improved. On the one hand, the embodiment of the present application completes polyp prediction on single-frame images through an end-to-end deep learning network; on the other hand, it also adds a tracking method to the polyp detection, so that the prediction results of preceding frame images and the image information of the current frame image are combined to complete the final prediction for the current frame image. This image classification approach not only places lower demands on the accuracy of the polyp detection model, but also allows polyps missed by the single-frame detection method to be recovered by the video tracking method provided by the embodiments of the present application.
(2) The temporal continuity of the prediction results can be improved. Whereas the predictions of different frames are mutually independent in the single-frame detection method, the image processing method provided by the embodiments of the present application fuses the prediction results of several preceding frame images into the prediction of the current frame image, so that two adjacent frame images will not yield completely different prediction results. This improves the temporal continuity of polyp detection and avoids situations in which visually identical regions receive different, discontinuous, inconsistent prediction results from the polyp detection model.
(3) The probability of false detection during detection can be reduced. Through the spatial voting algorithm, the embodiment of the present application weeds out erroneous results that are detected in only a few frames.
In another possible implementation, the image processing method provided above has a wide range of application scenarios; it is not only applicable to polyp detection, nor only to intestinal polyp detection, but may also be used to detect other types of disease. That is, for polyp detection of some other disease type or in another body region, detection of that type of disease can likewise be realized based on the image processing method provided by the embodiments of the present application.
In other words, the image processing method provided by the embodiments of the present application can realize the detection of various types of medical diseases and is not limited to polyp detection; the embodiments of the present application merely take intestinal polyp detection as an example for illustration.
In another possible implementation, the foregoing embodiments only detect whether a polyp exists in each image and the position at which the polyp appears in the image. In addition, the embodiments of the present application may further provide more information, such as the size, type and character of the polyp, and a generated diagnosis report on the detection result, which the embodiments of the present application do not specifically limit.
In another possible implementation, in addition to performing polyp detection with the image processing method provided by the above embodiments, a single-frame still-image detection method may also be used for polyp detection. For example, when predicting a certain frame image, the prediction results and image feature information of preceding image frames may also be taken as input together with the current frame image when polyp prediction is performed for the current frame, as sketched below.
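One reasonable realization of this variant, sketched below under the assumption that the preceding prediction is a per-pixel probability map, is to append it as an extra input channel so a single-frame detector still sees the preceding prediction; the channel layout is an assumption, not prescribed by the embodiment.

```python
import numpy as np

def build_input(frame_rgb, prev_prob_map):
    # frame_rgb: (H, W, 3) uint8; prev_prob_map: (H, W) float in [0, 1]
    rgb = frame_rgb.astype(np.float32) / 255.0
    prev = prev_prob_map[..., None].astype(np.float32)
    return np.concatenate([rgb, prev], axis=-1)   # (H, W, 4) network input
```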
In another possible implementation, other video tracking methods may also be used for polyp detection. For example, an end-to-end deep learning method may take a video segment as input and directly generate the polyp prediction result for each frame image in the video through a long short-term memory (LSTM) network or a similar deep learning network. However, this method requires a large number of fully annotated videos for training, and accumulated errors grow as the video lengthens.
In another possible implementation, when performing single-frame polyp detection, another end-to-end deep learning method stacks two-dimensional images into a three-dimensional matrix, taking the time dimension as the third dimension, and computes by way of three-dimensional convolution. However, because of the convolution operations over the extra dimension, the computational complexity of this method is higher than that of two-dimensional convolution, as the sketch below illustrates.
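The following PyTorch sketch contrasts the two variants: stacking T frames along a time axis and convolving in 3D, versus per-frame 2D convolution. The kernel sizes and clip length are illustrative assumptions only.

```python
import torch
import torch.nn as nn

frames = torch.randn(1, 3, 8, 288, 384)      # (N, C, T, H, W), 8-frame clip

conv3d = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=1)
out3d = conv3d(frames)                        # convolves over time as well

conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
out2d = conv2d(frames[:, :, 0])               # one frame at a time

# A 3x3x3 kernel carries 3x the weights of a 3x3 kernel and costs roughly
# 3x the multiply-adds per output element, which is the extra complexity
# the paragraph above refers to.
```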
Fig. 7 is a structural schematic diagram of an image processing apparatus provided by an embodiment of the present application. Referring to Fig. 7, the apparatus includes:
an obtaining module 701, configured to obtain the video image stream of a body region to be detected;
a detection module 702, configured to perform lesion detection on each frame image in the video image stream in turn;
a processing module 703, configured to, for a current frame image, classify the current frame image according to a first lesion detection result of a preceding frame image and a second lesion detection result of the current frame image;
wherein the preceding frame image is at least one frame image temporally located before the current frame image.
The apparatus provided by the embodiments of the present application can, during image processing, take the prediction results of preceding frame images into account in the prediction of the current frame image; that is, the apparatus can combine the prediction results of preceding frame images with the image information of the current frame image to complete the final prediction for a single frame. It thus not only retains the efficiency of the single-frame detection method and its freedom from accumulated errors, but also markedly improves the accuracy of image classification and ensures the continuity of the prediction results by fusing the relevant information of other frame images.
In one possible implementation, the detection module is further configured to input the current frame image into a detection model and obtain a first segmented image output by the detection model, each pixel in the first segmented image indicating the probability that the pixel at the corresponding position in the current frame image belongs to a lesion; adjust the first segmented image and post-process the adjusted first segmented image; calculate the connected components of at least one foreground region in the post-processed first segmented image; rank the at least one connected component by size, and rank the at least one connected component by degree of similarity to a target shape; and when the largest connected component coincides with the connected component closest to the target shape, determine the foreground region indicated by that largest connected component as the lesion center point of the current frame image.
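A minimal sketch of this selection step follows, assuming a binarized segmentation mask: foreground components are labeled with scipy, ranked by area, and a component is accepted only if it is also the one closest to the target shape. Circularity is used here as a stand-in shape score, since the embodiment does not specify the shape-similarity metric.

```python
import numpy as np
from scipy import ndimage

def pick_lesion_center(mask):
    labels, n = ndimage.label(mask)           # label foreground components
    if n == 0:
        return None
    best_by_area = best_by_shape = None
    for i in range(1, n + 1):
        comp = labels == i
        area = comp.sum()
        # Boundary pixel count as a rough perimeter estimate
        perimeter = comp.sum() - ndimage.binary_erosion(comp).sum()
        circularity = 4 * np.pi * area / max(perimeter, 1) ** 2
        if best_by_area is None or area > best_by_area[0]:
            best_by_area = (area, i)
        if best_by_shape is None or circularity > best_by_shape[0]:
            best_by_shape = (circularity, i)
    if best_by_area[1] != best_by_shape[1]:
        return None                            # the two rankings disagree
    ys, xs = np.nonzero(labels == best_by_area[1])
    return float(ys.mean()), float(xs.mean())  # lesion center point
```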
In one possible implementation, the detection module is further configured to obtain an adjusted second segmented image matched with the previous frame image; take the average of the first segmented image and the adjusted second segmented image to obtain the adjusted first segmented image; binarize the adjusted first segmented image using a specified value as the threshold; and remove noise points in the binarized first segmented image and smooth the foreground edge.
In one possible implementation, the processing module is further configured to predict, by linear fitting, the motion vector of the lesion center point of the current frame image according to the motion vectors of the lesion center points of at least one frame image among the preceding frame images, the lesion center points of the at least one frame image being obtained based on the first lesion detection result; track, based on the predicted motion vector of the lesion center point of the current frame image, the position coordinates of the predicted lesion center point in the current frame image; when the predicted position coordinates obtained by tracking lie within the image range of the current frame image, judge the predicted position coordinates with a classifier to obtain a third lesion detection result; and classify the current frame image based on the second lesion detection result and the third lesion detection result.
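A minimal sketch of the linear-fitting step just described: the recent center positions are fit against the frame index with a first-order polynomial and extrapolated one frame ahead. The history format and the degree-1 fit are assumptions about one reasonable realization.

```python
import numpy as np

def predict_center(history):
    # history: list of (frame_index, x, y) from preceding frames
    if len(history) < 2:
        _, x, y = history[-1]
        return x, y                     # too little history to fit a line
    t = np.array([h[0] for h in history], dtype=float)
    xs = np.array([h[1] for h in history], dtype=float)
    ys = np.array([h[2] for h in history], dtype=float)
    fx = np.polyfit(t, xs, 1)           # linear fit x(t)
    fy = np.polyfit(t, ys, 1)           # linear fit y(t)
    t_next = t[-1] + 1
    return np.polyval(fx, t_next), np.polyval(fy, t_next)
```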
In one possible implementation, the apparatus further includes:
a training module, configured to obtain a third segmented image obtained by inputting the previous frame image of the current frame image into the detection model; generate, based on the third segmented image, a target number of positive samples and a target number of negative samples; and perform online training on the classifier based on the target number of positive samples and the target number of negative samples.
In one possible implementation, the training module is further configured to determine a lesion region in the third segmented image; cut, in the third segmented image, image regions whose overlap with the lesion region is greater than a first value to obtain the target number of positive samples; and cut, in the third segmented image, image regions whose overlap with the lesion region is less than a second value to obtain the target number of negative samples.
In one possible implementation, the processing module is further configured to, when the predicted position coordinates exceed the image range of the current frame image, stop tracking the predicted lesion center point corresponding to the predicted position coordinates in the next frame image; or, when the classifier determines that the predicted lesion center point corresponding to the predicted position coordinates is background, stop tracking that predicted lesion center point in the next frame image.
In one possible implementation, the processing module is further configured to, when the number of frames over which any one lesion center point has been tracked exceeds a first quantity, stop tracking that lesion center point; or, when tracking of a lesion center point fails in a second quantity of consecutive images, stop tracking that lesion center point.
In one possible implementation, the processing module is further configured to, when the second lesion detection result and the third lesion detection result provide at least two lesion center points, connect adjacent lesion center points whose Euclidean distance is less than a target threshold; and calculate the connected components of the at least two lesion center points, determining the lesion center point corresponding to the largest connected component as the final lesion center point of the current frame image.
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described here one by one.
It should be noted that when the image processing apparatus provided by the above embodiments performs image processing, the division into the above functional modules is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the image processing apparatus provided by the above embodiments and the image processing method embodiments belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 8 shows a structural block diagram of an image processing device 800 provided by an exemplary embodiment of the present application. The device 800 may be a portable mobile device, such as a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer or a desktop computer. The device 800 may also be called user equipment, a portable device, a laptop device, a desktop device, or other names.
In general, the device 800 includes a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), the GPU being responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 802 may include one or more computer-readable storage media, which may be non-transient. The memory 802 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transient computer-readable storage medium in the memory 802 is used to store at least one instruction, the at least one instruction being executed by the processor 801 to implement the image processing method provided by the method embodiments of the present application.
In some embodiments, the device 800 optionally further includes a peripheral interface 803 and at least one peripheral. The processor 801, the memory 802 and the peripheral interface 803 may be connected by buses or signal wires. Each peripheral may be connected to the peripheral interface 803 by a bus, a signal wire or a circuit board. Specifically, the peripherals include at least one of a radio-frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808 and a power supply 809.
The peripheral interface 803 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802 and the peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802 and the peripheral interface 803 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio-frequency circuit 804 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio-frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals. The radio-frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio-frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio-frequency circuit 804 may communicate with other devices through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio-frequency circuit 804 may also include NFC (Near Field Communication)-related circuits, which is not limited by the present application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video and any combination thereof. When the display screen 805 is a touch display screen, it also has the ability to acquire touch signals on or above its surface. The touch signals may be input to the processor 801 for processing as control signals. In that case, the display screen 805 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments there may be one display screen 805, arranged on the front panel of the device 800; in other embodiments there may be at least two display screens 805, arranged on different surfaces of the device 800 or in a folded design; in still other embodiments, the display screen 805 may be a flexible display screen, arranged on a curved surface or a folded plane of the device 800. The display screen 805 may even be arranged as a non-rectangular irregular figure, i.e., a shaped screen. The display screen 805 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the device and the rear camera on its back. In some embodiments there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera fuse to realize a background-blurring function, the main camera and the wide-angle camera fuse to realize panoramic shooting and VR (Virtual Reality) shooting functions, or other fused shooting functions are realized. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and may be used for light compensation under different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals and input them to the processor 801 for processing, or input them to the radio-frequency circuit 804 to realize voice communication. For stereo acquisition or noise-reduction purposes, there may be multiple microphones arranged at different parts of the device 800. The microphone may also be an array microphone or an omnidirectional acquisition microphone. The speaker is used to convert electrical signals from the processor 801 or the radio-frequency circuit 804 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the device 800 to realize navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 809 is used to supply power to the components in the device 800. The power supply 809 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery: a wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the device 800 further includes one or more sensors 810, including but not limited to an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815 and a proximity sensor 816.
The acceleration sensor 811 can detect the magnitude of acceleration along the three coordinate axes of the coordinate system established with the device 800. For example, the acceleration sensor 811 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 801 may, according to the gravitational acceleration signal collected by the acceleration sensor 811, control the touch display screen 805 to display the user interface in a landscape or portrait view. The acceleration sensor 811 may also be used to collect motion data for games or for the user.
The gyroscope sensor 812 can detect the body direction and rotation angle of the device 800, and may cooperate with the acceleration sensor 811 to capture the user's 3D actions on the device 800. Based on the data collected by the gyroscope sensor 812, the processor 801 may implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 813 may be arranged on a side frame of the device 800 and/or under the touch display screen 805. When the pressure sensor 813 is arranged on a side frame of the device 800, it can detect the user's grip signal on the device 800, and the processor 801 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is arranged under the touch display screen 805, the processor 801 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 805. The operability controls include at least one of a button control, a scroll-bar control, an icon control and a menu control.
The fingerprint sensor 814 is used to collect the user's fingerprint; the processor 801 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as trusted, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and so on. The fingerprint sensor 814 may be arranged on the front, back or side of the device 800. When a physical button or a manufacturer logo is provided on the device 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display screen 805 according to the ambient light intensity collected by the optical sensor 815: when the ambient light intensity is high, the display brightness of the touch display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 according to the ambient light intensity collected by the optical sensor 815.
The proximity sensor 816, also called a distance sensor, is generally arranged on the front panel of the device 800. The proximity sensor 816 is used to measure the distance between the user and the front of the device 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the device 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the bright-screen state to the off-screen state; when the proximity sensor 816 detects that the distance between the user and the front of the device 800 gradually increases, the processor 801 controls the touch display screen 805 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will understand that the structure shown in Fig. 8 does not limit the device 800, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.
Claims (15)
1. An image processing method, characterized in that the method comprises:
obtaining a video image stream of a body region to be detected;
performing lesion detection on each frame image in the video image stream in turn;
for a current frame image, classifying the current frame image according to a first lesion detection result of a preceding frame image and a second lesion detection result of the current frame image;
wherein the preceding frame image is at least one frame image temporally located before the current frame image.
2. The method according to claim 1, characterized in that performing lesion detection on the current frame image comprises:
inputting the current frame image into a detection model and obtaining a first segmented image output by the detection model, each pixel in the first segmented image indicating the probability that the pixel at the corresponding position in the current frame image belongs to a lesion;
adjusting the first segmented image, and post-processing the adjusted first segmented image;
calculating connected components of at least one foreground region in the post-processed first segmented image;
ranking the at least one connected component by size, and ranking the at least one connected component by degree of similarity to a target shape;
when the largest connected component coincides with the connected component closest to the target shape, determining the foreground region indicated by the largest connected component as the lesion center point of the current frame image.
3. The method according to claim 2, characterized in that adjusting the first segmented image and post-processing the adjusted first segmented image comprises:
obtaining an adjusted second segmented image matched with the previous frame image, and taking the average of the first segmented image and the adjusted second segmented image to obtain the adjusted first segmented image;
binarizing the adjusted first segmented image using a specified value as a threshold;
removing noise points in the binarized first segmented image and smoothing the foreground edge.
4. The method according to claim 1, characterized in that classifying the current frame image according to the first lesion detection result of the preceding frame image and the second lesion detection result of the current frame image comprises:
predicting, by linear fitting, the motion vector of the lesion center point of the current frame image according to the motion vectors of the lesion center points of at least one frame image among the preceding frame images, the lesion center points of the at least one frame image being obtained based on the first lesion detection result;
tracking, based on the predicted motion vector of the lesion center point of the current frame image, the position coordinates of the predicted lesion center point in the current frame image;
when the predicted position coordinates obtained by tracking lie within the image range of the current frame image, judging the predicted position coordinates with a classifier to obtain a third lesion detection result;
classifying the current frame image based on the second lesion detection result and the third lesion detection result.
5. The method according to claim 4, characterized in that the method further comprises:
obtaining a third segmented image obtained by inputting the previous frame image of the current frame image into the detection model;
generating, based on the third segmented image, a target number of positive samples and a target number of negative samples;
performing online training on the classifier based on the target number of positive samples and the target number of negative samples.
6. The method according to claim 5, characterized in that generating the target number of positive samples and the target number of negative samples based on the third segmented image comprises:
determining a lesion region in the third segmented image;
cutting, in the third segmented image, image regions whose overlap with the lesion region is greater than a first value to obtain the target number of positive samples;
cutting, in the third segmented image, image regions whose overlap with the lesion region is less than a second value to obtain the target number of negative samples.
7. The method according to claim 4, characterized in that the method further comprises:
when the predicted position coordinates exceed the image range of the current frame image, stopping tracking, in the next frame image, the predicted lesion center point corresponding to the predicted position coordinates; or,
when the classifier determines that the predicted lesion center point corresponding to the predicted position coordinates is background, stopping tracking the predicted lesion center point corresponding to the predicted position coordinates in the next frame image.
8. The method according to any one of claims 4 to 7, characterized in that the method further comprises:
when the number of frames over which any one lesion center point has been tracked exceeds a first quantity, stopping tracking that lesion center point; or,
when tracking of a lesion center point fails in a second quantity of consecutive images, stopping tracking that lesion center point.
9. The method according to claim 4, characterized in that classifying the current frame image based on the second lesion detection result and the third lesion detection result comprises:
when the second lesion detection result and the third lesion detection result provide at least two lesion center points, connecting adjacent lesion center points whose Euclidean distance is less than a target threshold;
calculating connected components of the at least two lesion center points, and determining the lesion center point corresponding to the largest connected component as the final lesion center point of the current frame image.
10. An image processing apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a video image stream of a body region to be detected;
a detection module, configured to perform lesion detection on each frame image in the video image stream in turn;
a processing module, configured to, for a current frame image, classify the current frame image according to a first lesion detection result of a preceding frame image and a second lesion detection result of the current frame image;
wherein the preceding frame image is at least one frame image temporally located before the current frame image.
11. The apparatus according to claim 10, characterized in that the processing module is further configured to predict, by linear fitting, the motion vector of the lesion center point of the current frame image according to the motion vectors of the lesion center points of at least one frame image among the preceding frame images, the lesion center points of the at least one frame image being obtained based on the first lesion detection result; track, based on the predicted motion vector of the lesion center point of the current frame image, the position coordinates of the predicted lesion center point in the current frame image; when the predicted position coordinates obtained by tracking lie within the image range of the current frame image, judge the predicted position coordinates with a classifier to obtain a third lesion detection result; and classify the current frame image based on the second lesion detection result and the third lesion detection result.
12. The apparatus according to claim 11, characterized in that the apparatus further comprises:
a training module, configured to obtain a third segmented image obtained by inputting the previous frame image of the current frame image into the detection model; generate, based on the third segmented image, a target number of positive samples and a target number of negative samples; and perform online training on the classifier based on the target number of positive samples and the target number of negative samples.
13. The apparatus according to claim 12, characterized in that the training module is further configured to determine a lesion region in the third segmented image; cut, in the third segmented image, image regions whose overlap with the lesion region is greater than a first value to obtain the target number of positive samples; and cut, in the third segmented image, image regions whose overlap with the lesion region is less than a second value to obtain the target number of negative samples.
14. An image processing device, characterized in that the device comprises a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 9.
15. An image processing system, characterized in that the system comprises an image capture device, an image processing device and a display device;
the image capture device is configured to capture images of a body region to be detected and obtain a video image stream of the body region to be detected;
the image processing device comprises a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement: obtaining the video image stream; performing lesion detection on each frame image in the video image stream in turn; and, for a current frame image, classifying the current frame image according to a first lesion detection result of a preceding frame image and a second lesion detection result of the current frame image, the preceding frame image being at least one frame image temporally located before the current frame image;
the display device is configured to display the results output by the image processing device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910757897.5A CN110458127B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, equipment and system |
CN201910156660.1A CN109886243B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, storage medium, equipment and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910156660.1A CN109886243B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, storage medium, equipment and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910757897.5A Division CN110458127B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, equipment and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886243A true CN109886243A (en) | 2019-06-14 |
CN109886243B CN109886243B (en) | 2021-03-26 |
Family
ID=66930257
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910757897.5A Active CN110458127B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, equipment and system |
CN201910156660.1A Active CN109886243B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, storage medium, equipment and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910757897.5A Active CN110458127B (en) | 2019-03-01 | 2019-03-01 | Image processing method, device, equipment and system |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110458127B (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110516620A (en) * | 2019-08-29 | 2019-11-29 | 腾讯科技(深圳)有限公司 | Method for tracking target, device, storage medium and electronic equipment |
CN110706222A (en) * | 2019-09-30 | 2020-01-17 | 杭州依图医疗技术有限公司 | Method and device for detecting bone region in image |
CN110717441A (en) * | 2019-10-08 | 2020-01-21 | 腾讯医疗健康(深圳)有限公司 | Video target detection method, device, equipment and medium |
CN111223488A (en) * | 2019-12-30 | 2020-06-02 | Oppo广东移动通信有限公司 | Voice wake-up method, device, equipment and storage medium |
CN111311635A (en) * | 2020-02-08 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Target positioning method, device and system |
CN111383214A (en) * | 2020-03-10 | 2020-07-07 | 苏州慧维智能医疗科技有限公司 | Real-time endoscope enteroscope polyp detection system |
CN111738998A (en) * | 2020-06-12 | 2020-10-02 | 深圳技术大学 | Dynamic detection method and device for focus position, electronic equipment and storage medium |
CN111899268A (en) * | 2020-08-17 | 2020-11-06 | 上海商汤智能科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
CN111915573A (en) * | 2020-07-14 | 2020-11-10 | 武汉楚精灵医疗科技有限公司 | Digestive endoscopy focus tracking method based on time sequence feature learning |
CN111932492A (en) * | 2020-06-24 | 2020-11-13 | 数坤(北京)网络科技有限公司 | Medical image processing method and device and computer readable storage medium |
CN111950517A (en) * | 2020-08-26 | 2020-11-17 | 司马大大(北京)智能系统有限公司 | Target detection method, model training method, electronic device and storage medium |
CN112686865A (en) * | 2020-12-31 | 2021-04-20 | 重庆西山科技股份有限公司 | 3D view auxiliary detection method, system, device and storage medium |
CN112766066A (en) * | 2020-12-31 | 2021-05-07 | 北京小白世纪网络科技有限公司 | Method and system for processing and displaying dynamic video stream and static image |
WO2021114105A1 (en) * | 2019-12-09 | 2021-06-17 | 深圳先进技术研究院 | Training method and system for low-dose ct image denoising network |
CN113116305A (en) * | 2021-04-20 | 2021-07-16 | 深圳大学 | Nasopharyngeal endoscope image processing method and device, electronic equipment and storage medium |
CN113379723A (en) * | 2021-06-29 | 2021-09-10 | 上海闻泰信息技术有限公司 | Irregular glue overflow port detection method, device, equipment and storage medium |
CN114066781A (en) * | 2022-01-18 | 2022-02-18 | 浙江鸿禾医疗科技有限责任公司 | Capsule endoscope intestinal tract image identification and positioning method, storage medium and equipment |
CN114511558A (en) * | 2022-04-18 | 2022-05-17 | 武汉楚精灵医疗科技有限公司 | Method and device for detecting cleanliness of intestinal tract |
CN114841913A (en) * | 2021-02-02 | 2022-08-02 | 载美德有限公司 | Real-time biological image identification method and device |
CN114842239A (en) * | 2022-04-02 | 2022-08-02 | 北京医准智能科技有限公司 | Breast lesion attribute prediction method and device based on ultrasonic video |
WO2023226009A1 (en) * | 2022-05-27 | 2023-11-30 | 中国科学院深圳先进技术研究院 | Image processing method and device |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028219B (en) * | 2019-12-10 | 2023-06-20 | 浙江核睿医疗科技有限公司 | Colon image recognition method and device and related equipment |
CN110974179A (en) * | 2019-12-20 | 2020-04-10 | 山东大学齐鲁医院 | Auxiliary diagnosis system for stomach precancer under electronic staining endoscope based on deep learning |
CN111339999A (en) * | 2020-03-23 | 2020-06-26 | 东莞理工学院 | Image processing system and method for visual navigation robot |
CN111598133B (en) * | 2020-04-22 | 2022-10-14 | 腾讯医疗健康(深圳)有限公司 | Image display method, device, system, equipment and medium based on artificial intelligence |
CN112085760B (en) * | 2020-09-04 | 2024-04-26 | 厦门大学 | Foreground segmentation method for laparoscopic surgery video |
CN112669283B (en) * | 2020-12-29 | 2022-11-01 | 杭州优视泰信息技术有限公司 | Enteroscopy image polyp false detection suppression device based on deep learning |
CN112785573B (en) * | 2021-01-22 | 2024-08-16 | 上海商汤善萃医疗科技有限公司 | Image processing method, related device and equipment |
US20230034727A1 (en) * | 2021-07-29 | 2023-02-02 | Rakuten Group, Inc. | Blur-robust image segmentation |
WO2023113438A1 (en) * | 2021-12-16 | 2023-06-22 | 주식회사 온택트헬스 | Method of providing information about organ functions and device for providing information about organ functions using same |
CN115035153B (en) * | 2022-08-12 | 2022-10-28 | 武汉楚精灵医疗科技有限公司 | Medical image processing method, device and related equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101010927B1 (en) * | 2008-11-26 | 2011-01-25 | 서울대학교산학협력단 | Automated Polyps Detection Method using computer tomographic colonography and Automated Polyps Detection System using the same |
US20110032347A1 (en) * | 2008-04-15 | 2011-02-10 | Gerard Lacey | Endoscopy system with motion sensors |
CN102056530A (en) * | 2008-06-05 | 2011-05-11 | 奥林巴斯株式会社 | Image processing apparatus, image processing program and image processing method |
US20120250957A1 (en) * | 2011-03-31 | 2012-10-04 | International Business Machines Corporation | Shape based similarity of continuous wave doppler images |
CN105243360A (en) * | 2015-09-21 | 2016-01-13 | 西安空间无线电技术研究所 | Ship object self-organizing cluster method based on distance search |
CN108470355A (en) * | 2018-04-04 | 2018-08-31 | 中山大学 | Merge the method for tracking target of convolutional network feature and discriminate correlation filter |
US20180260951A1 (en) * | 2017-03-08 | 2018-09-13 | Siemens Healthcare Gmbh | Deep Image-to-Image Recurrent Network with Shape Basis for Automatic Vertebra Labeling in Large-Scale 3D CT Volumes |
CN109003672A (en) * | 2018-07-16 | 2018-12-14 | 北京睿客邦科技有限公司 | A kind of early stage of lung cancer detection classification integration apparatus and system based on deep learning |
CN109360226A (en) * | 2018-10-17 | 2019-02-19 | 武汉大学 | A kind of multi-object tracking method based on time series multiple features fusion |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063885A (en) * | 2014-07-23 | 2014-09-24 | 山东建筑大学 | Improved movement target detecting and tracking method |
CN109272457B (en) * | 2018-08-09 | 2022-07-22 | 腾讯科技(深圳)有限公司 | Image mask generation method and device and server |
-
2019
- 2019-03-01 CN CN201910757897.5A patent/CN110458127B/en active Active
- 2019-03-01 CN CN201910156660.1A patent/CN109886243B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110032347A1 (en) * | 2008-04-15 | 2011-02-10 | Gerard Lacey | Endoscopy system with motion sensors |
CN102056530A (en) * | 2008-06-05 | 2011-05-11 | 奥林巴斯株式会社 | Image processing apparatus, image processing program and image processing method |
KR101010927B1 (en) * | 2008-11-26 | 2011-01-25 | 서울대학교산학협력단 | Automated Polyps Detection Method using computer tomographic colonography and Automated Polyps Detection System using the same |
US20120250957A1 (en) * | 2011-03-31 | 2012-10-04 | International Business Machines Corporation | Shape based similarity of continuous wave doppler images |
CN105243360A (en) * | 2015-09-21 | 2016-01-13 | 西安空间无线电技术研究所 | Ship object self-organizing cluster method based on distance search |
US20180260951A1 (en) * | 2017-03-08 | 2018-09-13 | Siemens Healthcare Gmbh | Deep Image-to-Image Recurrent Network with Shape Basis for Automatic Vertebra Labeling in Large-Scale 3D CT Volumes |
CN108470355A (en) * | 2018-04-04 | 2018-08-31 | 中山大学 | Merge the method for tracking target of convolutional network feature and discriminate correlation filter |
CN109003672A (en) * | 2018-07-16 | 2018-12-14 | 北京睿客邦科技有限公司 | A kind of early stage of lung cancer detection classification integration apparatus and system based on deep learning |
CN109360226A (en) * | 2018-10-17 | 2019-02-19 | 武汉大学 | A kind of multi-object tracking method based on time series multiple features fusion |
Non-Patent Citations (9)
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110516620B (en) * | 2019-08-29 | 2023-07-28 | 腾讯科技(深圳)有限公司 | Target tracking method and device, storage medium and electronic equipment |
CN110516620A (en) * | 2019-08-29 | 2019-11-29 | 腾讯科技(深圳)有限公司 | Method for tracking target, device, storage medium and electronic equipment |
US11783491B2 (en) | 2019-08-29 | 2023-10-10 | Tencent Technology (Shenzhen) Company Limited | Object tracking method and apparatus, storage medium, and electronic device |
CN110706222A (en) * | 2019-09-30 | 2020-01-17 | 杭州依图医疗技术有限公司 | Method and device for detecting bone region in image |
CN110706222B (en) * | 2019-09-30 | 2022-04-12 | 杭州依图医疗技术有限公司 | Method and device for detecting bone region in image |
CN110717441A (en) * | 2019-10-08 | 2020-01-21 | 腾讯医疗健康(深圳)有限公司 | Video target detection method, device, equipment and medium |
WO2021114105A1 (en) * | 2019-12-09 | 2021-06-17 | 深圳先进技术研究院 | Training method and system for low-dose ct image denoising network |
CN111223488A (en) * | 2019-12-30 | 2020-06-02 | Oppo广东移动通信有限公司 | Voice wake-up method, device, equipment and storage medium |
CN111223488B (en) * | 2019-12-30 | 2023-01-17 | Oppo广东移动通信有限公司 | Voice wake-up method, device, equipment and storage medium |
CN111311635A (en) * | 2020-02-08 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Target positioning method, device and system |
CN111383214A (en) * | 2020-03-10 | 2020-07-07 | 苏州慧维智能医疗科技有限公司 | Real-time endoscope enteroscope polyp detection system |
CN111383214B (en) * | 2020-03-10 | 2021-02-19 | 长沙慧维智能医疗科技有限公司 | Real-time endoscope enteroscope polyp detection system |
CN111738998A (en) * | 2020-06-12 | 2020-10-02 | 深圳技术大学 | Dynamic detection method and device for focus position, electronic equipment and storage medium |
CN111932492A (en) * | 2020-06-24 | 2020-11-13 | 数坤(北京)网络科技有限公司 | Medical image processing method and device and computer readable storage medium |
CN111915573A (en) * | 2020-07-14 | 2020-11-10 | 武汉楚精灵医疗科技有限公司 | Digestive endoscopy focus tracking method based on time sequence feature learning |
CN111899268A (en) * | 2020-08-17 | 2020-11-06 | 上海商汤智能科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
CN111899268B (en) * | 2020-08-17 | 2022-02-18 | 上海商汤智能科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
CN111950517A (en) * | 2020-08-26 | 2020-11-17 | 司马大大(北京)智能系统有限公司 | Target detection method, model training method, electronic device and storage medium |
CN112686865A (en) * | 2020-12-31 | 2021-04-20 | 重庆西山科技股份有限公司 | 3D view auxiliary detection method, system, device and storage medium |
CN112686865B (en) * | 2020-12-31 | 2023-06-02 | 重庆西山科技股份有限公司 | 3D view auxiliary detection method, system, device and storage medium |
CN112766066A (en) * | 2020-12-31 | 2021-05-07 | 北京小白世纪网络科技有限公司 | Method and system for processing and displaying dynamic video stream and static image |
CN114841913A (en) * | 2021-02-02 | 2022-08-02 | 载美德有限公司 | Real-time biological image identification method and device |
CN113116305A (en) * | 2021-04-20 | 2021-07-16 | 深圳大学 | Nasopharyngeal endoscope image processing method and device, electronic equipment and storage medium |
CN113379723B (en) * | 2021-06-29 | 2023-07-28 | 上海闻泰信息技术有限公司 | Irregular glue overflow port detection method, device, equipment and storage medium |
CN113379723A (en) * | 2021-06-29 | 2021-09-10 | 上海闻泰信息技术有限公司 | Irregular glue overflow port detection method, device, equipment and storage medium |
CN114066781B (en) * | 2022-01-18 | 2022-05-10 | 浙江鸿禾医疗科技有限责任公司 | Capsule endoscope intestinal image identification and positioning method, storage medium and equipment |
CN114066781A (en) * | 2022-01-18 | 2022-02-18 | 浙江鸿禾医疗科技有限责任公司 | Capsule endoscope intestinal tract image identification and positioning method, storage medium and equipment |
CN114842239A (en) * | 2022-04-02 | 2022-08-02 | 北京医准智能科技有限公司 | Breast lesion attribute prediction method and device based on ultrasonic video |
CN114511558B (en) * | 2022-04-18 | 2022-07-19 | 武汉楚精灵医疗科技有限公司 | Method and device for detecting cleanliness of intestinal tract |
CN114511558A (en) * | 2022-04-18 | 2022-05-17 | 武汉楚精灵医疗科技有限公司 | Method and device for detecting cleanliness of intestinal tract |
WO2023226009A1 (en) * | 2022-05-27 | 2023-11-30 | 中国科学院深圳先进技术研究院 | Image processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110458127B (en) | 2021-02-26 |
CN110458127A (en) | 2019-11-15 |
CN109886243B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109886243A (en) | Image processing method, device, storage medium, equipment and system | |
CN110504029B (en) | Medical image processing method, medical image identification method and medical image identification device | |
JP7085062B2 (en) | Image segmentation methods, equipment, computer equipment and computer programs | |
CN108549863B (en) | Human body gesture prediction method, apparatus, equipment and storage medium | |
JP7186287B2 (en) | Image processing method and apparatus, electronic equipment and storage medium | |
CN110348543B (en) | Fundus image recognition method and device, computer equipment and storage medium | |
CN110070056A (en) | Image processing method, device, storage medium and equipment | |
CN110147805A (en) | Image processing method, device, terminal and storage medium | |
CN110059744A (en) | Method, the method for image procossing, equipment and the storage medium of training neural network | |
JP2022518745A (en) | Target position acquisition method, equipment, computer equipment and computer program | |
CN109978936A (en) | Parallax picture capturing method, device, storage medium and equipment | |
CN110570460B (en) | Target tracking method, device, computer equipment and computer readable storage medium | |
CN110135336A (en) | Training method, device and the storage medium of pedestrian's generation model | |
WO2022193973A1 (en) | Image processing method and apparatus, electronic device, computer readable storage medium, and computer program product | |
CN110009599A (en) | Liver masses detection method, device, equipment and storage medium | |
JP2022548453A (en) | Image segmentation method and apparatus, electronic device and storage medium | |
CN112487844A (en) | Gesture recognition method, electronic device, computer-readable storage medium, and chip | |
WO2023202285A1 (en) | Image processing method and apparatus, computer device, and storage medium | |
CN110705438A (en) | Gait recognition method, device, equipment and storage medium | |
CN110517771B (en) | Medical image processing method, medical image identification method and device | |
CN111598896A (en) | Image detection method, device, equipment and storage medium | |
CN113257412B (en) | Information processing method, information processing device, computer equipment and storage medium | |
CN117038088A (en) | Method, device, equipment and medium for determining onset of diabetic retinopathy | |
CN110135329A (en) | Method, apparatus, equipment and the storage medium of posture are extracted from video | |
CN111639639A (en) | Method, device, equipment and storage medium for detecting text area |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20190911 Address after: Room 201, Building A, No. 1 Qianwan 1st Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518000 Applicant after: Tencent Medical Health (Shenzhen) Co., Ltd. Address before: 35th floor, Tencent Building, Science and Technology Park, Nanshan District, Shenzhen, Guangdong 518057 Applicant before: Tencent Technology (Shenzhen) Co., Ltd.
|
GR01 | Patent grant | ||
GR01 | Patent grant |