CN110458223A - Automatic endoscopic detection method and detection system for bronchial tumors - Google Patents
- Publication number
- CN110458223A CN110458223A CN201910722331.9A CN201910722331A CN110458223A CN 110458223 A CN110458223 A CN 110458223A CN 201910722331 A CN201910722331 A CN 201910722331A CN 110458223 A CN110458223 A CN 110458223A
- Authority
- CN
- China
- Prior art keywords
- bronchus
- tumor
- image
- value
- brightness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0084—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for introduction into the body, e.g. by catheters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Abstract
The present invention relates to the technical field of medical equipment, and discloses an automatic endoscopic detection method and detection system for bronchial tumors. The invention provides a new method, based on convolutional neural networks, that can stand in for a physician in automatically identifying the state of a bronchial tumor: tumor-recognition training based on a convolutional neural network is first performed on a large number of bronchial endoscopic images, and the trained model is then used to predict the tumor type in real-time detection images. This replaces the existing graph-construction approach to identifying tumor types during real-time bronchial examination. It not only greatly reduces the computational demand of real-time data processing, but can also discover a tumor condition automatically and remind the physician to make a further diagnosis so that the condition is confirmed in time. The method thus reduces the physician's workload, allows timely confirmation of whether a lesion is present, avoids delaying the window for treatment, and is particularly helpful for discovering early lesions.
Description
Technical field
The invention belongs to the technical field of medical equipment, and more particularly relates to an automatic endoscopic detection method and detection system for bronchial tumors.
Background technique
Computer technology is now widely used in the medical field, for example to digitize and analyze medical images and thereby help clinicians discover lesions. The prior art discloses research on texture features and the algorithms that describe them, including algorithm applications and description languages; methods based on Gabor filters, for instance, match the perceptual characteristics of the human visual system and the physiology of human vision, and are an important direction in texture image analysis. The methods disclosed in the prior art each have advantages and disadvantages: in practical applications many texture segmentation algorithms suffer from low classification accuracy, computational complexity, and difficult parameter selection, problems that constrain the application of these algorithms to some extent.
A single bronchial endoscopic image often contains a large amount of information. Most current graph-based analysis algorithms for bronchial endoscopic images treat every pixel of the image as a node of a graph and connect every pair of adjacent nodes to form the edges of the graph, so that an edge expresses the adjacency between two pixels in the image. Once adjacency is established, a weight can be computed for each edge from the information of the two neighboring pixels that the edge connects; this weight expresses the degree of similarity between the two pixels. Although building a graph in this way largely preserves the raw information of a bronchial endoscopic image, creating a node for every pixel includes a great deal of unnecessary local information; moreover, for a higher-resolution bronchial endoscopic image the constructed graph becomes very large and demands a very large amount of computation, making this approach unusable for real-time endoscopy.
In addition, when an endoscope (including gastroscopes, colonoscopes, bronchoscopes, laparoscopes, and other optical and image-acquisition instruments that enter the human body through a natural or surgically created cavity for diagnostic detection and treatment) is currently used to examine bronchial tumors in real time, the physician must concentrate intently on the endoscopic pictures collected by the instrument, which greatly increases the physician's workload. How to reduce the physician's workload and discover bronchial tumors automatically is therefore an urgent problem.
Summary of the invention
To solve the problem that existing real-time endoscopic detection can neither reduce the physician's workload nor discover bronchial tumors automatically, the object of the present invention is to provide an automatic endoscopic detection method and detection system for bronchial tumors.
The technical scheme adopted by the invention is as follows:
An automatic endoscopic detection method for bronchial tumors includes the following steps:
S101. Obtain multiple bronchial endoscopic images and label the tumor type of each image, where the tumor types include a no-tumor type and one or more tumor-present types, and for each tumor type the number of corresponding bronchial endoscopic images is no fewer than 300.
S102. For each bronchial endoscopic image, extract a corresponding feature-value set, where the set contains M² feature values of different dimensions and M is a natural number not less than 3.
S103. For each bronchial endoscopic image, generate from the corresponding feature-value set a corresponding first feature grayscale image with M*M pixels.
S104. Import the first feature grayscale image of each bronchial endoscopic image, together with the tumor type labeled for it, into a convolutional neural network model as a training sample for tumor-recognition training, where the first feature grayscale image serves as the sample input data and the corresponding tumor type serves as the sample verification data.
S105. Obtain a bronchial detection video stream from the endoscope.
S106. Extract, in real time, one frame of bronchial video from the detection video stream as the current image to be detected.
S107. For the current image to be detected, generate a corresponding second feature grayscale image with M*M pixels in the same way the bronchial endoscopic images were processed.
S108. Import the second feature grayscale image of the current image to be detected into the convolutional neural network model that completed tumor-recognition training in step S104, perform tumor-identification prediction, and obtain the membership probability of each tumor type.
S109. Judge whether the membership probability of any tumor-present type exceeds a first threshold; if it does, conclude that a tumor condition has been discovered and issue a reminder message.
As an optimization, in step S102 the corresponding feature values of a given bronchial endoscopic image are extracted as follows:
S211. Obtain, via histogram statistics, the maximum-brightness region and the minimum-brightness region of the bronchial endoscopic image.
S212. Compute the first brightness median MB_Max of the maximum-brightness region and the second brightness median MB_Min of the minimum-brightness region, and compute the brightness ratio MB_Max/MB_Min.
S213. Use the first brightness median, the second brightness median, and the brightness ratio as the feature values of three of the dimensions.
As an optimization, in step S102 the corresponding feature values of a given bronchial endoscopic image are extracted as follows:
S221. Divide the bronchial endoscopic image into a dark-spot region and a background region using brightness-threshold separation.
S222. Compute the mean-brightness ratio and the total-area ratio of the dark-spot region to the background region, and count the total number of dark spots in the dark-spot region.
S223. Use the mean-brightness ratio, the total-area ratio, and the total dark-spot count as the feature values of three of the dimensions.
As an optimization, in step S103 the first feature grayscale image is generated as follows:
S301. For each feature value in the feature-value set, map the value into the range 0~255 according to the following formula:

R_i = round(255 * (v_i - v_min) / (v_max - v_min))

where R_i is the mapped value of the i-th-dimension feature value, round() is the rounding function, v_i is the i-th-dimension feature value, v_max is the maximum among all i-th-dimension feature values, v_min is the minimum among all i-th-dimension feature values, and i is a natural number between 1 and M².
S302. For each feature value in the feature-value set, use the corresponding mapped value, one by one, as the gray value of one pixel, obtaining the first feature grayscale image with M*M pixels.
As an optimization, in step S104 the convolutional neural network model includes an input layer, a convolutional layer, an activation-function layer, a fully connected layer, a dropout layer, and an output layer.
The input layer is used to import the first and second feature grayscale images.
The convolutional layer is used to perform convolution on the imported feature grayscale image, and is configured with N convolution kernels of size m*m*1, where N is a natural number greater than 8 and m is a natural number not less than 3 and not greater than M.
The activation-function layer is used to activate the output of the convolutional layer, with the Sigmoid function chosen as the activation function.
The fully connected layer is used to map the feature maps produced by the convolution kernels of the convolutional layer to a sample-label space.
The dropout layer is used to set a randomly selected portion of the neurons in the fully connected layer to 0 during each propagation or update, preventing overfitting.
The output layer is used to output the membership probability of each tumor type, using a Softmax classifier to determine the tumor type of the imported feature grayscale image and compute the membership probabilities of the different tumor types.
As an optimization, during the tumor-recognition training of step S104, the convolutional neural network model is continuously optimized according to how well the most probable tumor type produced by training matches the sample verification data, until training is complete or until the matching rate between the most probable tumor type and the sample verification data reaches a second threshold.
As an optimization, in step S106, if the bronchial detection video stream is an MPEG video stream, the I-frames in the MPEG video stream that belong to a GOP (group of pictures) are extracted as the current images to be detected.
As a further optimization, in step S106, one frame is extracted from every φ consecutive I-frames as the current image to be detected, where φ is a natural number between 1~30.
In step S109, if a tumor condition is discovered, the parameter φ is reduced, and steps S106~S109 then continue to be executed.
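The adaptive sampling policy of these two optimization steps can be sketched as follows. The frame representation, the helper name, and the halving rule used to reduce φ are illustrative assumptions; the patent only requires that φ be reduced when a tumor condition is found.

```python
# Sketch of the adaptive I-frame sampling of steps S106~S109 (assumed
# helper names; the patent fixes the policy, not an API).

def adaptive_sampling(i_frames, phi=10, phi_min=1):
    """Select one frame per phi consecutive I-frames; shrink phi
    (denser sampling) once a tumor condition is reported."""
    selected = []
    count = 0
    for idx, frame in enumerate(i_frames):
        count += 1
        if count >= phi:                      # one frame per phi I-frames
            count = 0
            selected.append(idx)
            if frame.get("tumor"):            # S109: condition discovered
                phi = max(phi_min, phi // 2)  # reduce phi (assumed rule)
    return selected

# Simulated stream: a tumor is "found" on I-frame 9, after which
# sampling becomes twice as dense.
frames = [{"tumor": i == 9} for i in range(40)]
picked = adaptive_sampling(frames, phi=10)
```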
Specifically, the tumor-present types include a benign tumor type and a malignant tumor type.
Another technical solution of the present invention is as follows:
An automatic endoscopic detection system for bronchial tumors includes a computer device that implements the automatic endoscopic detection method for bronchial tumors described above, and further includes an endoscope, a transmission cable, and a display screen, where the endoscope is communicatively connected to the computer device through the transmission cable, and the computer device is also communicatively connected to the display screen.
The invention has the following beneficial effects:
(1) The invention provides a new detection method and detection system, based on convolutional neural networks, that can stand in for a physician in automatically identifying the state of a bronchial tumor: tumor-recognition training based on a convolutional neural network is first performed on a large number of bronchial endoscopic images, and the trained model is then used to predict the tumor type in real-time detection images. This replaces the existing graph-construction approach to identifying tumor types during real-time bronchial examination; it not only greatly reduces the computational demand of real-time data processing but also discovers a tumor condition automatically and reminds the physician to make a further diagnosis so that the condition is confirmed in time. The method thus reduces the physician's workload, allows timely confirmation of whether a lesion is present, avoids delaying the window for treatment, and is particularly helpful for discovering early lesions.
Brief description of the drawings
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the automatic endoscopic detection method for bronchial tumors provided by the invention.
Fig. 2 is a structural schematic diagram of the automatic endoscopic detection system for bronchial tumors provided by the invention.
Specific embodiment
The present invention is further elaborated below with reference to the drawings and specific embodiments. It should be noted that although the explanations of these examples help in understanding the invention, they do not constitute a limitation of the invention. The specific structural and functional details disclosed herein are only for describing example embodiments of the invention; the invention can be embodied in many alternative forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that although the terms first, second, etc. may be used herein to describe various units, these units should not be limited by these terms, which are only used to distinguish one unit from another. For example, a first unit could be called a second unit, and similarly a second unit could be called a first unit, without departing from the scope of the example embodiments of the invention.
It should be understood that the term "and/or" that may appear herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three situations: A alone, B alone, and both A and B. The term "/and" that may appear herein describes another association relationship and indicates that two relationships may exist; for example, A /and B may indicate two situations: A alone, or A and B together. In addition, the character "/" that may appear herein generally indicates an "or" relationship between the objects before and after it.
It should be understood that when a unit is referred to herein as being "connected", "joined", or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units may be present; by contrast, when a unit is referred to as being "directly connected" or "directly coupled" to another unit, no intermediate unit is present. Other words used to describe relationships between units should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It should be understood that the terms used herein are only for describing specific embodiments and are not intended to limit the example embodiments of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises", "comprising", "includes", and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, units, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
It should further be noted that in some alternative embodiments the functions or actions noted may occur in an order different from that shown in the drawings; for example, depending on the functions or actions involved, two figures shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order.
Specific details are provided in the following description so that the example embodiments can be fully understood. However, those of ordinary skill in the art will appreciate that example embodiments can be implemented without these specific details. For example, a system may be shown in block diagrams so as not to obscure the examples with unnecessary detail, and in other instances well-known processes, structures, and techniques may be shown without unnecessary detail so as not to obscure the example embodiments.
Embodiment one
As shown in Fig. 1, this embodiment provides an automatic endoscopic detection method for bronchial tumors that includes the following steps S101~S109.
S101. Obtain multiple bronchial endoscopic images and label the tumor type of each image, where the tumor types include a no-tumor type and one or more tumor-present types, and for each tumor type the number of corresponding bronchial endoscopic images is no fewer than 300.
In step S101, the bronchial endoscopic images are historically acquired images obtained by examining bronchial tumors with an endoscope. The tumor type of each bronchial endoscopic image can be labeled manually. To ensure that enough samples are available for subsequent training and that an identification model with high prediction accuracy can be obtained, the number of bronchial endoscopic images for each tumor type should be no fewer than 300. In addition, the tumor types can be subdivided into multiple groups; specifically, the tumor-present types can, but need not, include a benign tumor type, a malignant tumor type, and so on.
S102. For each bronchial endoscopic image, extract a corresponding feature-value set, where the set contains M² feature values of different dimensions and M is a natural number not less than 3.
In step S102, the corresponding feature values of a given bronchial endoscopic image can, but need not, be extracted as follows: S211. Obtain, via histogram statistics, the maximum-brightness region and the minimum-brightness region of the bronchial endoscopic image. S212. Compute the first brightness median MB_Max of the maximum-brightness region and the second brightness median MB_Min of the minimum-brightness region, and compute the brightness ratio MB_Max/MB_Min. S213. Use the first brightness median, the second brightness median, and the brightness ratio as the feature values of three of the dimensions. In step S211, a statistical histogram refers to measuring a certain physical quantity repeatedly under the same conditions to obtain a series of measured values; its maximum and minimum are found, an interval containing all the measured data is determined, the interval is divided into several sub-intervals, and the frequency F with which the measured data fall into each sub-interval is counted. With the measured data as the abscissa and the frequency F as the ordinate, marking each sub-interval with its corresponding frequency height yields a histogram, i.e., a statistical histogram; in this way the maximum-brightness region and the minimum-brightness region of the bronchial endoscopic image can be obtained by counting. The concrete way of computing the two brightness medians is a common existing technique and is not repeated here.
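A minimal sketch of steps S211~S213, assuming plain luminance values, a 16-bin histogram, and that the occupied brightest and darkest bins stand in for the maximum- and minimum-brightness regions; the patent fixes none of these details.

```python
from statistics import median

def brightness_features(pixels, n_bins=16):
    """Sketch of S211~S213: histogram the luminance values, take the
    occupied brightest/darkest bins as the max/min-brightness regions,
    and return (MB_Max, MB_Min, MB_Max/MB_Min). Bin count and region
    definition are assumptions."""
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / n_bins or 1            # guard against a flat image
    bins = [[] for _ in range(n_bins)]
    for p in pixels:
        i = min(int((p - lo) / width), n_bins - 1)
        bins[i].append(p)
    brightest = next(b for b in reversed(bins) if b)   # max-brightness region
    darkest = next(b for b in bins if b)               # min-brightness region
    mb_max, mb_min = median(brightest), median(darkest)
    ratio = mb_max / mb_min if mb_min else float("inf")  # avoid divide-by-zero
    return mb_max, mb_min, ratio

feats = brightness_features([10, 12, 11, 200, 210, 205])
```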
In step S102, the corresponding feature values of a given bronchial endoscopic image can also, but need not, be extracted as follows: S221. Divide the bronchial endoscopic image into a dark-spot region and a background region using brightness-threshold separation. S222. Compute the mean-brightness ratio and the total-area ratio of the dark-spot region to the background region, and count the total number of dark spots in the dark-spot region. S223. Use the mean-brightness ratio, the total-area ratio, and the total dark-spot count as the feature values of three of the dimensions.
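Steps S221~S223 can be sketched as follows, assuming a 2-D luminance grid, a fixed brightness threshold, and 4-connectivity when counting dark spots; none of these choices is fixed by the patent.

```python
def dark_spot_features(img, threshold=50):
    """Sketch of S221~S223: split a 2-D luminance grid into dark-spot
    pixels (below `threshold`) and background, then return the
    mean-brightness ratio, the area ratio, and the number of
    4-connected dark spots."""
    h, w = len(img), len(img[0])
    dark = [(r, c) for r in range(h) for c in range(w) if img[r][c] < threshold]
    back = [(r, c) for r in range(h) for c in range(w) if img[r][c] >= threshold]

    def mean(pts):
        return sum(img[r][c] for r, c in pts) / len(pts)

    brightness_ratio = mean(dark) / mean(back)
    area_ratio = len(dark) / len(back)

    # Count 4-connected components among dark pixels -> total dark-spot number.
    todo, spots = set(dark), 0
    while todo:
        spots += 1
        stack = [todo.pop()]
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in todo:
                    todo.remove(nb)
                    stack.append(nb)
    return brightness_ratio, area_ratio, spots

demo = [[0, 0, 100, 100],
        [0, 0, 100, 100],
        [100, 100, 100, 0],
        [100, 100, 0, 0]]
br, ar, n_spots = dark_spot_features(demo)   # two separate dark spots
```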
In addition, in step S102 the number M can be exemplified as 6, so that 36 feature values of different dimensions are obtained in the feature-value set.
S103. For each bronchial endoscopic image, generate from the corresponding feature-value set a corresponding first feature grayscale image with M*M pixels.
In step S103, the first feature grayscale image can, but need not, be generated as follows: S301. For each feature value in the feature-value set, map the value into the range 0~255 according to the following formula:

R_i = round(255 * (v_i - v_min) / (v_max - v_min))

where R_i is the mapped value of the i-th-dimension feature value, round() is the rounding function, v_i is the i-th-dimension feature value, v_max is the maximum among all i-th-dimension feature values, v_min is the minimum among all i-th-dimension feature values, and i is a natural number between 1 and M². S302. For each feature value in the feature-value set, use the corresponding mapped value, one by one, as the gray value of one pixel, obtaining the first feature grayscale image with M*M pixels.
S104. Import the first feature grayscale image of each bronchial endoscopic image, together with the tumor type labeled for it, into a convolutional neural network model as a training sample for tumor-recognition training, where the first feature grayscale image serves as the sample input data and the corresponding tumor type serves as the sample verification data.
In step S104, the convolutional neural network model is a mathematical computing model that processes information through a coupling structure similar to brain synapses; specifically, it includes an input layer, a convolutional layer, an activation-function layer, a fully connected layer, a dropout layer, and an output layer. The input layer is used to import the first feature grayscale image (and, in the subsequent step S108, the second feature grayscale image). The convolutional layer is used to perform convolution on the imported feature grayscale image, and is configured with N convolution kernels of size m*m*1, where N is a natural number greater than 8 and m is a natural number not less than 3 and not greater than M. The activation-function layer is used to activate the output of the convolutional layer, with the Sigmoid function chosen as the activation function. The fully connected layer is used to map the feature maps produced by the convolution kernels of the convolutional layer to a sample-label space. The dropout layer is used to set a randomly selected portion of the neurons in the fully connected layer to 0 during each propagation or update, preventing overfitting. The output layer is used to output the membership probability of each tumor type, using a Softmax classifier to determine the tumor type of the imported feature grayscale image and compute the membership probabilities of the different tumor types.
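As a concrete illustration of this layer stack, the following is a minimal pure-Python sketch of a single forward pass. The random untrained weights, the seed, and the 19-class output are illustrative assumptions; a real implementation would use a deep-learning framework and trained weights.

```python
import math
import random

random.seed(0)

def zero_pad(img, p):
    w = len(img[0]) + 2 * p
    top = [[0.0] * w for _ in range(p)]
    bot = [[0.0] * w for _ in range(p)]
    return top + [[0.0] * p + list(row) + [0.0] * p for row in img] + bot

def conv2d(img, kernel, stride=1, p=1):
    img = zero_pad(img, p)
    k = len(kernel)
    side = (len(img) - k) // stride + 1
    return [[sum(kernel[i][j] * img[r * stride + i][c * stride + j]
                 for i in range(k) for j in range(k))
             for c in range(side)] for r in range(side)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    return [v / sum(e) for v in e]

def forward(img, kernels, fc_weights, drop_rate=0.0):
    # Convolutional layer followed by the Sigmoid activation layer.
    maps = [[sigmoid(v) for row in conv2d(img, k) for v in row] for k in kernels]
    flat = [v for fm in maps for v in fm]        # input to the fully connected layer
    if drop_rate:                                # dropout layer: zero some neurons
        flat = [0.0 if random.random() < drop_rate else v for v in flat]
    logits = [sum(w * v for w, v in zip(ws, flat)) for ws in fc_weights]
    return softmax(logits)                       # output layer: membership probabilities

# 6x6x1 input, 32 kernels of 3x3x1, 19 output classes (the embodiment's numbers).
img = [[random.random() for _ in range(6)] for _ in range(6)]
kernels = [[[random.gauss(0, 0.1) for _ in range(3)] for _ in range(3)]
           for _ in range(32)]
fc = [[random.gauss(0, 0.1) for _ in range(32 * 36)] for _ in range(19)]
probs = forward(img, kernels, fc, drop_rate=0.2)
```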
In the convolutional layer, for a feature grayscale image of size 6*6*1 (i.e., width 6, height 6, 1 color channel), one can, as a concrete example, construct 32 convolution kernels of size 3*3*1 (i.e., width 3, height 3, 1 color channel), and set the stride stride=1 and the fill padding=1. Because the convolution operation performed in the convolutional layer usually changes the size of the feature grayscale image, the size of the output feature grayscale image can be expressed by the following formulas:

W2 = (W1 - WK + 2*padding) / stride + 1
H2 = (H1 - HK + 2*padding) / stride + 1

where W1 and H1 are the width and height of the feature grayscale image before convolution, W2 and H2 are its width and height after convolution, WK and HK are the width and height of the convolution kernel, and d2 is the number of channels of the output feature grayscale image (equal to the number of convolution kernels). Padding is the fill amount: because during convolution the image pixels may fail to satisfy the kernel's convolution condition, 0-valued pixels may need to be filled around the image border so that the convolution operation can proceed.
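The output-size formulas can be checked numerically; the helper name is illustrative.

```python
def conv_output_size(w1, h1, wk, hk, stride=1, padding=1):
    """W2 = (W1 - WK + 2*padding)/stride + 1, and likewise for H2."""
    w2 = (w1 - wk + 2 * padding) // stride + 1
    h2 = (h1 - hk + 2 * padding) // stride + 1
    return w2, h2

# The embodiment's example: 6x6 input, 3x3 kernel, stride 1, padding 1.
size = conv_output_size(6, 6, 3, 3)   # padding=1 preserves the 6x6 size
```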
In the activation function layer, the output result of the convolutional layer is fed into an activation function. Many activation functions exist; since the application scenario of this embodiment is bronchial tumor detection, the Sigmoid function is selected as the activation function. It is sensitive to variation in the middle of its input range and suppresses values at both ends, so it captures fine changes in the feature values while compressing all values into a reasonable range (a convolution operation is essentially a linear operation; the purpose of adding an "activation" operation is to introduce a nonlinear factor, so that the neural network has better generalization ability and performance).
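The Sigmoid behavior described above can be sketched as follows (a minimal illustration, not code from the original):

```python
import math

def sigmoid(x):
    """Sigmoid activation: compresses any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Sensitive near the middle of the range, suppressed at both ends:
print(sigmoid(0.0))   # 0.5
print(sigmoid(10.0))  # close to 1, nearly saturated
```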
In the fully connected layer, each convolution kernel operation produces one feature map, and the fully connected layer is responsible for mapping the 32 feature maps produced by the 32 convolution kernels to one sample label space.
In the dropout layer, a Dropout operation is performed; this operation prevents the over-fitting phenomenon from occurring. Through the Dropout operation, the network model in this embodiment sets 20% of the neurons of the fully connected layer to 0 in each propagation or update process.
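The 20% Dropout described above can be sketched as follows (a minimal illustration; real frameworks also rescale the surviving activations, which the original does not discuss):

```python
import random

def dropout(values, rate=0.2, rng=random.Random(0)):
    """Randomly set `rate` of the activations to 0 (training-time Dropout)."""
    return [0.0 if rng.random() < rate else v for v in values]

out = dropout([1.0] * 10)
print(out)  # roughly 20% of the 10 activations are zeroed
```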
In the output layer, an illustrative configuration is 19 neurons, corresponding to 19 tumor categories (i.e. one no-tumor type and 18 with-tumor types); in particular, a Softmax classifier is constructed in this layer to determine the tumor category. The output of the Softmax classifier can be expressed as follows:
P(yi|xi) = e^(Wi·xi) / Σj=1..n e^(Wj·xi)
In the formula, e is the base of the natural logarithm, n is the number of tumor categories, Wj is the weight parameter connecting the fully connected layer to the j-th neuron of the output layer, and P(yi|xi) is the probability of belonging to the i-th tumor type, that is, the membership probability of that tumor type: yi is the result, meaning the probability or possibility that the event occurs under the premise or condition xi.
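The Softmax output above can be sketched as follows (an illustrative implementation; the logits stand in for the Wj·xi values of the formula):

```python
import math

def softmax_probs(logits):
    """Membership probabilities from the output-layer logits (the Wj·xi values)."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three of the tumor categories:
probs = softmax_probs([2.0, 1.0, 0.1])
print(probs)       # the highest logit gets the highest membership probability
print(sum(probs))  # probabilities sum to 1
```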
During the tumor recognition training of the step S104, the convolutional neural network model is continuously optimized according to the matching results between the most probable tumor type obtained in training and the sample verification data, until training is completed or until the matching rate between the most probable tumor type obtained in training and the sample verification data reaches a second threshold. The second threshold can be either a preset threshold value or a default value, for example 90%. Specifically, the membership probabilities output by the Softmax classifier can be used to obtain the matching rate between the most probable tumor type and the sample verification data: the higher the membership probability, the higher the matching rate and the better the match.
S105. A bronchus detection video stream from an endoscope is obtained.
In the step S105, the bronchus detection video stream is collected in real time by the endoscope while bronchial tumor detection is performed.
S106. A frame of bronchus video image is extracted in real time from the bronchus detection video stream as the current image to be detected.
In the step S106, as an optimization, if the bronchus detection video stream is an MPEG video stream, the I-frame images in the MPEG video stream that belong to GOP picture frames are extracted as the current image to be detected. To facilitate long-distance or wireless transmission of the bronchus detection video stream, the endoscope side can encode it into an MPEG (Moving Picture Experts Group, the working group established by ISO and IEC in 1988 to formulate international standards for compressing moving images and speech) video stream with GOP (Group of Pictures, i.e. a group of consecutive pictures) picture frames. A coding characteristic of an MPEG video stream is that picture frames are divided into three kinds: I frames, P frames and B frames. An I frame is an intra-coded picture frame; it can be understood as a complete retention of its picture, so decoding needs only this frame's data to generate the final picture. An I frame is usually the first frame of each GOP; it is moderately compressed and can serve as a reference point for random access. A P frame is a forward-predictive-coded picture frame; it represents its picture as the difference from a preceding I frame or P frame, so decoding must superimpose the cached preceding picture on the difference defined by this frame to generate the final picture. A B frame is a bidirectionally-predictive-coded picture frame; it records the difference between its picture and the preceding and following frames, so decoding requires not only the cached preceding picture but also the decoded following picture, and the final picture is obtained by superimposing the preceding and following pictures on this frame's data (B frames may be absent from some MPEG video streams). Therefore, directly extracting the I-frame images as the current images to be detected avoids the reduction processing of P-frame or B-frame images, which greatly reduces the amount of data processing while still guaranteeing that a tumor condition is found in time. To further reduce the amount of data processing, as an optimization, one frame image can be extracted as the current image to be detected from every φ consecutive I-frame images, wherein φ is a natural number between 1 and 30.
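The I-frame sampling described above can be sketched as follows (a minimal simulation: the frame objects, their picture-type labels and the `phi` parameter name are illustrative, not part of the original):

```python
def sample_i_frames(frames, phi=5):
    """Yield every phi-th I frame from a stream of (pict_type, data) frames.

    Sketch of step S106: P and B frames are skipped entirely (no reduction
    processing), and only one of every `phi` consecutive I frames is kept.
    """
    count = 0
    for pict_type, data in frames:
        if pict_type != "I":
            continue
        if count % phi == 0:
            yield data
        count += 1

# A toy GOP pattern: I P B I P I I
stream = [("I", 0), ("P", 1), ("B", 2), ("I", 3), ("P", 4), ("I", 5), ("I", 6)]
print(list(sample_i_frames(stream, phi=2)))  # [0, 5]
```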
S107. For the current image to be detected, a corresponding second feature grayscale map with M*M pixels is generated in the same manner as for processing a bronchus endoscopic image.
S108. The second feature grayscale map of the current image to be detected is imported into the convolutional neural network model that completed tumor recognition training in the step S104 for tumor recognition prediction, obtaining the membership probabilities of the different tumor types.
In the step S108, the membership probabilities of the different tumor types can specifically be obtained from the output of the Softmax classifier.
S109. It is judged whether the membership probability of any with-tumor type exceeds a first threshold; if it does, a tumor condition is determined to be found and a reminder message is issued.
In the step S109, the first threshold can be either a preset threshold value or a default value, for example 68%. The reminder message is used to prompt the doctor on the scene that a tumor condition has been found, so that a further diagnosis can be made and the condition confirmed in time; it can be exemplified as a voice prompt message containing the identified with-tumor type and the corresponding membership probability. In addition, in the step S109, if a tumor condition is found, the parameter φ is also reduced before continuing to execute steps S106~S109, so that when a tumor condition is likely, the recognition frequency is raised in time, omissions of the condition are avoided, and a quick and timely diagnosis is facilitated.
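The threshold decision and the adaptive reduction of φ described in step S109 can be sketched as follows (illustrative only; the labels, the halving of φ and the 0.68 default are our assumptions mirroring the 68% example, not a prescription of the original):

```python
def check_and_adapt(probs, labels, phi, first_threshold=0.68):
    """Alarm when a with-tumor membership probability passes the threshold,
    and shrink the I-frame sampling interval phi to raise the recognition
    frequency (phi stays a natural number >= 1)."""
    found = None
    for label, p in zip(labels, probs):
        if label != "no_tumor" and p > first_threshold:
            found = (label, p)
            break
    if found:
        phi = max(1, phi // 2)
    return found, phi

labels = ["no_tumor", "benign", "cancer"]
print(check_and_adapt([0.2, 0.75, 0.05], labels, phi=8))
# (('benign', 0.75), 4)
```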
Thus, through the aforementioned steps S101~S109, a convolutional neural network method can replace the existing structure-diagram approach for identifying tumor types in real-time bronchus detection. This not only greatly reduces the computational demand of real-time data processing, but also finds tumor conditions automatically and reminds the doctor to make a further diagnosis, so that the condition is confirmed in time. In turn, the working intensity of doctors can be reduced, the presence or absence of a lesion can be confirmed in time, and delayed treatment of the condition is avoided, which is particularly useful for the discovery of early lesions.
To sum up, the endoscopic bronchial tumor automatic detection method provided by this embodiment has the following technical effects:
(1) This embodiment provides a new detection method, based on a convolutional neural network, that can replace the doctor in automatically identifying bronchial tumor conditions: tumor recognition training is first performed on a large number of bronchus endoscopic images using the convolutional neural network method, and the trained model is then used for recognition prediction on the real-time detection images. This replaces the existing structure-diagram approach for identifying tumor types in real-time bronchus detection; it not only greatly reduces the computational demand of real-time data processing, but also finds tumor conditions automatically and reminds the doctor to make a further diagnosis, so that the condition is confirmed in time. In turn, the working intensity of doctors can be reduced, the presence or absence of a lesion can be confirmed in time, and delayed treatment of the condition is avoided, which is particularly useful for the discovery of early lesions.
Embodiment two
As shown in Fig. 2, based on the same inventive concept as embodiment one, this embodiment provides an endoscopic bronchial tumor automatic detection system, including a computer device that implements the endoscopic bronchial tumor automatic detection method described in embodiment one, and further including an endoscope, a transmission cable and a display screen, wherein the endoscope is communicatively connected to the computer device through the transmission cable, and the computer device is also communicatively connected to the display screen. In the specific structure of the endoscopic bronchial tumor automatic detection system, the computer device can be exemplified as a desktop computer or a hand-held intelligent device. The endoscope is used to reach into the human bronchus and to collect in real time the bronchus detection video stream that can be transmitted outward through the transmission cable; an existing bronchial endoscope can be used. The transmission cable is used to transmit the bronchus detection video stream outward; an existing cable can be used. The display screen is used to display the bronchus detection video stream and, when a tumor condition is found, to broadcast the reminder message; an existing display screen can be used.
The particular technical details and technical effects of the detection system described in this embodiment can be derived directly with reference to embodiment one and are not repeated here.
The multiple embodiments described above are only schematic. Units illustrated as separate members may or may not be physically separated; components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without paying creative labor.
The above embodiments are merely illustrative of the technical solutions of the present invention, rather than limiting them. Although the invention has been explained in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above optional embodiments; anyone can obtain various other forms of products under the enlightenment of the present invention. The above specific embodiments should not be understood as limiting the protection scope of the present invention; the protection scope of the present invention should be defined by the claims, and the specification can be used to interpret the claims.
Claims (10)
1. An endoscopic bronchial tumor automatic detection method, characterized by comprising the steps of:
S101. obtaining multiple bronchus endoscopic images and marking the tumor type of each bronchus endoscopic image, wherein the tumor types include a no-tumor type and with-tumor types, and for each tumor type the number of corresponding bronchus endoscopic images is no less than 300;
S102. for each bronchus endoscopic image, extracting a corresponding feature value set, wherein the feature value set includes M² feature values of different dimensions, M being a natural number not less than 3;
S103. for each bronchus endoscopic image, generating a corresponding first feature grayscale map with M*M pixels according to the corresponding feature value set;
S104. importing the first feature grayscale map of each bronchus endoscopic image and the correspondingly marked tumor type as one training sample into a convolutional neural network model for tumor recognition training, wherein the first feature grayscale map of a bronchus endoscopic image serves as sample input data and the tumor type corresponding to the first feature grayscale map serves as sample verification data;
S105. obtaining a bronchus detection video stream from an endoscope;
S106. extracting in real time a frame of bronchus video image from the bronchus detection video stream as the current image to be detected;
S107. for the current image to be detected, generating a corresponding second feature grayscale map with M*M pixels in the same manner as for processing a bronchus endoscopic image;
S108. importing the second feature grayscale map of the current image to be detected into the convolutional neural network model that completed tumor recognition training in the step S104 for tumor recognition prediction, obtaining the membership probabilities of the different tumor types;
S109. judging whether the membership probability of any with-tumor type exceeds a first threshold, and if it does, determining that a tumor condition is found and issuing a reminder message.
2. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, in the step S102, for a certain bronchus endoscopic image, corresponding feature values are extracted as follows:
S211. obtaining the brightness maximum region and the brightness minimum region in the bronchus endoscopic image according to histogram statistics;
S212. separately calculating the first brightness median MBMax of the brightness maximum region and the second brightness median MBMin of the brightness minimum region, and calculating the brightness ratio MBMax/MBMin;
S213. taking the first brightness median, the second brightness median and the brightness ratio as the feature values of 3 of the dimensions.
3. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, in the step S102, for a certain bronchus endoscopic image, corresponding feature values are extracted as follows:
S221. dividing the bronchus endoscopic image into a black-spot region and a background region using a brightness threshold separation method;
S222. calculating the average-brightness ratio and the total-area ratio of the black-spot region to the background region, and counting the total number of black spots in the black-spot region;
S223. taking the average-brightness ratio, the total-area ratio and the total number of black spots as the feature values of 3 of the dimensions.
4. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, in the step S103, the first feature grayscale map is generated as follows:
S301. for each feature value in the feature value set, performing a numerical mapping with a value range between 0 and 255 according to the following formula:
Ri = round(255 × (vi − vmin)/(vmax − vmin))
In the formula, Ri is the mapping value of the i-th dimension feature value, round() is the rounding function, vi is the i-th dimension feature value, vmax is the maximum value among all i-th dimension feature values, vmin is the minimum value among all i-th dimension feature values, and i is a natural number between 1 and M²;
S302. for each feature value in the feature value set, taking the corresponding mapping value one by one as the gray value of one pixel, obtaining the first feature grayscale map with M*M pixels.
5. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, in the step S104, the convolutional neural network model includes an input layer, a convolutional layer, an activation function layer, a fully connected layer, a dropout layer and an output layer;
the input layer is used to import the first feature grayscale maps and the second feature grayscale maps;
the convolutional layer is used to perform convolution operations on the imported feature grayscale maps, wherein N convolution kernels of size m*m*1 are configured, N being a natural number greater than 8 and m being a natural number not less than 3 and not greater than M;
the activation function layer is used to activate the output result of the convolutional layer, wherein the Sigmoid function is selected as the activation function;
the fully connected layer is used to map the feature maps produced by the convolution kernels in the convolutional layer to one sample label space;
the dropout layer is used to set randomly selected partial neurons in the fully connected layer to 0 in each propagation or update process, preventing the over-fitting phenomenon;
the output layer is used to output the membership probabilities of the different tumor types, wherein a Softmax classifier determines the tumor type corresponding to an imported feature grayscale map and calculates the membership probability of each tumor type.
6. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, during the tumor recognition training of the step S104, the convolutional neural network model is continuously optimized according to the matching results between the most probable tumor type obtained in training and the sample verification data, until training is completed or until the matching rate between the most probable tumor type obtained in training and the sample verification data reaches a second threshold.
7. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that, in the step S106, if the bronchus detection video stream is an MPEG video stream, the I-frame images in the MPEG video stream that belong to GOP picture frames are extracted as the current images to be detected.
8. The endoscopic bronchial tumor automatic detection method as described in claim 7, characterized in that, in the step S106, one frame image is extracted as the current image to be detected from every φ consecutive I-frame images, wherein φ is a natural number between 1 and 30;
in the step S109, if a tumor condition is found, the parameter φ is reduced before continuing to execute steps S106~S109.
9. The endoscopic bronchial tumor automatic detection method as described in claim 1, characterized in that the with-tumor types include benign tumor types and cancer types.
10. An endoscopic bronchial tumor automatic detection system, characterized by including a computer device that implements the endoscopic bronchial tumor automatic detection method of any one of claims 1~9, and further including an endoscope, a transmission cable and a display screen, wherein the endoscope is communicatively connected to the computer device through the transmission cable, and the computer device is also communicatively connected to the display screen.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910722331.9A CN110458223B (en) | 2019-08-06 | 2019-08-06 | Automatic detection method and detection system for bronchial tumor under endoscope |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458223A true CN110458223A (en) | 2019-11-15 |
CN110458223B CN110458223B (en) | 2023-03-17 |
Family
ID=68485178
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910722331.9A Active CN110458223B (en) | 2019-08-06 | 2019-08-06 | Automatic detection method and detection system for bronchial tumor under endoscope |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458223B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111681204A (en) * | 2020-04-30 | 2020-09-18 | 北京深睿博联科技有限责任公司 | CT rib fracture focus relation modeling method and device based on graph neural network |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101063662A (en) * | 2007-05-15 | 2007-10-31 | 广州市万世德包装机械有限公司 | Method for detecting empty bottle bottom defect and device for detecting empty bottle bottom defect based on DSP |
CN101556650A (en) * | 2009-04-01 | 2009-10-14 | 东北大学 | Distributed self-adapting pulmonary nodule computer detection method and system thereof |
CN102721702A (en) * | 2012-06-27 | 2012-10-10 | 山东轻工业学院 | Distributed paper defect detection system and method based on embedded processor |
CN104143101A (en) * | 2014-07-01 | 2014-11-12 | 华南理工大学 | Method for automatically identifying breast tumor area based on ultrasound image |
CN106469302A (en) * | 2016-09-07 | 2017-03-01 | 成都知识视觉科技有限公司 | A kind of face skin quality detection method based on artificial neural network |
CN106683081A (en) * | 2016-12-17 | 2017-05-17 | 复旦大学 | Brain glioma molecular marker nondestructive prediction method and prediction system based on radiomics |
US20170140248A1 (en) * | 2015-11-13 | 2017-05-18 | Adobe Systems Incorporated | Learning image representation by distilling from multi-task networks |
US9739783B1 (en) * | 2016-03-15 | 2017-08-22 | Anixa Diagnostics Corporation | Convolutional neural networks for cancer diagnosis |
CN107145756A (en) * | 2017-05-17 | 2017-09-08 | 上海辉明软件有限公司 | A kind of stroke types Forecasting Methodology and device |
CN107729078A (en) * | 2017-09-30 | 2018-02-23 | 广东欧珀移动通信有限公司 | Background application management-control method, device, storage medium and electronic equipment |
US20180075599A1 (en) * | 2015-03-31 | 2018-03-15 | Mayo Foundation For Medical Education And Research | System and methods for automatic polyp detection using convulutional neural networks |
CN108133219A (en) * | 2018-01-19 | 2018-06-08 | 天津市国瑞数码安全系统股份有限公司 | The nude picture detection method being combined based on HSV, SURF with LBP features |
CN108464840A (en) * | 2017-12-26 | 2018-08-31 | 安徽科大讯飞医疗信息技术有限公司 | A kind of breast lump automatic testing method and system |
CN108537168A (en) * | 2018-04-09 | 2018-09-14 | 云南大学 | Human facial expression recognition method based on transfer learning technology |
CN108681480A (en) * | 2017-09-30 | 2018-10-19 | 广东欧珀移动通信有限公司 | Background application management-control method, device, storage medium and electronic equipment |
CN108734614A (en) * | 2017-04-13 | 2018-11-02 | 腾讯科技(深圳)有限公司 | Traffic congestion prediction technique and device, storage medium |
CN108830853A (en) * | 2018-07-20 | 2018-11-16 | 东北大学 | A kind of melanoma aided diagnosis method based on artificial intelligence |
CN108960020A (en) * | 2017-05-27 | 2018-12-07 | 富士通株式会社 | Information processing method and information processing equipment |
CN108960322A (en) * | 2018-07-02 | 2018-12-07 | 河南科技大学 | A kind of coronary calcification patch automatic testing method based on cardiac CT image |
CN109063712A (en) * | 2018-06-22 | 2018-12-21 | 哈尔滨工业大学 | A kind of multi-model Hepatic diffused lesion intelligent diagnosing method and system based on ultrasound image |
CN109376777A (en) * | 2018-10-18 | 2019-02-22 | 四川木牛流马智能科技有限公司 | Cervical cancer tissues pathological image analysis method and equipment based on deep learning |
CN109948667A (en) * | 2019-03-01 | 2019-06-28 | 桂林电子科技大学 | Image classification method and device for the prediction of correct neck cancer far-end transfer |
CN109977955A (en) * | 2019-04-03 | 2019-07-05 | 南昌航空大学 | A kind of precancerous lesions of uterine cervix knowledge method for distinguishing based on deep learning |
CN110021431A (en) * | 2019-04-11 | 2019-07-16 | 上海交通大学 | Artificial intelligence assistant diagnosis system, diagnostic method |
Non-Patent Citations (5)
Title |
---|
KHALID M. AMIN ET AL: "Fully automatic liver tumor segmentation from abdominal CT scans", 《THE 2010 INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING & SYSTEMS》 * |
RONGQIANG QIAN ET AL: "Visual attribute classification using feature selection and convolutional neural network", 《2016 IEEE 13TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP)》 * |
LIU GANG: "Application of gray-level co-occurrence matrix and BP neural network in CT image diagnosis of liver cancer", 《WANFANG》 * |
ZHANG WENHUI: "Digital Video Measurement Technology", 30 September 2003 * |
JIA YAN ET AL: "Diagnosis and treatment of 46 cases of Peutz-Jeghers syndrome", 《JOURNAL OF CENTRAL SOUTH UNIVERSITY (MEDICAL SCIENCE)》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111681204A (en) * | 2020-04-30 | 2020-09-18 | 北京深睿博联科技有限责任公司 | CT rib fracture focus relation modeling method and device based on graph neural network |
CN111681204B (en) * | 2020-04-30 | 2023-09-26 | 北京深睿博联科技有限责任公司 | CT rib fracture focus relation modeling method and device based on graph neural network |
Also Published As
Publication number | Publication date |
---|---|
CN110458223B (en) | 2023-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113496489B (en) | Training method of endoscope image classification model, image classification method and device | |
WO2019088121A1 (en) | Image diagnosis assistance apparatus, data collection method, image diagnosis assistance method, and image diagnosis assistance program | |
TWI696145B (en) | Colonoscopy image computer-aided recognition system and method | |
Segui et al. | Categorization and segmentation of intestinal content frames for wireless capsule endoscopy | |
CN103198467B (en) | Image processing apparatus and image processing method | |
CN109063643B (en) | Facial expression pain degree identification method under condition of partial hiding of facial information | |
EP4120186A1 (en) | Computer-implemented systems and methods for object detection and characterization | |
CN113781489B (en) | Polyp image semantic segmentation method and device | |
Shanmuga Sundaram et al. | An enhancement of computer aided approach for colon cancer detection in WCE images using ROI based color histogram and SVM2 | |
CN110363768A (en) | A kind of early carcinoma lesion horizon prediction auxiliary system based on deep learning | |
CN105657580A (en) | Capsule endoscopy video summary generation method | |
Wang et al. | Multimodal medical image fusion based on multichannel coupled neural P systems and max-cloud models in spectral total variation domain | |
CN109259528A (en) | A kind of home furnishings intelligent mirror based on recognition of face and skin quality detection | |
Mathew et al. | Transform based bleeding detection technique for endoscopic images | |
Jaiswal et al. | rPPG-FuseNet: Non-contact heart rate estimation from facial video via RGB/MSR signal fusion | |
Cai et al. | Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning | |
Liu et al. | Feature pyramid U‐Net for retinal vessel segmentation | |
CN110458223A (en) | Tumor of bronchus automatic testing method and detection system under a kind of scope | |
CN118097160A (en) | Critical critical illness state monitoring system based on vision technology | |
Park et al. | Improving performance of computer-aided detection scheme by combining results from two machine learning classifiers | |
CN117079197B (en) | Intelligent building site management method and system | |
CN115147636A (en) | Lung disease identification and classification method based on chest X-ray image | |
CN111754503B (en) | Enteroscope mirror-withdrawing overspeed duty ratio monitoring method based on two-channel convolutional neural network | |
CN117253034A (en) | Image semantic segmentation method and system based on differentiated context | |
CN112488165A (en) | Infrared pedestrian identification method and system based on deep learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information |

Inventor after: Do not announce the inventor Inventor before: Do not announce the inventor |

GR01 | Patent grant | ||