CN107665491A - Method and system for recognizing pathological images - Google Patents

Method and system for recognizing pathological images

Info

Publication number
CN107665491A
CN107665491A
Authority
CN
China
Prior art keywords
image
characteristic
pathological
processing unit
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710934902.6A
Other languages
Chinese (zh)
Other versions
CN107665491B (en)
Inventor
Wang Shuhao (王书浩)
Xu Wei (徐葳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201710934902.6A priority Critical patent/CN107665491B/en
Publication of CN107665491A publication Critical patent/CN107665491A/en
Application granted granted Critical
Publication of CN107665491B publication Critical patent/CN107665491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present application provides a system and method for recognizing pathological images. The recognition system includes: an image receiving device for receiving a pathological image to be recognized; a feature extraction device for performing feature extraction on the pathological image using multi-stage feature processing units arranged in a convolutional neural network; and a recognition device for fusing the feature maps output by at least two stages of feature processing units and identifying the pathological information in the pathological image based on the pathological features in the fused image. The application can effectively retain both low-dimensional and high-dimensional pathological features, so that lesion features are recognized accurately and mapped precisely onto pixel locations of the pathological image, improving the accuracy of computer-aided diagnosis.

Description

Method and system for recognizing pathological images
Technical field
The present application relates to the field of computer processing technology, and in particular to a method and system for recognizing pathological images.
Background art
A pathological image is an image obtained by slicing tissue from a patient's diseased region and magnifying the slice under a microscope. Pathological images can reveal the cause and severity of a disease and are an important means of clinical diagnosis. With the growing number of patients and the scarcity of professional pathologists, computer-aided recognition of pathological images has become an important direction of development.
Known computer-aided detection techniques recognize pathological images using pathological features. For example, a pathological image is filtered by convolution with a matrix of pathological features, and a classification algorithm then classifies the filtered feature image as containing or not containing pathological information, so as to obtain a recognition result for the pathological image. However, known detection techniques either fail to obtain sufficient pathological features because of inadequate feature extraction, or fail to accurately restore lesion locations because of excessive filtering. Such computer-aided detection techniques cannot be effectively adopted in clinical practice.
Summary of the invention
In view of the above shortcomings of the prior art, the purpose of the present application is to provide a system and method for recognizing pathological images, so as to solve the problem of low accuracy in computer-aided detection of pathological images.
To achieve the above and other related objects, a first aspect of the present application provides a recognition system for pathological images, including: an image receiving device for receiving a pathological image to be recognized; a feature extraction device for performing feature extraction on the pathological image using multi-stage feature processing units arranged in a convolutional neural network; and a recognition device for fusing the feature maps output by at least two stages of feature processing units and identifying the pathological information in the pathological image based on the pathological features in the fused image.
In some embodiments of the first aspect of the present application, the feature processing unit includes at least one first structure composed of a first filter, a normalization module and an activation module, wherein the first filter performs pathological feature extraction on the received image according to a preset stride.
In some embodiments of the first aspect of the present application, the first filter performs pathological feature extraction on the received image using dilated (atrous) convolution.
In some embodiments of the first aspect of the present application, the feature processing unit containing the first structure is located at the end of the convolutional neural network.
In some embodiments of the first aspect of the present application, the feature processing unit includes at least one second structure composed of a second filter, a normalization module, a first merging module and an activation module.
In some embodiments of the first aspect of the present application, the first merging module merges the feature maps output by at least two of the normalization modules; or the first merging module merges the feature map output by the normalization module with the feature map output by the preceding feature processing unit.
In some embodiments of the first aspect of the present application, the feature extraction device further includes a downsampling unit located between two stages of feature processing units, for downsampling the received feature map.
In some embodiments of the first aspect of the present application, the recognition device includes: a third filter individually connected to each feature processing unit, each third filter performing classification processing on the received feature map; an upsampling module individually connected to each filter, for restoring the image output by the corresponding filter to the size of the pathological image; and a second merging module connected to each upsampling module, for merging the restored feature maps.
In some embodiments of the first aspect of the present application, the recognition device includes a recognition module for evaluating the pathological information in the pathological image based on the pathological features corresponding to each pixel in the merged feature map.
A second aspect of the present application provides a recognition system for cancer pathology images, including: an image storage device for storing cancer pathology images to be recognized; a recognition system as described in any of the above, for identifying the pathological information in the cancer pathology image; and a display device for displaying the identified cancer pathology image together with its pathological information.
A third aspect of the present application provides a recognition method for pathological images, including: obtaining a pathological image to be recognized; performing feature extraction on the original image using multi-stage feature processing units arranged in a convolutional neural network; fusing the feature maps output by at least two stages of feature processing units; and identifying the pathological information in the pathological image based on the pathological features in the fused image.
In some embodiments of the third aspect of the present application, the feature processing unit includes at least one first structure composed of a first filter, a normalization module and an activation module, wherein the first filter performs pathological feature extraction on the received image according to a preset stride.
In some embodiments of the third aspect of the present application, the second structure includes a first filter that performs pathological feature extraction on the received image using dilated convolution with a preset dilation rate.
In some embodiments of the third aspect of the present application, the feature processing unit containing the first structure is located at at least the last stage of the convolutional neural network.
In some embodiments of the third aspect of the present application, the feature processing unit includes at least one second structure composed of a second filter, a normalization module, a first merging module and an activation module.
In some embodiments of the third aspect of the present application, the first merging module merges the images output by at least two of the normalization modules; or the first merging module merges the image output by the normalization module with the feature map output by the preceding feature processing unit.
In some embodiments of the third aspect of the present application, the method further includes the step of downsampling the feature map output by the feature processing unit.
In some embodiments of the third aspect of the present application, the fusing of the feature maps output by at least two stages of feature processing units, and the identification of the pathological information in the pathological image based on the pathological features in the fused image, include: performing independent convolution processing on each feature map using a filter corresponding to each level of feature processing unit; restoring the image output by each corresponding third filter to the size of the pathological image; and merging the enlarged feature maps.
In some embodiments of the third aspect of the present application, the method further includes the step of evaluating the pathological information in the pathological image based on the pathological features corresponding to each pixel in the merged feature map.
A fourth aspect of the present application provides a recognition method for cancer pathology images, including: identifying a cancer pathology image using a recognition method as described in any of the above; and displaying the identified cancer pathology image together with its pathological information.
As described above, the recognition system and method for pathological images of the present application have the following beneficial effects: by fusing the feature maps output by at least two stages of feature processing units in the convolutional neural network and identifying the pathological information in the pathological image based on the pathological features in the fused image, both low-dimensional and high-dimensional pathological features can be effectively retained, so that lesion features are recognized accurately and mapped precisely onto pixel locations of the pathological image, improving the accuracy of computer-aided diagnosis.
Brief description of the drawings
Fig. 1 is a structural diagram of a feature processing unit of the convolutional neural network of the present application in one embodiment.
Fig. 2 is a structural diagram of a feature processing unit of the convolutional neural network of the present application in yet another embodiment.
Fig. 3 is a structural diagram of the recognition system of the present application in one embodiment.
Fig. 4 is a structural diagram of the second structure in a feature processing unit of the present application in one embodiment.
Fig. 5 is a structural diagram of the second structure in a feature processing unit of the present application in yet another embodiment.
Fig. 6 is a structural diagram of a feature processing unit of the present application in one embodiment.
Fig. 7 is a structural diagram of the recognition system of the present application in yet another embodiment.
Fig. 8 is a structural diagram of the recognition system of the present application in another embodiment.
Fig. 9 is a structural diagram of the recognition device in the recognition system of the present application in one embodiment.
Fig. 10 is a structural diagram of the recognition system for cancer pathology images of the present application in one embodiment.
Fig. 11 is a flowchart of the recognition method of the present application in one embodiment.
Fig. 12 is a flowchart of the recognition method for cancer pathology images of the present application in one embodiment.
Detailed description of the embodiments
The embodiments of the present application are described below through specific examples; those skilled in the art can readily understand other advantages and effects of the application from the content disclosed in this specification.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprising" and "including" specify the presence of the stated features, steps, operations, elements, components, items, categories and/or groups, but do not preclude the presence, occurrence or addition of one or more other features, steps, operations, elements, components, items, categories and/or groups. The terms "or" and "and/or" as used herein are to be interpreted as inclusive: "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
To improve the diagnostic efficiency of doctors: on the one hand, a doctor uses a networked computer device to obtain a patient's pathological image so as to make a diagnosis by magnifying the image locally; on the other hand, the doctor can use a computer-aided recognition system to quickly locate lesion regions. Because known pathological-feature recognition algorithms cannot simultaneously achieve lesion-region recognition capability and lesion-region detection accuracy, the present application provides a method and system for recognizing pathological images. The recognition system includes hardware and software installed on a computer device. The hardware of the computer device includes an input unit, a processing unit, a storage unit, a cache and a display unit, where the processing unit may include a chip or integrated circuit dedicated to convolutional neural networks and a computer program containing a convolutional neural network algorithm. The processing unit orchestrates the operation of each piece of hardware according to the timing set by the program, so as to perform the functions of the devices described below. The computer device includes, but is not limited to, a single server, a server cluster of cooperating servers, a personal computer, or even a handheld terminal such as a tablet computer.
The recognition system for pathological images includes an image receiving device, a feature extraction device and a recognition device.
The image receiving device is used to receive a pathological image to be recognized. The pathological image may be a pathological section image, an image obtained by scanning with a radiological device, or the like. The image receiving device may include a processing unit, a cache and an interface connected to an image library storing pathological images. According to the timing instructions of the program, the processing unit in the image receiving device reads a previously captured or scanned pathological image from the image library through the interface. The interface includes, but is not limited to, a data interface between the computer device and the storage device hosting the image library, or a network interface for communication between the computer device and that storage device.
The feature extraction device is used to perform feature extraction on the pathological image using the multi-stage feature processing units arranged in a convolutional neural network.
Here, the convolutional neural network may be formed by cascading feature processing units, the units being arranged stage by stage from low to high pathological-feature dimensionality. The feature extraction device may be a computing device capable of independently handling the convolutional neural network algorithm, or may comprise a processing unit for handling the convolutional neural network algorithm together with a matching cache unit. That processing unit may be shared with the processing unit in the image receiving device, or may comprise a chip or integrated circuit unit dedicated to convolutional neural networks, and so on.
Here, the first-stage feature processing unit receives the original pathological image and performs the lowest-dimensional pathological feature processing; each subsequent feature processing unit receives the feature map output by the preceding unit in order to perform higher-dimensional pathological feature processing. Each feature processing unit may output the feature map obtained after pathological feature extraction to the next-stage unit, which performs feature extraction based on the feature map of the cascaded preceding unit. Each stage may traverse the image with convolution kernels corresponding to multiple pathological features of the same dimensionality to obtain the extracted feature map.
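The cascade just described can be sketched in a few lines of numpy. This is an illustrative toy, not the patent's actual architecture: each "unit" here is only a 1×1 channel remap followed by ReLU, and the channel progression 3 → 16 → 32 → 64 is a made-up example of dimensionality growing stage by stage.

```python
import numpy as np

def make_unit(c_in, c_out):
    """A toy feature processing unit: 1x1 channel remap followed by ReLU."""
    w = np.random.rand(c_in, c_out) * 0.1
    def unit(x):                                        # x: (H, W, c_in)
        return np.maximum(np.tensordot(x, w, axes=([2], [0])), 0.0)
    return unit

def cascade(image, units):
    """Feed each unit the previous unit's output; keep every stage's map for later fusion."""
    feats, x = [], image
    for unit in units:
        x = unit(x)
        feats.append(x)
    return feats

units = [make_unit(3, 16), make_unit(16, 32), make_unit(32, 64)]
feats = cascade(np.random.rand(8, 8, 3), units)
print([f.shape[-1] for f in feats])  # [16, 32, 64]
```

Keeping every stage's map (rather than only the last) is what later allows at least two stages to be fused.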
The recognition device is used to fuse the feature maps output by at least two stages of feature processing units, and to identify the pathological information in the pathological image based on the pathological features in the fused image.
Here, the recognition device may share the processing unit of the computer device with the feature extraction device, or may be separately equipped with a processor capable of handling neural network data, and so on.
The recognition device receives feature maps from at least two feature processing units of the feature extraction device, and upsamples each received feature map to obtain a feature map consistent with the size of the original image; it then merges the upsampled feature maps, thereby obtaining a feature map that corresponds pixel-by-pixel with the original image and reflects the distribution of pathological features at each pixel. Since each feature map comes from a feature processing unit at a different cascade position, the pathological features corresponding to each point of each feature map are determined by the respective unit. For example, the value of each point in the feature map from the first-stage unit reflects the pathological features extracted by the first stage; likewise, the value of each point in the feature map from the third-stage unit reflects the pathological features extracted by the first through third stages.
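A minimal numpy sketch of this upsample-then-merge step. The patent does not fix the upsampling scheme or the fusion weights; nearest-neighbour upsampling and equal weights below are assumptions for illustration.

```python
import numpy as np

def upsample_nn(feat, factor):
    """Nearest-neighbour upsampling: restore a feature map toward the original size."""
    return feat.repeat(factor, axis=0).repeat(factor, axis=1)

f_deep    = np.random.rand(4, 4)   # from a later unit (more downsampled)
f_shallow = np.random.rand(8, 8)   # from an earlier unit
fused = 0.5 * upsample_nn(f_deep, 2) + 0.5 * f_shallow   # equal fusion weights assumed
print(fused.shape)  # (8, 8)
```

After this step each point of `fused` lines up with one pixel of the (toy) original image, which is exactly what per-pixel evaluation requires.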
The pathological information in the original image is identified from the pathology distribution of the merged feature map. For example, using a preset pathological-feature evaluation algorithm, the recognition device determines that the pathological features of pixel (or image region) (x, y) in the merged feature map comprise a1% of the benign-cell feature M1, a2% of the tumour-cell feature M2 and a3% of the benign-cell feature M3; after evaluation, the recognition device determines that the pathological information of pixel (or region) (x, y) in the corresponding original pathological image is a benign tumour. By evaluating the pathological information of every pixel in the pathological image, the recognition system obtains an accurate pathology distribution for the captured image, thereby helping doctors quickly delimit the main regions to observe.
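The per-pixel evaluation can be illustrated as follows. The class channels, score values and the use of a simple argmax are all hypothetical: the patent only refers to a "preset pathological-feature evaluation algorithm" without specifying one.

```python
import numpy as np

# Hypothetical per-pixel class scores after fusion, shape (H, W, num_classes);
# channel order [normal, benign tumour, malignant] is made up for illustration.
scores = np.zeros((2, 2, 3))
scores[0, 0] = [0.1, 0.7, 0.2]   # benign-tumour features dominate at (0, 0)
scores[1, 1] = [0.8, 0.1, 0.1]   # normal-tissue features dominate at (1, 1)

labels = scores.argmax(axis=-1)  # per-pixel pathology decision
print(labels[0, 0], labels[1, 1])  # 1 0
```

The resulting label map has the same spatial layout as the pathological image, giving the pathology distribution the system presents to the doctor.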
In some embodiments, referring to Fig. 1, which shows a structural diagram of a feature processing unit in one embodiment, at least some feature processing units 1 may include at least one first structure composed of a first filter, a normalization module and an activation module. The first filter 111 in the first structure 11 may include one or more convolution kernels Mi×Ni, where i is a natural number and ai is the number of kernels; each convolution kernel corresponds to one pathological feature. A kernel may represent a diseased-cell contour feature, texture feature, shape feature, or even a high-dimensional abstract pathological feature. Kernels may be obtained through machine learning, or by analysing pathological images through other means. The first filter sets its traversal stride according to factors such as the size of the received image, the size of the kernels and the cascade relations in the neural network model. For example, for at least one feature processing unit at the front of the network, the stride of each first filter is greater than 1. To allow the pathological features of each pixel of the pathological image to be recognized, the stride of the first filters in later feature processing units may be smaller, e.g. set to 1, or greater than 1 but smaller than the stride of the preceding first filters.
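A toy version of one first structure (filter → normalization → activation), assuming a single averaging kernel, min-max normalization to [0, 1] and a sigmoid activation centred on 0.5; none of these specific choices come from the patent.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution: slide `kernel` over `image` with the given stride."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def normalize(x):
    """Min-max normalization of the feature map to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

def first_structure(image, kernel, stride=1):
    """One 'first structure': filter -> normalization -> activation."""
    feat = normalize(conv2d(image, kernel, stride))
    return 1.0 / (1.0 + np.exp(-(feat - 0.5)))  # sigmoid centred on 0.5

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0                        # hypothetical smoothing kernel
out = first_structure(img, k, stride=1)
print(out.shape)  # (4, 4) -- valid convolution shrinks the map
```

With this activation, a point above 0.5 in the output corresponds to a normalized response above 0.5, matching the thresholding convention described below.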
In each first structure, the first filter traverses the received lesion image or feature map with its convolution kernels according to the stride, and supplies the resulting feature map to the normalization module for normalization; the activation module then activates the normalized feature map. For example, the normalization module normalizes the pixel values of the feature map to [0, 1]. To prevent the gradient from weakening as the order of pathological features increases with cascading, the activation module applies a nonlinear activation function to the normalized feature map, so that the points of the filtered pathological features are marked in the activated feature map. The nonlinear activation function includes, but is not limited to, the ReLU, sigmoid and tanh functions. For example, in the activated feature-map matrix, a point with a value greater than 0.5 indicates the presence of the corresponding pathological feature, and a point with a smaller value its absence.
It should be noted that the specific values described above are only used to illustrate the classification of values and their trend before and after activation, and do not limit the activation function used.
It should also be noted that, as required by the design, each feature processing unit may output to the subsequent feature processing unit or to the recognition device a feature map that has been filtered but not normalized, normalized but not activated, or activated.
In other embodiments, because of gradient effects and the small size of focal regions themselves, the lesion features reflected by the feature maps processed by the first filters in later feature processing units may be smaller than one pixel, which makes image segmentation difficult. At least some first filters in the later feature processing units of the convolutional neural network therefore use dilated (atrous) convolution to extract pathological features from the received image. For example, the first filters included in at least one feature processing unit at the end of the network use dilated convolution.
Here, dilated convolution expands the convolution kernel, the inserted values usually being 0. For example, filling a 0 between adjacent values of a 2×2 kernel yields a 3×3 kernel. A first filter using dilated convolution can adjust its stride to ensure the effectiveness of feature extraction; for example, such a filter uses a larger stride than the other first filters of the same feature processing unit. A first structure may contain either of the above kinds of first filter, or a combination of both. For example, referring to Fig. 2, which shows another embodiment of the first structures in a feature processing unit: first filter 111 in first structure 11 extracts pathological features by convolution with unexpanded kernels, while first filter 121 in first structure 12 extracts pathological features using dilated convolution.
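The kernel expansion described above (a 2×2 kernel becoming 3×3 by inserting zeros between adjacent values) can be reproduced directly:

```python
import numpy as np

def dilate_kernel(kernel, rate=2):
    """Insert (rate - 1) zeros between adjacent kernel values ('atrous' expansion)."""
    kh, kw = kernel.shape
    out = np.zeros((kh + (kh - 1) * (rate - 1), kw + (kw - 1) * (rate - 1)))
    out[::rate, ::rate] = kernel
    return out

k = np.array([[1., 2.],
              [3., 4.]])
dk = dilate_kernel(k, rate=2)
print(dk)
# [[1. 0. 2.]
#  [0. 0. 0.]
#  [3. 0. 4.]]
```

Convolving with the dilated kernel enlarges the receptive field without adding trainable weights, which is why it helps when lesion features would otherwise shrink below one pixel.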
In some specific examples, the constructed convolutional neural network contains multi-stage feature processing units, each of which includes one or more cascaded first structures. Referring to Fig. 3, which shows a structural diagram of the cascaded feature processing units of the convolutional neural network: in at least one feature processing unit 11 at the front of the network, the first filter of each first structure 111 extracts pathological features using unexpanded kernels, while at least one feature processing unit 12 at the end of the network contains first structures 121 whose first filters extract lesion features using dilated convolution. The feature extraction device uses this convolutional neural network to extract features from the pathological image stage by stage. The recognition device 13 obtains feature maps from at least two feature processing units; since the feature maps output by the units (11, 12) are not necessarily the same size, the recognition device 13 may first upsample each feature map to the size of the pathological image, and then merge the upsampled feature maps according to preset weights for each feature processing unit. The merged feature map corresponds pixel-by-pixel with the pathological image, so the recognition device 13 can map the pathological features marked at each pixel of the merged feature map onto the corresponding pixels of the pathological image. Because the merged feature map contains both low-dimensional and high-dimensional pathological features, the recognition device 13 can combine the pathological features of earlier and later stages to recognize the pathological features of each pixel, so the identified lesion regions of the pathological image are more accurate.
In other embodiments, at least some of the feature processing units include at least one second structure composed of a second filter, a normalization module, a first merging module and an activation module.
The second filter in the second structure is used to adjust the dimensionality of a feature map by convolution. For example, a feature processing unit contains a first structure and a second structure, the second structure containing two second filters, filter 1 and filter 2, which respectively receive the feature map P1 output by the first structure and the feature map P2 input to the feature processing unit. Second filter filter 1 adjusts P1 from m1 dimensions to n dimensions by convolution, and second filter filter 2 adjusts P2 from m2 dimensions to n dimensions, making it easy for the first merging module to splice the two feature maps. The second filter in the second structure may be used only to adjust the dimensionality of a feature map, or may also perform feature extraction while adjusting it.
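The dimensionality adjustment performed by the second filters amounts to a 1×1 (point-wise) convolution over the channel axis. The channel counts m1 = 16, m2 = 32 and target n = 8 below are made-up values for illustration:

```python
import numpy as np

def pointwise_conv(feat, weights):
    """1x1 convolution: remap an (H, W, C_in) feature map to (H, W, C_out) per pixel."""
    return np.tensordot(feat, weights, axes=([2], [0]))

p1 = np.random.rand(8, 8, 16)   # P1 with m1 = 16 channels (assumed)
p2 = np.random.rand(8, 8, 32)   # P2 with m2 = 32 channels (assumed)
w1 = np.random.rand(16, 8)      # project P1 to n = 8 channels
w2 = np.random.rand(32, 8)      # project P2 to n = 8 channels
q1, q2 = pointwise_conv(p1, w1), pointwise_conv(p2, w2)
print(q1.shape, q2.shape)  # (8, 8, 8) (8, 8, 8) -- now mergeable
```

Once both maps share the same channel dimensionality n, the first merging module can combine them point by point.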
To facilitate the statistics of subsequent feature distributions, the feature map output by each second filter in the second structure is normalized by a normalization module before being supplied to the first merging module.
Unlike the first structure, the second structure further includes a first merging module. The first merging module is arranged on the input side of the activation module and merges at least two characteristic images within the feature processing unit in which it resides. The characteristic images received by the first merging module may come from the input of that feature processing unit and from the output of the normalization module in the second structure. For example, referring to Fig. 4, which is a structural schematic diagram of the second structure in one embodiment: the second structure 21 includes two groups of second filters (211, 211') and normalization modules (212, 212'). Each group adjusts the dimension of the image it receives so that the two output characteristic images have the same dimension; the first merging module 214 then takes a pointwise weighted sum of the two characteristic images output by the two groups, combining them into a single characteristic image, which is handed to the activation module 213 for activation. As another example, referring to Fig. 5, which shows the second structure in another embodiment: the second structure 31 includes one group consisting of a second filter 311 and a normalization module 312; the first merging module 314 merges the characteristic image output by this group with the image received by the feature processing unit, and the result is handed to the activation module 313 for activation. As yet another example, referring to Fig. 6, which shows a feature processing unit in a further embodiment: the feature processing unit includes a plurality of cascaded first structures 41 and a second structure 42 connected both to the first structures and to the input of the feature processing unit.
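As an illustrative sketch only (not part of the disclosed embodiments), the pointwise weighted merge followed by activation performed by the first merging module and the activation module can be expressed as follows; the function and weight names are hypothetical, and ReLU is used as one of the activation functions the description permits:

```python
def second_structure_merge(feat_a, feat_b, w_a=0.5, w_b=0.5):
    """Pointwise weighted sum of two equal-size characteristic images
    (first merging module), followed by ReLU (activation module)."""
    merged = [[w_a * a + w_b * b for a, b in zip(row_a, row_b)]
              for row_a, row_b in zip(feat_a, feat_b)]
    # activation module: ReLU keeps only positive responses
    return [[max(v, 0.0) for v in row] for row in merged]
```

For instance, merging `[[1.0, -2.0]]` and `[[3.0, -4.0]]` with equal weights yields `[[2.0, 0.0]]` after activation.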
The activation processing performed by the activation module in the second structure is the same as or similar to that of the activation module in the first structure and is not described in detail here.
Using the second structure in the convolutional neural network allows the network to retain both low-dimensional and high-dimensional pathological features while refining them. In a specific example, the constructed convolutional neural network includes multiple stages of feature processing units: at least one feature processing unit at the front end contains only first structures, while each subsequently cascaded feature processing unit contains both a first structure and a second structure. For example, referring to Fig. 7, which shows the convolutional neural network used by the feature extraction device in one embodiment: the constructed network contains one feature processing unit 51 composed only of first structures and a plurality of feature processing units 52 containing both first and second structures, which ensures that feature loss is minimized while the feature dimension gradually increases during lesion feature extraction. The cascaded feature processing units 52 may optionally alternate between second structures of the kinds shown in Fig. 4 and Fig. 5. On this basis, the last two stages of feature processing units 52 include first filters using atrous (dilated) convolution, which effectively eliminates speckle. The feature extraction device uses this convolutional neural network to extract pathological features from the pathological image and delivers the characteristic images output by multiple feature processing units to the identification device 53, which performs pathology identification. The way the identification device 53 identifies pathology may be the same as or similar to the foregoing examples and is not repeated here.
To further reduce the computational load on the computer equipment while preserving the accuracy of pathological feature extraction, the feature extraction device also includes a down-sampling unit between two stages of feature processing units, which down-samples the characteristic image it receives.
At least one down-sampling unit may be provided in the constructed convolutional neural network. To reduce feature loss, the down-sampling unit is placed at the front end of the network. For example, referring to Fig. 8, which shows the structure of an identification system including the above convolutional neural network in one embodiment: the down-sampling unit 54 is located between the first-stage feature processing unit 51 and the second-stage feature processing unit 52, and may perform down-sampling by, for example, max down-sampling (max pooling).
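A minimal sketch of the max down-sampling mentioned above, assuming a 2x2 window with stride equal to the window size (the window size and stride are illustrative assumptions, not specified by the description):

```python
def max_downsample(image, k=2):
    """Max down-sampling: each k x k block of the characteristic image
    is replaced by its maximum value, halving each spatial dimension
    when k == 2."""
    h, w = len(image), len(image[0])
    return [[max(image[i + di][j + dj]
                 for di in range(k) for dj in range(k))
             for j in range(0, w - w % k, k)]
            for i in range(0, h - h % k, k)]
```

Applied to a 4x4 image, this yields a 2x2 image holding the maximum of each block.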
In some specific examples, the identification device fuses the characteristic images of at least one feature processing unit located before the down-sampling with those of at least one feature processing unit located after it. For example, as shown in Fig. 8, the identification device 53 is connected to the feature processing unit 51 located before the down-sampling unit 54, to the feature processing unit 52 located after the down-sampling unit 54 and containing first and second structures, and to the feature processing unit 52 at the end of the convolutional neural network; the identification device 53 fuses the characteristic images output by these three feature processing units and identifies the pathological information.
Each characteristic image received by the identification device may be one that the corresponding feature processing unit has activated, or at least one of: filtered but not normalized, or normalized but not activated.
In some embodiments, referring to Fig. 9, which shows the framework of the identification device in one embodiment, the identification device includes: third filters 631, up-sampling modules 632, and a second merging module 633.
The number of third filters 631 corresponds to the number of characteristic images received by the identification device. To improve recognition efficiency, each third filter 631 receives one characteristic image individually, so that the computer equipment can process them in parallel. The third filter 631 classifies the characteristic image it receives, traversing it with a stride of 1.
Multiple filtering windows for classification are preset in the third filter 631; by traversing the received characteristic image with each filtering window, every point of the characteristic image is classified. For example, the third filter contains a lesion-class filtering window and a non-lesion-class filtering window, and the received characteristic image is traversed with each of the two windows; the same pixel in the two output characteristic images then represents the likelihood of belonging and of not belonging to the lesion class, respectively. As another example, the third filter contains filtering windows for the first pathological class, the second pathological class, ..., the Nth pathological class, and a non-lesion-class filtering window; after traversing the received characteristic image with each window, the same pixel in the output characteristic images represents the likelihood of belonging to each pathological class and of not belonging to any lesion class, respectively. Each pathological class and lesion class serves as a pathological feature on which subsequent identification is based.
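As an illustrative sketch (with hypothetical function names, not taken from the description), the per-pixel class likelihoods produced by the class filtering windows can be turned into a probability distribution and a class decision with a softmax; the description itself speaks only of "likelihoods", so softmax is an assumption:

```python
import math

def softmax(scores):
    """Convert one pixel's per-class scores into probabilities."""
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_pixel(scores):
    """Return (most likely class index, class probabilities) for the
    per-class scores of a single pixel."""
    probs = softmax(scores)
    return probs.index(max(probs)), probs
```

For a pixel whose lesion-class and non-lesion-class scores are 2.0 and 0.0, the pixel is assigned to class 0 (lesion).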
Each characteristic image classified by a third filter 631 is delivered to an individually connected up-sampling module 632, which restores the image output by the corresponding third filter 631 to the size of the pathological image.
It should be noted that the third filter may also first classify each pixel according to its post-filtering class probabilities to obtain a single characteristic image, which the up-sampling module 632 then enlarges.
The up-sampling module 632 configures its up-sampling window size according to the size of the characteristic image it receives, and fills the characteristic image by traversing the window. For example, the up-sampling module 632 may recover the image by interpolation or by copy (nearest-neighbour) mode.
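A minimal sketch of the copy (nearest-neighbour) recovery mode mentioned above, assuming an integer scale factor (the factor and function name are illustrative assumptions):

```python
def upsample_copy(image, factor):
    """Copy-mode up-sampling: each pixel is replicated factor x factor
    times, restoring a down-sampled characteristic image toward the
    original image size."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(factor)]  # repeat columns
        out.extend([list(wide) for _ in range(factor)])  # repeat rows
    return out
```

A 2x2 image up-sampled by a factor of 2 becomes a 4x4 image of replicated blocks.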
The second merging module 633 is connected to all the up-sampling modules 632 and merges the recovered characteristic images. The second merging module may merge them directly; in some embodiments, the second merging module 633 merges the recovered characteristic images according to preset weights. Each weight may be set by evaluating factors such as the position of the feature processing unit in the neural network and the dimension of the extracted pathological features, or may be set by prior machine learning.
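The weighted merge performed by the second merging module can be sketched as follows; the weights here are illustrative placeholders for the preset or learned weights described above:

```python
def weighted_merge(maps, weights):
    """Merge equal-size recovered characteristic images into one by a
    per-map weighted sum (second merging module)."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * m[i][j] for wt, m in zip(weights, maps))
             for j in range(w)]
            for i in range(h)]
```

Merging a map of ones and a map of twos with weights 0.25 and 0.75 yields a map of 1.75 everywhere.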
Each pixel in the merged characteristic image describes the probabilities of all classes; by marking each pixel of the pathological image with the pathological information represented by the class whose weighted posterior probability is largest, the pathological information in the pathological image is obtained.
The identification device may also include an identification module 634, which evaluates the pathological information in the pathological image based on the pathological features corresponding to each pixel in the merged characteristic image.
The identification module 634 contains an evaluation weight for each pathological feature; the pathological features corresponding to a pixel are weighted to determine the pathological information of that pixel. Further, the identification module 634 may use a heat-map drawing mode to render the colour corresponding to each item of pathological information onto the pathological image, making it convenient for physicians to consult during diagnosis.
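As a purely hypothetical sketch of the heat-map drawing mode (the colour scheme is an assumption; the description does not specify one), a lesion probability in [0, 1] could be mapped to an RGB colour running from blue (benign) to red (lesion):

```python
def heatmap_color(prob):
    """Map a lesion probability in [0, 1] to an (R, G, B) tuple:
    0.0 -> pure blue, 1.0 -> pure red (assumed colour scheme)."""
    r = int(round(255 * prob))
    return (r, 0, 255 - r)
```

Such colours would then be overlaid on the pathological image pixel by pixel for diagnostic reference.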
The present application also provides an identification system for tumour (cancer) pathological images. The identification system may be applied in a hospital's medical system: by operating the medical system, a physician can obtain a patient's tumour pathological image and perform pathology identification on it through the identification system. Tumour pathological images include, but are not limited to, breast tumour pathological images and lung tumour pathological images. Part or all of the identification system may be installed in computer equipment provided by the hospital or a third party. The computer equipment includes, but is not limited to, a server used by the hospital or a third party, or a personal computer used by the physician. The server includes, but is not limited to: a single server, a server cluster, or a cloud-based service.
Referring to Fig. 10, which shows the architecture of one embodiment of the identification system for tumour pathological images: the identification system 7 includes an image storage device 71, an identification system 72, and a display device 73.
The image storage device 71 stores the tumour pathological images to be identified. It may be connected to the hospital's pathological examination system and classifies and saves the pathological images captured by that system. Pathological images captured by the pathological examination system include, but are not limited to, images captured by B-mode ultrasound equipment, CT equipment, cell sample culture equipment, and the like.
The image storage device 71 saves pathological images together with the patient information and tumour type with which they are marked. The image storage device may be a single storage server, a storage array, or multiple storage servers capable of data communication. For example, the tumour pathological images may be stored on a single storage server while the associated patient information and tumour types are stored on a server configured with a database, with the database associating each tumour pathological image with its information.
The identification system 72 may be located on the same computer equipment as the image storage device, or on separately configured computer equipment that communicates with the image storage device 71 over a data link to obtain tumour pathological images from it.
To this end, the identification system 72 includes: an image receiving device, a feature extraction device, and an identification device.
The image receiving device receives the tumour pathological image to be identified. The tumour pathological image may be a tumour pathology slice image, a tumour pathological image acquired by scanning with radiological equipment, or the like. The image receiving device may include a processing unit, a cache, and an interface connected to the image library storing tumour pathological images. Following program instructions, the processing unit in the image receiving device reads a previously captured or scanned tumour pathological image from the image library through the interface. The interface includes, but is not limited to, a data interface between the computer equipment and the image storage device, or a network interface over which the computer equipment and the image storage device communicate.
The feature extraction device performs feature extraction on the tumour pathological image using the multiple stages of feature processing units provided in the convolutional neural network; each stage of feature processing unit outputs a characteristic image marked with tumour pathological features.
The convolutional neural network may be formed by cascading feature processing units, arranged stage by stage from low to high tumour-pathological-feature dimension. The feature extraction device may be a computing device that can independently run the convolutional-neural-network algorithm, or may comprise a processing unit for running that algorithm together with a matching cache unit. The processing unit may be shared with the processing unit in the image receiving device, or may be a chip or integrated-circuit unit dedicated to convolutional neural networks.
The first-stage feature processing unit receives the original tumour pathological image and performs tumour-pathological-feature processing at the lowest dimension; each subsequent feature processing unit receives the characteristic image output by the previous stage and performs feature processing at a higher dimension. Each feature processing unit may output the characteristic image obtained after feature extraction to the next-stage feature processing unit, which performs feature extraction based on the characteristic image of the cascaded preceding unit. Each stage of feature processing unit may traverse its input with convolution kernels corresponding to multiple pathological features of the same dimension to obtain the extracted characteristic image.
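The cascade described above can be sketched as follows; each stage is modelled as a callable, a deliberate simplification of the feature processing units, and the stage functions in the example are hypothetical placeholders:

```python
def run_cascade(image, stages):
    """Feed the original image to the first stage; feed each later
    stage the previous stage's output. All intermediate characteristic
    images are kept so the identification device can fuse several of
    them later."""
    outputs = []
    x = image
    for stage in stages:
        x = stage(x)
        outputs.append(x)
    return outputs
```

For example, with two placeholder stages `x + 1` and `x * 2`, an input of 1.0 produces intermediate outputs 2.0 and 4.0.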
The identification device fuses the characteristic images output by at least two stages of feature processing units, and identifies the tumour pathological information in the tumour pathological image based on the tumour pathological features in the fused image.
The identification device may share the computer equipment's processing unit with the feature extraction device, or may be individually equipped with a processor capable of handling neural-network data.
The identification device receives characteristic images from at least two feature processing units of the feature extraction device, up-samples each received characteristic image to obtain characteristic images matching the size of the original image, and then merges the up-sampled characteristic images, thereby obtaining a characteristic image that corresponds pixel-for-pixel with the original image and reflects the tumour-pathological-feature distribution of each pixel in the original image. Because each characteristic image comes from a feature processing unit at a different cascade position, the tumour pathological feature corresponding to each point of each characteristic image is determined by the corresponding feature processing unit. For example, the value of each point in the characteristic image from the first-stage feature processing unit reflects the tumour pathological features extracted by the first stage; the value of each point in the characteristic image from the third-stage feature processing unit reflects the tumour pathological features extracted by the first through third stages.
The tumour pathological information in the original image is identified from the tumour-pathology distribution of the merged characteristic image. For example, suppose the identification device, using a preset pathological-feature evaluation algorithm, determines that the tumour pathological features of pixel (x, y) (or an image region) in the merged characteristic image comprise a1% cell benign feature M1, a2% cell tumour feature M2, and a3% cell benign feature M3; after evaluation, the identification device determines that the tumour pathological information of pixel (x, y) (or the image region) in the corresponding original tumour pathological image is benign tumour. By evaluating the tumour pathological information of every pixel in the tumour pathological image, the identification system obtains an accurate tumour-pathology distribution for the captured image, thereby helping physicians quickly delimit the primary region for observation.
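A hypothetical sketch of the per-pixel evaluation step above: `proportions` plays the role of the a1%, a2%, a3% feature shares at one pixel, and `weights` the evaluation weights of the pathological features (both names and the weighted-maximum rule are illustrative assumptions, not the patent's algorithm):

```python
def evaluate_pixel(proportions, weights):
    """Return the pathological feature label with the largest weighted
    score for one pixel; proportions maps label -> share, weights maps
    label -> evaluation weight (default 1.0)."""
    scores = {label: share * weights.get(label, 1.0)
              for label, share in proportions.items()}
    return max(scores, key=scores.get)
```

For example, shares of 60% M1, 30% M2, 10% M3 with weights 1.0, 1.5, 1.0 give weighted scores 0.6, 0.45, 0.1, so the pixel is labelled M1.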
In some embodiments, as shown in Fig. 1, at least some of the feature processing units 1 may include at least one group of first structures composed of a first filter, a normalization module, and an activation module. The first filter 111 in the first structure 11 may include one or more convolution kernels of size Mi*Ni, where i is a natural number and ai is the number of convolution kernels; each convolution kernel corresponds to one pathological feature. A convolution kernel may represent a diseased-cell contour feature, a diseased-cell texture feature, a diseased-cell shape feature, or even a high-dimensional abstract tumour pathological feature. The convolution kernels may be obtained by machine learning, or by analysing the features of tumour pathological images through other means. The first filter sets its traversal stride according to factors such as the size of the received image, the size of the convolution kernels, and the cascade relationships in the neural network model. For example, in at least one feature processing unit at the front end of the neural network, the stride of each first filter is greater than 1. So that the tumour pathological features of every pixel of the tumour pathological image can be identified, the strides of the first filters in later feature processing units may be smaller: for example, the stride of each first filter in a later feature processing unit may be set to 1, or to a value greater than 1 but smaller than the stride of the preceding first filter.
In each first structure, the first filter traverses the received lesion image or characteristic image with its convolution kernels according to the stride and supplies the resulting characteristic image to the normalization module for normalization; the activation module then activates the normalized characteristic image. For example, the normalization module normalizes the pixel values of the characteristic image into [0, 1]. To prevent the gradient from weakening as the tumour pathological features grow with the cascade order, the activation module applies a nonlinear activation function to the normalized characteristic image; the activated characteristic image marks the feature points of the filtered tumour pathological features. Nonlinear activation functions include, but are not limited to: the ReLU function, the sigmoid function, the tanh function, and so on. For example, a point in the activated characteristic-image matrix with a value greater than 0.5 indicates the presence of the corresponding tumour pathological feature, while a value not greater than 0.5 indicates its absence.
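A minimal sketch of the normalization and activation steps just described, using min-max scaling into [0, 1] and the sigmoid function (one of the activation functions named above); the function names are illustrative, not the patent's:

```python
import math

def normalize(values):
    """Normalization module sketch: min-max scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def activate(v):
    """Activation module sketch: sigmoid nonlinearity."""
    return 1.0 / (1.0 + math.exp(-v))
```

For example, `[0.0, 5.0, 10.0]` normalizes to `[0.0, 0.5, 1.0]`, and the sigmoid maps 0.0 to exactly 0.5 (the presence/absence threshold mentioned above).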
It should be noted that the point values in the above example are only intended to illustrate the classification of values and their trend before and after activation, not to limit the activation function used.
It should also be noted that, according to design needs, each feature processing unit may output characteristic images that are filtered but not normalized, normalized but not activated, or activated, to a later feature processing unit or to the identification device. In other embodiments, because later feature processing units are less affected by gradients and the focal area itself may be small, the lesion features reflected in the characteristic image processed by the first filter of a later feature processing unit may span less than one pixel, which makes image segmentation difficult. At least some of the first filters in the later feature processing units of the convolutional neural network therefore use atrous (dilated) convolution to extract tumour pathological features from the received image. For example, at least one feature processing unit at the end of the convolutional neural network contains a first filter using atrous convolution.
In atrous convolution, the convolution kernel is expanded, and the inserted values are usually 0. For example, filling a 0 between adjacent values of a 2*2 convolution kernel yields a 3*3 convolution kernel. A first filter using atrous convolution can ensure the effectiveness of feature extraction by adjusting its stride; for example, the stride of a first filter using atrous convolution is larger than that of the other first filters in the same feature processing unit. A single first structure may contain either kind of first filter, or a combination of the two. For example, as shown in Fig. 2, the first filter 111 in first structure 11 convolves with an unexpanded convolution kernel to extract tumour pathological features, while the first filter 121 in first structure 12 uses atrous convolution to extract tumour pathological features.
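The kernel expansion described above can be sketched directly; with a dilation rate of 2, inserting one zero between adjacent values turns a 2*2 kernel into a 3*3 one, exactly as in the example (the function name is illustrative):

```python
def dilate_kernel(kernel, rate=2):
    """Expand a convolution kernel for atrous convolution by inserting
    rate - 1 zeros between adjacent values in each dimension."""
    h, w = len(kernel), len(kernel[0])
    oh, ow = (h - 1) * rate + 1, (w - 1) * rate + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(h):
        for j in range(w):
            out[i * rate][j * rate] = kernel[i][j]
    return out
```

The expanded kernel covers a wider receptive field with the same number of nonzero weights.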
In some specific examples, the constructed convolutional neural network contains multiple stages of feature processing units, each of which includes one or more cascaded first structures. As shown in Fig. 3, the first filter of each first structure in at least one feature processing unit at the front end of the network extracts tumour pathological features with unexpanded convolution kernels, while at least one processing unit at the end of the network contains first structures whose first filters use atrous convolution for lesion feature extraction. The feature extraction device uses this convolutional neural network to extract features from the tumour pathological image stage by stage. The identification device obtains characteristic images from at least two feature processing units; because the characteristic images output by the feature processing units are not necessarily the same size, the identification device may first up-sample each characteristic image to the size of the tumour pathological image, and then merge the up-sampled characteristic images according to the preset weight of each feature processing unit. The pixels of the merged characteristic image correspond one-to-one with those of the tumour pathological image, so the identification device can map the tumour pathological features marked on each pixel of the merged characteristic image onto the corresponding pixels of the tumour pathological image. Since the merged characteristic image contains both low-dimensional and high-dimensional tumour pathological features, the identification device can identify the tumour pathological features of each pixel by integrating the features of earlier and later stages, so the lesion region identified in the tumour pathological image is more accurate.
In other embodiments, at least some of the feature processing units include at least one group of second structures composed of a second filter, a normalization module, a first merging module, and an activation module.
The second filter in the second structure adjusts the dimension of a characteristic image using convolution. For example, a feature processing unit contains a first structure and a second structure, and the second structure contains two second filters, filter 1 and filter 2, which respectively receive the characteristic image P1 output by the first structure and the characteristic image P2 input to the feature processing unit. The second filter filter 1 uses a convolution operation to adjust characteristic image P1 from m1 dimensions to n dimensions, and the second filter filter 2 adjusts characteristic image P2 from m2 dimensions to n dimensions, making it convenient for the first merging module to splice the two characteristic images. The second filter in the second structure may be used only to adjust the dimension of the characteristic image, or may also perform feature extraction while adjusting the dimension.
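The dimension adjustment performed by the second filter can be sketched as a 1x1 convolution, a common way (assumed here, not specified by the description) to map each pixel's channel vector from m dimensions to n dimensions:

```python
def conv1x1(feat, weight):
    """1x1 convolution: feat is a list of C_in maps of size H x W;
    weight is C_out rows of C_in coefficients. Each pixel's channel
    vector is linearly mapped from C_in to C_out dimensions."""
    h, w = len(feat[0]), len(feat[0][0])
    return [[[sum(coef * feat[c][i][j] for c, coef in enumerate(row))
              for j in range(w)]
             for i in range(h)]
            for row in weight]
```

For example, a 2-channel 2x2 characteristic image can be adjusted to 3 channels so that two branch outputs of different dimensions become spliceable by the first merging module.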
To facilitate the statistics of subsequent feature distributions, the characteristic image output by each second filter in the second structure is normalized by the normalization module before being supplied to the first merging module.
Unlike the first structure, the second structure further includes a first merging module. The first merging module is arranged on the input side of the activation module and merges at least two characteristic images within the feature processing unit in which it resides. The characteristic images received by the first merging module may come from the input of that feature processing unit and from the output of the normalization module in the second structure. For example, as shown in Fig. 4, the second structure 21 includes two groups of second filters (211, 211') and normalization modules (212, 212'); each group adjusts the dimension of the image it receives so that the two output characteristic images have the same dimension, the first merging module 214 takes a pointwise weighted sum of the two characteristic images output by the two groups to combine them into one characteristic image, and the result is handed to the activation module 213 for activation. As another example, as shown in Fig. 5, the second structure 31 includes one group consisting of a second filter 311 and a normalization module 312; the first merging module 314 merges the characteristic image output by this group with the image received by the feature processing unit, and the result is handed to the activation module 313 for activation. As yet another example, as shown in Fig. 6, the feature processing unit includes a plurality of cascaded first structures 41 and a second structure 42 connected both to the first structures and to the input of the feature processing unit.
The activation processing performed by the activation module in the second structure is the same as or similar to that of the activation module in the first structure and is not described in detail here.
Using the second structure in the convolutional neural network allows the network to retain both low-dimensional and high-dimensional tumour pathological features while refining them. In a specific example, the constructed convolutional neural network includes multiple stages of feature processing units: at least one feature processing unit at the front end contains only first structures, while each subsequently cascaded feature processing unit contains both a first structure and a second structure. For example, as shown in Fig. 7, the constructed network contains one feature processing unit 51 composed only of first structures and a plurality of feature processing units 52 containing both first and second structures, which ensures that feature loss is minimized while the feature dimension gradually increases during lesion feature extraction. The cascaded feature processing units 52 may optionally alternate between second structures of the kinds shown in Fig. 4 and Fig. 5. On this basis, the last two stages of feature processing units 52 include filters using atrous convolution, which effectively eliminates speckle. The feature extraction device uses this convolutional neural network to extract tumour pathological features from the tumour pathological image and delivers the characteristic images output by multiple feature processing units to the identification device 53, which performs tumour pathology identification. The way the identification device 53 identifies tumour pathology may be the same as or similar to the foregoing examples and is not repeated here.
To further reduce the computational load on the computer equipment while preserving the accuracy of tumour pathological feature extraction, the feature extraction device also includes a down-sampling unit between two stages of feature processing units, which down-samples the characteristic image it receives.
At least one down-sampling unit may be provided in the constructed convolutional neural network. To reduce feature loss, the down-sampling unit is placed at the front end of the network. For example, as shown in Fig. 8, the down-sampling unit 54 is located between the first-stage feature processing unit 51 and the second-stage feature processing unit 52, and may perform down-sampling by, for example, max down-sampling (max pooling).
In some specific examples, the identification device fuses the characteristic images of at least one feature processing unit located before the down-sampling with those of at least one feature processing unit located after it. For example, as shown in Fig. 8, the identification device 53 is connected to the feature processing unit 51 located before the down-sampling unit 54, to the feature processing unit 52 located after the down-sampling unit 54 and containing first and second structures, and to the feature processing unit 52 at the end of the convolutional neural network; the identification device 53 fuses the characteristic images output by these three feature processing units and identifies the tumour pathological information.
Here, each feature image received by the identification device may be one that the corresponding feature processing unit has activation-processed, or one that the corresponding feature processing unit has filtered but not normalized, or one that has been normalized but not activated.
In some embodiments, as shown in Fig. 9, the identification device includes: a third filter 631, an upsampling module 632, and a merging module 633.
The number of third filters 631 is related to the number of feature images received by the identification device. To improve recognition efficiency, the identification device adopts a scheme in which each third filter 631 individually receives one feature image, so that the computer equipment can process them in parallel. Each third filter 631 performs classification processing on the feature image it receives, and may traverse the received feature image with a stride of 1 to classify the traversed points.
Here, multiple filtering windows for classification are preset in the third filter 631; by traversing the received feature image with each filtering window in turn, each point in the feature image is classified. For example, the third filter contains a pathological-classification filtering window and a non-pathological-classification filtering window, and the received feature image is convolved with each of the two filtering windows during traversal; the same pixel in the two output feature images then represents the likelihood of belonging to the lesion classification and of not belonging to the lesion classification, respectively. As another example, the third filter contains a filtering window for a first pathological classification, a filtering window for a second pathological classification, ..., a filtering window for an Nth pathological classification, and a non-lesion-classification filtering window, and the received feature image is traversed with each window; the same pixel in each output feature image then represents the likelihood of belonging to each pathological classification and of not belonging to any lesion classification, respectively. Each pathological classification and the lesion classification are taken as pathological features and serve as the basis for subsequent identification.
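The per-pixel classification with one filtering window per class can be sketched as follows. This is an assumed minimal form in which each window is a 1×1 window over the channel axis and the raw scores are turned into likelihoods by a softmax; the patent does not fix the window size or the score normalization:

```python
import numpy as np

def classify_pixels(feature, class_windows):
    """Apply one filtering window per class (here 1x1 windows over the
    C channels) to a C x H x W feature image and return one probability
    map per class."""
    # Each window is a length-C weight vector; tensordot contracts channels.
    scores = np.stack([np.tensordot(w, feature, axes=([0], [0]))
                       for w in class_windows])          # K x H x W
    # Softmax over the class axis converts scores into likelihoods.
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

feature = np.random.rand(3, 4, 4)              # 3 channels, 4x4 feature image
windows = [np.array([1.0, 0.0, 0.0]),          # "pathological" window (assumed)
           np.array([0.0, 1.0, 1.0])]          # "non-pathological" window (assumed)
probs = classify_pixels(feature, windows)
```

At every pixel the two output maps sum to one, matching the description that the same pixel carries the likelihood of belonging and of not belonging to the lesion classification.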
Each feature image classified by a third filter 631 is delivered to an individually connected upsampling module 632. The upsampling module 632 restores the image output by the corresponding third filter 631 to the size of the pathological image.
It should be noted that the third filter may also first classify each pixel according to its post-filtering class probability to obtain a single feature image, which the upsampling module 632 then enlarges.
The upsampling module 632 configures its upsampling window size according to the size of the feature image it receives, and fills the feature image by traversing that window. For example, the upsampling module 632 may restore the image by interpolation or by replication.
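The replication mode of upsampling is the simplest to illustrate: each value fills an entire window of the output. A minimal numpy sketch, with the function name and the integer scale factor as assumptions:

```python
import numpy as np

def upsample_replicate(feature, factor):
    """Recover image size by replication: each value of the small feature
    image fills a factor x factor window in the output."""
    return np.repeat(np.repeat(feature, factor, axis=0), factor, axis=1)

small = np.array([[1, 2],
                  [3, 4]])
big = upsample_replicate(small, 2)
# big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Interpolation-based recovery would instead blend neighbouring values, trading blockiness for smoothness.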
The second merging module 633 connects all the upsampling modules 632 and merges the restored feature images. Here, the second merging module may merge the restored feature images directly. In some embodiments, the second merging module 633 merges the restored feature images according to preset weights, where each weight may be set based on evaluations of several aspects, such as the position of the corresponding feature processing unit in the neural network and the dimension of the cancer pathology features it extracts. The weights may also be set by prior machine learning.
Each pixel in the merged feature image describes the probabilities of all classifications; by marking each pixel with the pathology information represented by the classification whose weighted posterior probability is largest, the cancer pathology information is obtained for each pixel of the cancer pathology image.
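The weighted merge followed by selecting the classification with the largest weighted posterior probability can be sketched directly. The per-unit weights and the two-class maps below are invented illustration data, not from the patent:

```python
import numpy as np

# Per-class probability maps from two feature-processing units
# (2 classes, 2x2 image), plus preset per-unit weights.
maps = [np.array([[[0.9, 0.2], [0.4, 0.1]],    # unit A: class 0 ("non-lesion")
                  [[0.1, 0.8], [0.6, 0.9]]]),  #         class 1 ("lesion")
        np.array([[[0.7, 0.3], [0.2, 0.2]],    # unit B: class 0
                  [[0.3, 0.7], [0.8, 0.8]]])]  #         class 1
weights = [0.5, 0.5]

# Weighted merge, then per-pixel argmax over the class axis.
merged = sum(w * m for w, m in zip(weights, maps))
labels = merged.argmax(axis=0)
# labels == [[0, 1], [1, 1]]: only the top-left pixel is marked non-lesion
```

Each entry of `labels` is the classification whose weighted probability dominates at that pixel, which is then mapped back onto the same pixel of the original image.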
The identification device may also include an identification module 634, which evaluates the cancer pathology information in the cancer pathology image based on the cancer pathology features corresponding to each pixel in the merged feature image.
The identification module 634 contains an evaluation weight for each cancer pathology feature, and determines the cancer pathology information of a pixel by weighting the cancer pathology features corresponding to that pixel. Further, the identification module 634 may render the color corresponding to each piece of cancer pathology information onto the cancer pathology image in the manner of a heat map, making it convenient for doctors to use as a diagnostic reference.
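The heat-map style rendering can be sketched as blending a per-class colour over the original image. The palette, the blending factor, and the function name are all hypothetical; the patent only says that a colour per piece of pathology information is drawn onto the image:

```python
import numpy as np

# Hypothetical palette: class index -> RGB colour drawn over the image.
PALETTE = {0: (0, 0, 255),    # benign: blue (assumed)
           1: (255, 0, 0)}    # tumour: red (assumed)

def render_heatmap(image, labels, alpha=0.4):
    """Blend one colour per identified class over the original H x W x 3 image."""
    overlay = np.zeros_like(image, dtype=float)
    for cls, colour in PALETTE.items():
        overlay[labels == cls] = colour
    # Round to the nearest intensity before converting back to uint8.
    return np.rint((1 - alpha) * image + alpha * overlay).astype(np.uint8)

img = np.full((2, 2, 3), 200, dtype=np.uint8)   # a flat grey 2x2 image
labels = np.array([[0, 1],
                   [1, 0]])
shown = render_heatmap(img, labels)
```

A doctor viewing `shown` sees red regions where the identified class is tumour and blue where it is benign, layered over the original tissue image.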
The cancer pathology information identified by the identification device may be superimposed on the cancer pathology image and saved as a new image in the image storage device. Alternatively, the identification device may package the identified cancer pathology information into a separate file, in a manner similar to map data, and store it in the image storage device together with the cancer pathology image.
The identification system 7 for cancer pathology images also includes a display device 73 used by doctors to view pathological images. The display device 73 may be connected to the computer equipment used for physician visits, or to the conference system used for doctor consultations. By reading the image storage device, the display device 73 displays the cancer pathology image, together with its cancer pathology information, to the doctor and other relevant personnel.
For example, according to the colors the identification device has assigned to each piece of cancer pathology information, the display device 73 renders each color onto the cancer pathology image so that the doctor can diagnose lesions by color.
Referring to Fig. 11, a recognition method for pathological images provided by the present application is shown. The recognition method is mainly performed by an identification system, which comprises software and hardware in computer equipment. The computer equipment includes, but is not limited to, a personal computer, a single server, a server cluster, or a cloud-architecture-based service end. By executing a preset recognition program using hardware such as the storage unit, the processing unit, and, where applicable, an interface unit and a display unit of the computer equipment, the identification system calls each device in turn to perform the following steps:
In step S110, a pathological image to be identified is obtained. The pathological image may be a pathological slice image, a pathological image obtained by radiological scanning, or the like. The pathological image may be obtained from a local storage unit, or from another storage device over a network.
In step S120, feature extraction is performed on the pathological image using the multiple stages of feature processing units provided in a convolutional neural network, where each stage of feature processing unit outputs a feature image marked with pathological features.
Here, the convolutional neural network may be formed by cascading feature processing units, where the feature processing units are arranged stage by stage from low to high pathological feature dimension. This step may be performed by computing equipment capable of independently handling the convolutional neural network algorithm, or by a processing unit, with a matching cache unit, that is included for handling the convolutional neural network algorithm.
Here, the first-stage feature processing unit receives the original pathological image and performs pathological feature processing of the lowest dimension; each subsequent feature processing unit receives the feature image output by the preceding feature processing unit and performs higher-dimensional pathological feature processing. Each feature processing unit may output the feature image obtained after pathological feature extraction to the next-stage feature processing unit, which performs feature extraction based on the feature image of the cascaded preceding unit. Each stage of feature processing unit may traverse its input with the convolution kernels corresponding to multiple pathological features of the same dimension to obtain the extracted feature image.
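The kernel traversal each stage performs is an ordinary strided 2-D convolution. A minimal single-kernel numpy sketch, with the toy "contour-like" kernel and the stride chosen purely for illustration:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Traverse the image with one convolution kernel at the given stride,
    producing one feature image."""
    kh, kw = kernel.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])        # toy contour-like kernel (assumed)
fmap = conv2d(image, edge_kernel, stride=2)  # 2x2 feature image
```

A feature processing unit would run many such kernels of the same dimension and stack the resulting feature images before passing them to the next stage.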
In step S130, fusion processing is performed on the feature images output by at least two stages of feature processing units, and the pathological information in the pathological image is identified based on the pathological features in the fused image.
Here, this step may be performed by the processing unit of the computer equipment, or by a separately configured processor capable of handling neural network data.
In this step, feature images are received from at least two of the feature processing units described in step S120, and each received feature image is upsampled to obtain a feature image consistent with the size of the original image; the upsampled feature images are then merged, yielding a feature image that corresponds pixel-by-pixel to the original image and reflects the distribution of pathological features at each pixel. Since each feature image comes from a feature processing unit at a different cascade position, the pathological features corresponding to each point on each feature image are determined by the corresponding feature processing unit. For example, the value of each point in the feature image from the first-stage feature processing unit reflects the pathological features extracted by the first-stage feature processing unit. As another example, the value of each point in the feature image from the third-stage feature processing unit reflects the pathological features extracted by each of the first through third stages of feature processing units.
The pathological information in the original image is identified from the pathology distribution of the merged feature image. For example, a preset pathological feature evaluation algorithm determines that pixel (or image region) (x, y) in the merged feature image contains a1% of benign-cell feature M1, a2% of tumour-cell feature M2, and a3% of benign-cell feature M3; after evaluation, the pathological information of pixel (or image region) (x, y) in the corresponding original pathological image is determined to be a benign tumour. By evaluating the pathological information of each pixel in the pathological image, an accurate pathology distribution of the captured pathological image is obtained, which helps the doctor quickly delimit the primary region of observation.
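One hedged way to read the (x, y) example above as code: score the feature fractions with preset evaluation weights and threshold the result. The weight values, the sign convention, and the decision rule below are all assumptions for illustration; the patent only states that preset evaluation weights are applied to the features of each pixel:

```python
# Hypothetical evaluation weights: positive for tumour-cell features,
# negative for benign-cell features (assumed, not from the patent).
FEATURE_WEIGHTS = {"M1": -1.0,   # benign-cell feature
                   "M2": +2.0,   # tumour-cell feature
                   "M3": -1.0}   # benign-cell feature

def evaluate_pixel(fractions):
    """Weight each feature fraction; a non-positive score reads as benign."""
    score = sum(FEATURE_WEIGHTS[name] * frac for name, frac in fractions.items())
    return "tumour" if score > 0 else "benign tumour"

# Pixel (x, y) carries a1=50% of M1, a2=10% of M2, a3=40% of M3.
label = evaluate_pixel({"M1": 0.5, "M2": 0.1, "M3": 0.4})
# label == "benign tumour"
```

Running this over every pixel yields the per-pixel pathology distribution described above.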
In some embodiments, as shown in Fig. 1, at least some of the feature processing units 1 may include at least one set of a first structure composed of a first filter, a normalization module, and an activation module. The first filter 111 in the first structure 11 may include one or more convolution kernels of size Mi×Ni, where i is a natural number and ai is the number of kernels; each convolution kernel corresponds to one pathological feature. A convolution kernel may represent a diseased-cell contour feature, a diseased-cell pattern feature, a diseased-cell shape feature, or even a high-dimensional abstract pathological feature. The convolution kernels may be obtained by machine learning, or by analyzing the features of pathological images through other means. The traversal stride of each first filter is set according to several factors, such as the size of the received image, the size of the convolution kernel, and the cascade connections in the neural network model. For example, the stride of each first filter in at least the first stage of feature processing units at the front end of the neural network is greater than 1. So that the pathological features of every pixel of the pathological image can be identified, the stride of each first filter in later-stage feature processing units may be smaller: it is set to 1, or to a value greater than 1 but smaller than the stride of the preceding first filters.
In each first structure, the first filter traverses the received lesion image or feature image with its convolution kernels according to the stride, and supplies the resulting feature image to the normalization module for normalization; the activation module then performs activation processing on the normalized feature image. For example, the normalization module normalizes the pixel values in the feature image to the range [0, 1]. To prevent pathological features from weakening as the cascade order increases and gradients diminish, the activation module applies a nonlinear activation function to the normalized feature image; the activated feature image marks the feature points of the filtered pathological features. The nonlinear activation function includes, but is not limited to, the ReLU function, the sigmoid function, the tanh function, and the like. For example, in the matrix of the activated feature image, points with values greater than 0.5 indicate the presence of the corresponding pathological feature, while the remaining points do not.
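The normalize-then-activate step, with values above 0.5 marking the filtered feature, can be sketched as follows. Min-max normalization and a sigmoid steepened around 0.5 are assumed choices; the patent allows any normalization into [0, 1] and any of ReLU, sigmoid, or tanh:

```python
import numpy as np

def normalize(feature):
    """Min-max normalise pixel values into [0, 1]."""
    lo, hi = feature.min(), feature.max()
    return (feature - lo) / (hi - lo)

def activate(feature):
    """Nonlinear activation: a sigmoid centred on 0.5 (assumed; ReLU or
    tanh would play the same role)."""
    return 1.0 / (1.0 + np.exp(-(feature - 0.5) * 10))

fmap = np.array([[2.0, 8.0],
                 [5.0, 6.0]])
act = activate(normalize(fmap))
marked = act > 0.5   # points carrying the filtered pathological feature
# marked == [[False, True], [False, True]]
```

The activation sharpens the contrast between responding and non-responding points, so the cascade of later stages does not wash the feature out.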
It should be noted that the point values in the matrices above are only used to illustrate the classification of values before and after activation processing and their trend of change, and do not limit the activation function used.
It should also be noted that, according to design needs, each feature processing unit may choose to output to the later-stage feature processing unit or to the identification device a feature image that has been filtered but not normalized, normalized but not activated, or activated.
In other embodiments, because the influence of gradients is small and the focal area itself is small, the lesion features reflected by the feature image processed by the first filters in later-stage feature processing units may cover less than one pixel, which makes image segmentation difficult. At least some of the first filters in the later-stage feature processing units of the convolutional neural network therefore use atrous (dilated) convolution to extract pathological features from the received image. For example, the first filters contained in at least one feature processing unit at the end of the convolutional neural network use atrous convolution.
Here, atrous convolution expands the convolution kernel, and the inserted values are usually 0; for example, filling a 0 between the adjacent values of a 2×2 convolution kernel yields a 3×3 convolution kernel. A first filter using atrous convolution can adjust its stride to ensure the validity of feature extraction; for example, the stride of a first filter using atrous convolution is larger than that of the other first filters in the same feature processing unit. A first structure may contain either kind of first filter, or a combination of the two. For example, as shown in Fig. 2, the first filter 111 in first structure 11 performs convolution with an unexpanded kernel to extract pathological features, while the first filter 121 in first structure 12 uses atrous convolution to extract pathological features.
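The kernel expansion described above (2×2 to 3×3 by inserting zeros) can be written directly. Only the function name and the generalized dilation rate are assumptions; the numeric behaviour matches the patent's example:

```python
import numpy as np

def dilate_kernel(kernel, rate=2):
    """Expand a convolution kernel for atrous convolution: insert
    rate-1 zeros between adjacent kernel values."""
    kh, kw = kernel.shape
    out = np.zeros((kh + (kh - 1) * (rate - 1),
                    kw + (kw - 1) * (rate - 1)), dtype=kernel.dtype)
    out[::rate, ::rate] = kernel
    return out

k = np.array([[1, 2],
              [3, 4]])
dk = dilate_kernel(k)
# dk == [[1, 0, 2], [0, 0, 0], [3, 0, 4]]  -- a 2x2 kernel becomes 3x3
```

The expanded kernel covers a wider receptive field with the same number of nonzero weights, which is why later-stage filters use it to keep sub-pixel lesion features from vanishing.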
In some specific examples, the constructed convolutional neural network includes multiple stages of feature processing units, each of which includes one or more cascaded first structures. As shown in Fig. 3, the first filter of each first structure 111 in at least one feature processing unit 11 at the front end of the convolutional neural network performs pathological feature extraction with unexpanded convolution kernels, while the first filter of each first structure 121 in at least one feature processing unit 12 at the end of the convolutional neural network performs lesion feature extraction using atrous convolution. In step S120, the convolutional neural network extracts features from the pathological image stage by stage. In step S130, feature images are obtained from at least two feature processing units (11, 12); since the feature images output by the feature processing units are not necessarily the same size, each feature image may first be upsampled to the size of the pathological image, and the upsampled feature images are then merged according to the preset weight of each feature processing unit. The pixels of the merged feature image correspond one-to-one with those of the pathological image, so the pathological features marked at each pixel of the merged feature image map onto the corresponding pixels of the pathological image. Because each point in the merged feature image simultaneously contains low-dimensional and high-dimensional pathological features, the method identifies the pathological features of each pixel by integrating the pathological features of both earlier and later stages, so the lesion region identified in the pathological image is more accurate.
In other embodiments, at least some of the feature processing units include at least one second structure composed of a second filter, a normalization module, a first merging module, and an activation module.
The second filter in the second structure adjusts the dimension of a feature image by convolution. For example, one feature processing unit contains both a first structure and a second structure, where the second structure contains two second filters, filter 1 and filter 2, which respectively receive the feature image P1 output by the first structure and the feature image P2 input to the feature processing unit; the second filter filter 1 adjusts feature image P1 from m1 dimensions to n dimensions by convolution, and the second filter filter 2 adjusts feature image P2 from m2 dimensions to n dimensions by convolution. This makes it convenient for the first merging module to splice the two feature images. Here, a second filter in the second structure may serve only to adjust the dimension of a feature image, or it may also perform feature extraction while adjusting the dimension.
To facilitate the statistics of subsequent feature distributions, the feature image output by each second filter in the second structure is normalized by a normalization module before being supplied to the first merging module.
Unlike the first structure, the second structure also contains a first merging module. The first merging module is arranged on the input side of the activation module and merges at least two feature images within its feature processing unit. The feature images received by the first merging module may come from the input of the feature processing unit and from the output of a normalization module in the second structure. For example, as shown in Fig. 4, second structure 21 contains two groups of second filters (211, 211') and normalization modules (212, 212'); each group adjusts the dimension of the image it receives so that the two output feature images have the same dimension, the first merging module 214 then combines the two feature images output by the two groups into one measurement feature image by a weighted pointwise sum, and the activation module 213 performs activation processing. As another example, referring to Fig. 5, which shows a schematic diagram of the second structure in yet another embodiment, second structure 31 contains one group consisting of a second filter 311 and a normalization module 312; the first merging module 314 merges the feature image output by this group with the image received by the feature processing unit, and the activation module 313 then performs activation processing. As a further example, referring to Fig. 6, which shows a schematic diagram of a feature processing unit in another embodiment, the feature processing unit contains multiple cascaded first structures 41 and a second structure 42 connected to the first structures and to the input of the feature processing unit.
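The second structure's pipeline (adjust both inputs to the same dimension, merge pointwise, activate) can be sketched as follows. Treating the second filter as a 1×1 convolution (a channel-mixing matrix) and using ReLU for the activation module are assumptions; the patent only requires a dimension-adjusting convolution and a nonlinear activation:

```python
import numpy as np

def adjust_dims(feature, weight):
    """Second filter sketch: a 1x1 convolution, i.e. a channel-mixing
    matrix mapping a C_in x H x W feature image to C_out channels."""
    return np.tensordot(weight, feature, axes=([1], [0]))

def second_structure(p1, p2, w1, w2):
    """Adjust both inputs to the same dimension n, merge by pointwise
    sum, then activate (ReLU assumed)."""
    merged = adjust_dims(p1, w1) + adjust_dims(p2, w2)
    return np.maximum(merged, 0.0)

p1 = np.random.rand(3, 4, 4)   # P1: m1 = 3 dimensions from the first structure
p2 = np.random.rand(5, 4, 4)   # P2: m2 = 5 dimensions from the unit input
w1 = np.random.rand(2, 3)      # filter 1: maps 3 -> n = 2 dimensions
w2 = np.random.rand(2, 5)      # filter 2: maps 5 -> n = 2 dimensions
out = second_structure(p1, p2, w1, w2)
# out.shape == (2, 4, 4): both branches now share dimension n = 2
```

Because P2 comes straight from the unit input while P1 has passed through the first structure, the merged output carries both low-dimensional and high-dimensional features, which is the stated purpose of the second structure.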
The activation processing of the activation module in the second structure is the same as or similar to that of the activation module in the first structure, and is not detailed here.
Using the second structure in the convolutional neural network enables finer-grained processing while retaining both low-dimensional and high-dimensional pathological features. In a specific example, the constructed convolutional neural network includes multiple stages of feature processing units, wherein at least one feature processing unit at the front end includes only the first structure, and each subsequently cascaded feature processing unit includes both the first structure and the second structure. For example, as shown in Fig. 7, the constructed convolutional neural network contains one feature processing unit 51 composed only of first structures and multiple feature processing units 52 containing both first and second structures, which ensures that the loss of lesion features is reduced as much as possible while the feature dimension gradually increases during lesion feature extraction. Among the multiple cascaded feature processing units 52, second structures similar to those shown in Fig. 4 and Fig. 5 may be included in an interleaved fashion. On this basis, the last two stages of feature processing units 52 contain first filters using atrous convolution, to effectively eliminate speckle artifacts. The convolutional neural network extracts pathological features from the pathological image, and in step S130 pathology identification is performed on the feature images output by multiple feature processing units of the convolutional neural network. Here, the manner of identifying pathology may be the same as or similar to the foregoing examples and is not repeated here.
To further reduce the computational load on the computer equipment while preserving the accuracy of pathological feature extraction, the method also includes a step of downsampling the feature image output by a feature processing unit.
Here, at least one downsampling unit may be provided in the constructed convolutional neural network. To reduce feature loss, the downsampling unit is placed at the front end of the network. For example, as shown in Fig. 8, the downsampling unit 54 is located between the first-stage feature processing unit 51 and the second-stage feature processing unit 52. The downsampling unit 54 may perform downsampling by, for example, maximum (max-pooling) downsampling.
In some specific examples, the feature images selected in step S130 may come from at least one feature processing unit before the downsampling and at least one feature processing unit after the downsampling, and fusion processing is performed on them. For example, as shown in Fig. 8, one selected feature processing unit 51 is located before the downsampling unit 54, one connected feature processing unit 52 is located after the downsampling unit 54 and contains both the first and second structures, and another connected feature processing unit 52 is located at the end of the convolutional neural network; the pathological information is then identified by fusing the feature images output by these three feature processing units.
Here, each feature image received in step S130 may be one that the corresponding feature processing unit has activation-processed, or one that has been filtered but not normalized, or one that has been normalized but not activated.
In some embodiments, as shown in Fig. 9, step S130 is performed by an identification device that includes a third filter 631, an upsampling module 632, and a second merging module 633.
The number of third filters 631 is related to the number of feature images received by the identification device. To improve recognition efficiency, the identification device adopts a scheme in which each third filter 631 individually receives one feature image, so that the computer equipment can process them in parallel. Each third filter 631 performs classification processing on the feature image it receives, and may traverse the received feature image with a stride of 1 to classify the traversed points.
Here, multiple filtering windows for classification are preset in the third filter 631; by traversing the received feature image with each filtering window in turn, each point in the feature image is classified. For example, the third filter contains a pathological-classification filtering window and a non-pathological-classification filtering window, and the received feature image is convolved with each of the two filtering windows during traversal; the same pixel in the two output feature images then represents the likelihood of belonging to the lesion classification and of not belonging to the lesion classification, respectively. As another example, the third filter contains a filtering window for a first pathological classification, a filtering window for a second pathological classification, ..., a filtering window for an Nth pathological classification, and a non-lesion-classification filtering window, and the received feature image is traversed with each window; the same pixel in each output feature image then represents the likelihood of belonging to each pathological classification and of not belonging to any lesion classification, respectively. Each pathological classification and the lesion classification are taken as pathological features and serve as the basis for subsequent identification.
Each feature image classified by a third filter 631 is delivered to an individually connected upsampling module 632. The upsampling module 632 restores the image output by the corresponding third filter 631 to the size of the pathological image.
It should be noted that the third filter may also first classify each pixel according to its post-filtering class probability to obtain a single feature image, which the upsampling module 632 then enlarges.
The upsampling module 632 configures its upsampling window size according to the size of the feature image it receives, and fills the feature image by traversing that window. For example, the upsampling module 632 may restore the image by interpolation or by replication.
The second merging module 633 connects all the upsampling modules 632 and merges the restored feature images. Here, the second merging module may merge the restored feature images directly. In some embodiments, the second merging module 633 merges the restored feature images according to preset weights, where each weight may be set based on evaluations of several aspects, such as the position of the corresponding feature processing unit in the neural network and the dimension of the pathological features it extracts. The weights may also be set by prior machine learning.
Each pixel in the merged feature image describes the probabilities of all classifications; by marking each pixel with the pathological information represented by the classification whose weighted posterior probability is largest, the pathological information in the pathological image is obtained for each pixel of the pathological image.
Step S130 also includes a step of evaluating the pathological information in the pathological image based on the pathological features corresponding to each pixel in the merged feature image.
An evaluation weight is preset for each pathological feature, and the pathological information of a pixel is determined by weighting the pathological features corresponding to that pixel. Further, the color corresponding to each piece of pathological information may be rendered onto the pathological image in the manner of a heat map, making it convenient for doctors to use as a diagnostic reference.
The present application also provides a recognition method for cancer pathology images, which is mainly performed by an identification system for cancer pathology images. The cancer pathology images include, but are not limited to, breast tumour pathological images, lung tumour pathological images, and the like. The identification system for cancer pathology images can be applied to the medical system of a hospital: a doctor can obtain the cancer pathology image of a patient by operating the medical system, and have the identification system perform pathology identification on the cancer pathology image. The identification system for cancer pathology images may be partly or entirely installed on computer equipment in the hospital or provided by a third party. The computer equipment includes, but is not limited to, a hospital or third-party service end, or a personal computer used by a doctor. The service end includes, but is not limited to, a single server, a server cluster, or a cloud-architecture-based service end.
The recognition method for cancer pathology images comprises the following steps:
In step S210, a cancer pathology image to be identified is obtained. The cancer pathology image may be a tumour pathological slice image, a cancer pathology image obtained by radiological scanning, or the like. The cancer pathology image may be obtained from a local storage device, or from a storage device on the network.
In step S220, feature extraction is performed on the cancer pathology image using the multiple stages of feature processing units provided in a convolutional neural network, where each stage of feature processing unit outputs a feature image marked with cancer pathology features.
Here, the convolutional neural network may be formed by cascading the feature processing units, the units being arranged stage by stage according to the dimension of the tumor pathological features, from low to high. This step may be performed by computer equipment capable of processing the convolutional neural network algorithm alone, or by a processing unit and matching cache unit included for processing the convolutional neural network algorithm. The processing unit may be shared with the processing unit of the image receiving device, or may refer to a chip or integrated-circuit unit dedicated to convolutional neural networks.
Here, the first-stage feature processing unit receives the original tumor pathological image and processes tumor pathological features of the lowest dimension; each subsequent feature processing unit receives the feature image output by the preceding unit and processes features of a higher dimension. Each feature processing unit may output its feature image, obtained after feature extraction, to the next-stage feature processing unit, which performs feature extraction on the feature image of the cascaded preceding unit. Each stage of feature processing unit may traverse the image with convolution kernels corresponding to multiple pathological features of the same dimension to obtain the extracted feature image.
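As an illustrative sketch, not the patented implementation, the cascade above can be modeled as successive convolutions, each stage consuming the feature image output by the preceding stage. The image size, kernel values and number of stages here are arbitrary assumptions:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Hypothetical two-stage cascade: each stage filters the previous feature image.
stage_kernels = [np.ones((3, 3)) / 9.0, np.ones((3, 3)) / 9.0]
image = np.random.rand(16, 16)
features = []
x = image
for k in stage_kernels:
    x = conv2d(x, k)       # feature image passed on to the next stage
    features.append(x)     # each stage's output is kept for later fusion
```

Note that each stage's output shrinks slightly under valid convolution, which is why the later fusion step must up-sample the feature images back to a common size.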
In step S230, fusion processing is performed on the feature images output by at least two stages of feature processing units, and the tumor pathological information in the tumor pathological image is recognized based on the tumor pathological features in the fused image.
Here, this step may be performed by the processing unit of the computer equipment, or by a separately configured processor capable of handling neural network data.
Feature images are received from at least two feature processing units in step S220, and each received feature image is up-sampled to obtain a feature image consistent with the size of the original image; the up-sampled feature images are then fused, yielding a feature image whose pixels correspond one-to-one with those of the original image and which reflects the distribution of tumor pathological features at each pixel of the original image. Since the feature images come from feature processing units at different cascade positions, the tumor pathological feature corresponding to each point on each feature image is determined by the respective feature processing unit. For example, the value of each point in the feature image from the first-stage feature processing unit reflects the tumor pathological feature extracted by that unit. As another example, the value of each point in the feature image from the third-stage feature processing unit reflects the tumor pathological features extracted by the first through third stages together.
The tumor pathological information in the original image is recognized from the tumor pathological feature distribution of the fused feature image. For example, using a preset tumor pathological feature evaluation algorithm, it is determined that pixel (x, y) (or an image region) in the fused feature image contains a1% of the benign cell feature M1, a2% of the cell tumor feature M2 and a3% of the benign cell feature M3; after evaluation by the recognition device, the tumor pathological information of the corresponding pixel (x, y) (or image region) in the original tumor pathological image is determined to be a benign tumor. By evaluating the tumor pathological information of each pixel of the tumor pathological image, an accurate tumor pathology distribution of the captured image is obtained, thereby helping the doctor quickly delimit the primary observation region.
In some embodiments, as shown in Fig. 1, at least part of the feature processing units 1 may include at least one group of first structures composed of a first filter, a normalization module and an activation module. The first filter 111 in the first structure 11 may include ai convolution kernels of size Mi×Ni, where i is a natural number and ai denotes the number of kernels; each convolution kernel corresponds to one pathological feature. A convolution kernel may represent a diseased-cell contour feature, a diseased-cell texture feature, a diseased-cell shape feature, or even a high-dimensional abstract tumor pathological feature. The convolution kernels may be obtained by machine learning, or obtained by analyzing the features of tumor pathological images through other approaches. The traversal step length of the first filter is set according to many factors, such as the size of the received image, the size of the convolution kernels and the cascade relations in the neural network model. For example, the step length of each first filter in at least one feature processing unit located at the front end of the neural network is greater than 1. In order that the tumor pathological feature of each pixel of the tumor pathological image be recognized, the step length of each first filter in the later-stage feature processing units may be smaller; for example, it may be set to 1, or to a value greater than 1 but smaller than the step length of the preceding first filters.
In each first structure, the first filter traverses the received lesion image or feature image with its convolution kernels according to the step length and supplies the resulting feature image to the normalization module for normalization; the activation module then performs activation processing on the normalized feature image. For example, the normalization module normalizes the pixel values of the feature image to the interval [0, 1]. To prevent gradient attenuation as the order of the cascaded tumor pathological features increases, the activation module applies a nonlinear activation function to the normalized feature image; in the activated feature image, the feature points carrying the filtered tumor pathological features are marked. The nonlinear activation function includes, but is not limited to, the ReLU function, the Sigmoid function, the tanh function, and the like. For example, in the matrices of the feature image before and after activation processing, a point whose value exceeds 0.5 is regarded as carrying the corresponding tumor pathological feature, and otherwise as not carrying it.
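The filter → normalize → activate pipeline of the first structure can be sketched as follows. The min-max normalization, the sigmoid-style activation and the 0.5 marking threshold are illustrative assumptions consistent with the [0, 1] example above, not the patent's prescribed functions:

```python
import numpy as np

def normalize(feat):
    # Min-max normalization of pixel values to the interval [0, 1]
    lo, hi = feat.min(), feat.max()
    return (feat - lo) / (hi - lo) if hi > lo else np.zeros_like(feat)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

feat = np.array([[0.2, 3.0], [-1.0, 5.0]])    # hypothetical filtered feature image
norm = normalize(feat)                         # pixel values now in [0, 1]
act = sigmoid(4.0 * (norm - 0.5))              # nonlinear activation centred at 0.5
marked = act > 0.5                             # points carrying the pathological feature
```

Because the sigmoid is monotonic, the points marked after activation are exactly those whose normalized value exceeds 0.5, matching the threshold convention described above.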
It should be noted that the point values in the above matrices only illustrate the classification of values and their trend before and after activation processing, and do not limit the activation function used.
It should also be noted that, according to design needs, each feature processing unit may choose to output a feature image that has been filtered but not normalized, normalized but not activated, or activated, to the later-stage feature processing unit or to the recognition device.
In other embodiments, affected by gradients and by the small size of the lesion area itself, the lesion features reflected in the feature image processed by the first filter in a later-stage feature processing unit may be smaller than one pixel, which makes image segmentation difficult. Therefore, at least part of the first filters in the later-stage feature processing units of the convolutional neural network use dilated (atrous) convolution to extract tumor pathological features from the received image. For example, the first filters included in at least one feature processing unit at the end of the convolutional neural network use dilated convolution.
Here, dilated convolution expands the convolution kernel, the padded values usually being 0. For example, a 0 is filled between adjacent values of a 2×2 convolution kernel, yielding a 3×3 kernel. A first filter using dilated convolution can ensure the validity of feature extraction by adjusting its step length; for example, the step length of a first filter using dilated convolution is larger than that of the other first filters in the same feature processing unit. A first structure may include either kind of first filter, or a combination of the two. For example, as shown in Fig. 2, the first filter 111 in the first structure 11 convolves with an unexpanded kernel to extract tumor pathological features, while the first filter 121 in the first structure 12 uses dilated convolution to extract tumor pathological features.
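The kernel expansion described above can be sketched directly: filling (rate − 1) zeros between adjacent kernel values turns a 2×2 kernel into a 3×3 one, enlarging the receptive field without adding learned parameters. The dilation rate of 2 matches the 2×2 → 3×3 example:

```python
import numpy as np

def dilate_kernel(kernel, rate=2):
    """Expand a kernel for dilated convolution by inserting
    (rate - 1) zeros between adjacent kernel values."""
    kh, kw = kernel.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1))
    out[::rate, ::rate] = kernel
    return out

k = np.array([[1.0, 2.0],
              [3.0, 4.0]])
dk = dilate_kernel(k)   # the 2x2 kernel becomes 3x3, zero-padded between values
```

Convolving with `dk` is then equivalent to a dilated convolution with the original kernel `k`.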
In some specific examples, the constructed convolutional neural network includes multi-stage feature processing units, each including one or more cascaded first structures. As shown in Fig. 3, the first filters of the first structures in at least one feature processing unit at the front end of the convolutional neural network extract tumor pathological features with unexpanded convolution kernels, while at least one processing unit at the end of the network includes first structures whose first filters use dilated convolution to extract lesion features. The feature extraction device performs stage-by-stage feature extraction on the tumor pathological image using this convolutional neural network. The recognition device obtains feature images from at least two feature processing units; since the sizes of the feature images output by the units are not necessarily the same, the recognition device may first up-sample each feature image to the size of the tumor pathological image, and then merge the up-sampled feature images according to the preset weight of each feature processing unit. The pixels of the merged feature image correspond one-to-one with those of the tumor pathological image, so the tumor pathological features marked at each pixel of the merged feature image correspond to the pixels of the tumor pathological image. Because the merged feature image contains both low-dimensional and high-dimensional tumor pathological features, the tumor pathological feature of each pixel can be recognized by integrating the features of the front and rear stages, and the lesion region recognized in the tumor pathological image is therefore more accurate.
In other embodiments, at least part of the feature processing units include at least one group of second structures composed of a second filter, a normalization module, a first merging module and an activation module.
The second filter in the second structure is used to adjust the dimension of a feature image by convolution. For example, a feature processing unit includes a first structure and a second structure, the second structure containing two second filters, filter 1 and filter 2, which respectively receive the feature image P1 output by the first structure and the feature image P2 at the input of the feature processing unit; the second filter filter 1 adjusts the feature image P1 from m1 dimensions to n dimensions by a convolution operation, and the second filter filter 2 adjusts the feature image P2 from m2 dimensions to n dimensions. This facilitates the splicing of the two feature images by the first merging module. The second filter in the second structure may be used only to adjust the dimension of the feature image, or may even perform feature extraction while adjusting the dimension.
To facilitate the statistics of the subsequent feature distribution, the feature image output by each second filter in the second structure is normalized via the normalization module before being supplied to the first merging module.
Unlike the first structure, the second structure also includes a first merging module. The first merging module is arranged on the input side of the activation module and merges at least two feature images within the feature processing unit. The feature images received by the first merging module may come from the input of the feature processing unit and from the output of the normalization modules in the second structure. For example, as shown in Fig. 4, the second structure 21 includes two groups of second filters (211, 211') and normalization modules (212, 212'); the two groups each adjust the dimension of the image they receive so that the two output feature images have identical dimensions, whereupon the first merging module 214 combines them into one feature image by a point-wise weighted sum and hands it to the activation module 213 for activation processing. As another example, as shown in Fig. 5, the second structure 31 includes one group of second filter 311 and normalization module 312; the first merging module 314 merges the feature image output by this group with the image received by the feature processing unit, then hands the result to the activation module 313 for activation processing. As yet another example, as shown in Fig. 6, a feature processing unit includes multiple cascaded first structures 41 and a second structure 42 connected with the first structures and with the input of the feature processing unit.
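A minimal sketch of the second structure's dimension adjustment and point-wise merge, assuming 1×1 convolutions as the second filters; the channel counts m1 = 3, m2 = 5, n = 2, the merge weights of 0.5 and the random weights are illustrative assumptions:

```python
import numpy as np

def conv1x1(feat, weights):
    """1x1 convolution: remaps the channel dimension at every pixel.
    feat: (H, W, C_in); weights: (C_in, C_out) -> result (H, W, C_out)."""
    return feat @ weights

rng = np.random.default_rng(0)
p1 = rng.random((4, 4, 3))     # feature image from the first structure (m1 = 3 dims)
p2 = rng.random((4, 4, 5))     # feature image at the unit's input (m2 = 5 dims)
w1 = rng.random((3, 2))        # adjusts m1 -> n = 2 dims
w2 = rng.random((5, 2))        # adjusts m2 -> n = 2 dims
# point-wise weighted merge of the two dimension-matched feature images
merged = 0.5 * conv1x1(p1, w1) + 0.5 * conv1x1(p2, w2)
```

Only after both inputs are brought to the same dimension n can they be combined point by point, which is why the second filters precede the first merging module.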
The activation processing of the activation module in the second structure is the same as or similar to that of the activation module in the first structure, and is not detailed here.
Using the second structure in the convolutional neural network allows finer processing while retaining both low-dimensional and high-dimensional tumor pathological features. In a specific example, the constructed convolutional neural network includes multi-stage feature processing units, of which at least one unit at the front end contains only first structures, while each subsequently cascaded unit contains first structures and second structures. For example, as shown in Fig. 7, the constructed convolutional neural network is composed only of a feature processing unit 51 containing first structures and multiple feature processing units 52 containing first structures and second structures, which ensures that lesion-feature loss is reduced as far as possible while the feature dimension gradually increases during extraction. Among the multiple cascaded feature processing units 52, second structures similar to those shown in Fig. 4 and Fig. 5 may be selected alternately. On this basis, the first filters included in the last two stages of feature processing units 52 use dilated convolution, effectively eliminating speckle phenomena. According to steps S220 and S230, tumor pathological feature extraction is performed on the tumor pathological image using this convolutional neural network, and tumor pathology recognition is performed on the feature images output by its multiple feature processing units. The manner of recognition may be the same as or similar to the foregoing examples and is not repeated here. To further reduce the computation load of the computer equipment while retaining accuracy of tumor pathological feature extraction, the method also includes a step of performing down-sampling processing on the received feature image.
Here, at least one down-sampling unit may be arranged in the constructed convolutional neural network. To reduce feature loss, the down-sampling unit is arranged at the front end of the convolutional neural network. For example, as shown in Fig. 8, the down-sampling unit 54 is located between the first-stage feature processing unit 51 and the second-stage feature processing unit 52. The down-sampling unit 54 may perform down-sampling processing in a manner such as max down-sampling.
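Max down-sampling keeps only the largest response in each window, halving the spatial size while preserving the strongest feature activations. A sketch, with the 2×2 window size as an assumption:

```python
import numpy as np

def max_downsample(feat, size=2):
    """Max down-sampling: keep the maximum of each size x size window."""
    h, w = feat.shape
    h2, w2 = h // size, w // size
    return feat[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

feat = np.arange(16.0).reshape(4, 4)
pooled = max_downsample(feat)   # each spatial dimension is halved
```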
In some specific examples, step S230 may select the feature images of at least one feature processing unit before the down-sampling and at least one feature processing unit after the down-sampling for fusion processing.
For example, as shown in Fig. 8, the images received in step S230 come respectively from the feature processing unit 51 located before the down-sampling unit 54, a feature processing unit 52 located after the down-sampling unit 54 and containing first and second structures, and the feature processing unit 52 located at the end of the convolutional neural network; fusion processing is performed on the feature images output by these three feature processing units to recognize the tumor pathological information.
Here, each feature image received in step S230 may be at least one of: a feature image of the respective feature processing unit after activation processing, one that has been filtered but not normalized, and one that has been normalized but not activated.
In some embodiments, as shown in Fig. 9, step S230 is performed by a recognition device including third filters 631, up-sampling modules 632 and a second merging module 633.
The number of third filters 631 is related to the number of feature images received by the recognition device. To improve recognition efficiency, the recognition device adopts the manner in which each third filter 631 individually receives one feature image, so that the computer equipment can process them in parallel. The third filter 631 performs classification processing on the received feature image. Each third filter 631 may traverse the received feature image with a step length of 1.
Here, multiple filter windows for classification are preset in the third filter 631; by traversing the received feature image with each filter window separately, classification processing is performed on each point of the feature image. For example, the third filter contains a filter window for the tumor pathology class and a filter window for the non-tumor-pathology class; after the received feature image is traversed by these two filter windows, the same pixel in the two output feature images respectively represents the probability of belonging and of not belonging to the lesion class. As another example, the third filter contains filter windows for the first tumor pathology class, the second tumor pathology class, ..., the Nth tumor pathology class and the non-lesion class; after the received feature image is traversed by each window, the same pixel in each output feature image represents the probability of belonging to each tumor pathology class and of not belonging to the lesion class. Here, each tumor pathology class and the lesion class are tumor pathological features that serve as the basis for subsequent recognition.
Each feature image classified by a third filter 631 is delivered to an individually connected up-sampling module 632. The up-sampling module 632 is used to recover the image output by the corresponding third filter 631 to the size of the tumor pathological image.
It should be noted that the third filter may also first classify each pixel according to its class probability after filtering to obtain one feature image, which is then enlarged by the up-sampling module 632.
Each up-sampling module 632 configures its up-sampling window size according to the size of the feature image it receives, and fills the feature image by traversing the window. The up-sampling module 632 may recover the image by interpolation or by replication.
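Recovery by replication can be sketched as follows: each value of the small feature image fills a factor × factor window of the recovered image. The 2× factor is an assumption for illustration:

```python
import numpy as np

def upsample_copy(feat, factor):
    """Recover size by replication: each value fills a factor x factor window."""
    return np.repeat(np.repeat(feat, factor, axis=0), factor, axis=1)

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = upsample_copy(small, 2)   # 2x2 feature image recovered to 4x4
```

Interpolation (e.g. bilinear) would instead blend neighbouring values, giving smoother recovered probability maps at slightly higher cost.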
The second merging module 633 is connected with all the up-sampling modules 632 and merges the recovered feature images. The second merging module may merge the recovered feature images directly. In some embodiments, the second merging module 633 merges the recovered feature images according to preset weights, where each weight may be set according to evaluations of many aspects, such as the position of the feature processing unit in the neural network and the dimension of the extracted tumor pathological features. The weights may also be set on the basis of prior machine learning.
Each pixel of the merged feature image describes the probabilities of all the classes; by marking each pixel with the pathological information represented by the class whose weighted probability is largest, the tumor pathological information in the tumor pathological image is obtained for each pixel.
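The weighted merge and per-pixel class selection can be sketched as follows; the number of units, the unit weights, the class count and the random probability maps are illustrative assumptions:

```python
import numpy as np

# Hypothetical per-class probability maps (H x W x classes) recovered from
# two feature processing units, merged with preset unit weights.
rng = np.random.default_rng(1)
maps = [rng.random((4, 4, 3)), rng.random((4, 4, 3))]
weights = [0.4, 0.6]
merged = sum(w * m for w, m in zip(weights, maps))
# for each pixel, mark the class whose weighted probability is largest
labels = merged.argmax(axis=-1)
```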
Step S230 may also include the step of evaluating the tumor pathological information in the tumor pathological image based on the tumor pathological features corresponding to each pixel of the merged feature image.
An evaluation weight is preset for each tumor pathological feature; the tumor pathological information of a pixel is determined by weighting the tumor pathological features corresponding to that pixel. Further, the color corresponding to each piece of tumor pathological information may be rendered on the tumor pathological image in the manner of a heat map, which facilitates diagnostic reference by the doctor.
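One way to sketch the heat-map rendering: blend the grayscale pathological image with a colored overlay whose intensity encodes the weighted pathology score at each pixel. The red-channel mapping and the alpha value are assumptions, not the patent's prescribed color scheme:

```python
import numpy as np

def render_heatmap(gray, score, alpha=0.5):
    """Blend a grayscale pathological image (H, W, values in [0,1]) with a
    red overlay whose intensity encodes the weighted pathology score."""
    rgb = np.stack([gray, gray, gray], axis=-1)
    overlay = np.zeros_like(rgb)
    overlay[..., 0] = score                    # red channel = pathology score
    return (1 - alpha) * rgb + alpha * overlay

gray = np.full((2, 2), 0.5)                    # toy pathological image
score = np.array([[0.0, 1.0],
                  [0.0, 0.0]])                 # one suspicious pixel
img = render_heatmap(gray, score)              # (2, 2, 3) RGB image
```

A doctor viewing `img` would see the high-score pixel tinted red against the unchanged tissue background.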
The tumor pathological information recognized in step S230 may be superimposed on the tumor pathological image and saved as a new image, or packaged together with the tumor pathological image into an independent file in a manner similar to map data and saved.
The recognition method of the tumor pathological image also performs step S240: displaying the recognized tumor pathological image containing the pathological information.
The recognition system of the tumor pathological image includes the display device used by the doctor to view the pathological image. The display device may be connected to the computer equipment used in the doctor's consultation, or to the conference system of a medical consultation. The display device reads the image storage device and displays the tumor pathological image with its tumor pathological information to the doctor and other relevant personnel.
For example, according to the colors corresponding to the pieces of tumor pathological information marked in step S230, the colors are rendered on the tumor pathological image so that the doctor can diagnose the lesion according to the colors.
It should be noted that, through the above description of the embodiments, those skilled in the art can clearly understand that part or all of the present application can be realized by software in combination with a necessary general hardware platform. Based on this understanding, the part of the technical solution of the present application that in essence contributes to the prior art can be embodied in the form of a software product; the computer software product may include one or more machine-readable media storing machine-executable instructions which, when executed by one or more machines such as a computer, a computer network or other electronic equipment, cause the machine(s) to perform operations according to the embodiments of the present application. The machine-readable media may include, but are not limited to, floppy disks, optical discs, CD-ROM (compact disc read-only memory), magneto-optical disks, ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present application can be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or realize particular abstract data types. The present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
It should be noted that, as will be understood by those skilled in the art, the above-mentioned components may be programmable logic devices, including one or more of: programmable array logic (Programmable Array Logic, PAL), generic array logic (Generic Array Logic, GAL), field-programmable gate array (Field-Programmable Gate Array, FPGA) and complex programmable logic device (Complex Programmable Logic Device, CPLD); the present invention does not particularly limit this.
The above embodiments only illustrate the principle of the present application and its effects, and are not intended to limit the present application. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present application. Therefore, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical thought disclosed by the present application shall be covered by the claims of the present application.

Claims (20)

  1. A recognition system of a pathological image, characterized by comprising:
    an image receiving device, for receiving a pathological image to be recognized;
    a feature extraction device, for performing feature extraction on the pathological image using multi-stage feature processing units arranged in a convolutional neural network;
    a recognition device, for performing fusion processing on the feature images output by at least two stages of feature processing units, and recognizing the pathological information in the pathological image based on the pathological features in the fused image.
  2. The recognition system of the pathological image according to claim 1, characterized in that the feature processing unit includes at least one group of first structures composed of a first filter, a normalization module and an activation module; wherein the first filter performs pathological feature extraction on the received image according to a preset step length.
  3. The recognition system of the pathological image according to claim 2, characterized in that the first filter performs pathological feature extraction on the received image using dilated convolution.
  4. The recognition system of the pathological image according to claim 3, characterized in that the feature processing unit including the first structure is located at the end of the convolutional neural network.
  5. The recognition system of the pathological image according to claim 1, characterized in that the feature processing unit includes at least one group of second structures composed of a second filter, a normalization module, a first merging module and an activation module.
  6. The recognition system of the pathological image according to claim 5, characterized in that the first merging module merges the received feature images output by at least two normalization modules; or
    the first merging module merges the feature image output by the normalization module and the feature image output by the preceding-stage feature processing unit.
  7. The recognition system of the pathological image according to claim 1, characterized in that the feature extraction device also includes a down-sampling unit located between two stages of feature processing units, for performing down-sampling processing on the received feature image.
  8. The recognition system of the pathological image according to claim 1, characterized in that the recognition device includes:
    third filters, individually connected with the feature processing units, each third filter performing classification processing on the received feature image;
    up-sampling modules, individually connected with the third filters, for recovering the image output by the corresponding third filter to the size of the pathological image;
    a second merging module, connected with each up-sampling module, for merging the recovered feature images.
  9. The recognition system of the pathological image according to claim 7, characterized in that the recognition device includes: a recognition module, for evaluating the pathological information in the pathological image based on the pathological features corresponding to each pixel of the merged feature image.
  10. A recognition system of a tumor pathological image, characterized by comprising:
    an image storage device, for storing a tumor pathological image to be recognized;
    the recognition system according to any one of claims 1-9, for recognizing the pathological information in the tumor pathological image;
    a display device, for displaying the recognized tumor pathological image containing the pathological information.
  11. A pathological image identification method, characterized by comprising:
    obtaining a pathological image to be identified;
    performing feature extraction on the obtained pathological image using multi-stage characteristic processing units arranged in a convolutional neural network; and
    fusing the characteristic images output by at least two stages of the characteristic processing units, and identifying the pathological information in the pathological image based on the pathological features in the fused image.
  12. The pathological image identification method according to claim 11, characterized in that the characteristic processing unit comprises at least one first structure consisting of a first filter, a normalization module and an activation module; wherein the first filter performs pathological feature extraction on the received image according to a preset stride.
  13. The pathological image identification method according to claim 12, characterized in that the first structure comprises a first filter that performs pathological feature extraction on the received image according to a preset dilation rate.
  14. The pathological image identification method according to claim 13, characterized in that the characteristic processing unit comprising the first structure is located at least at the last stage of the convolutional neural network.
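Claims 12-14 describe a "first structure" of first filter (a convolution with a preset stride, optionally dilated), normalization module, and activation module. A minimal sketch, assuming PyTorch, 3×3 kernels, batch normalization, and ReLU, none of which the claims actually specify:

```python
import torch
from torch import nn

# Hypothetical "first structure" of claims 12-14: convolution ("first
# filter", with stride and optional dilation), normalization, activation.
class FirstStructure(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, dilation=1):
        super().__init__()
        # "preset step-length" -> stride; "preset dilation rate" -> dilation
        self.filter = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                stride=stride, dilation=dilation,
                                padding=dilation)  # preserves size when stride=1
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.filter(x)))

block = FirstStructure(3, 16, dilation=2)
y = block(torch.randn(1, 3, 64, 64))
print(tuple(y.shape))  # expected (1, 16, 64, 64)
```

With stride 1 and padding equal to the dilation rate, the dilated 3×3 convolution keeps the spatial size while enlarging the receptive field, which is consistent with claim 14 placing such a unit at the last stage of the network.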
  15. The pathological image identification method according to claim 11, characterized in that the characteristic processing unit comprises at least one second structure consisting of a second filter, a normalization module, a first merging module and an activation module.
  16. The pathological image identification method according to claim 15, characterized in that the first merging module merges the images output by the at least two normalization modules from which it receives input; or
    the first merging module merges the image output by the normalization module with the characteristic image output by the previous-stage characteristic processing unit.
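Claims 15-16 add a "second structure" whose first merging module concatenates normalized characteristic images, either from two normalization modules or with the previous stage's output (a dense/skip-style connection). A minimal sketch of the second variant; PyTorch, the channel counts, and the component ordering in `forward` are illustrative assumptions:

```python
import torch
from torch import nn

# Hypothetical "second structure" of claims 15-16: second filter,
# normalization, first merging module (channel concatenation with the
# previous stage's characteristic image), then activation.
class SecondStructure(nn.Module):
    def __init__(self, in_ch, growth):
        super().__init__()
        self.filter = nn.Conv2d(in_ch, growth, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(growth)
        self.act = nn.ReLU(inplace=True)

    def forward(self, prev):
        # normalize the filtered image, merge it with the previous stage's
        # characteristic image, then activate the merged result
        merged = torch.cat([prev, self.norm(self.filter(prev))], dim=1)
        return self.act(merged)

x = torch.randn(1, 8, 32, 32)
y = SecondStructure(8, 4)(x)
print(tuple(y.shape))  # channels grow from 8 to 12, spatial size unchanged
```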
  17. The pathological image identification method according to claim 11, characterized by further comprising the step of downsampling the characteristic image output by the characteristic processing unit.
  18. The pathological image identification method according to claim 11, characterized in that fusing the characteristic images output by at least two stages of the characteristic processing units and identifying the pathological information in the pathological image based on the pathological features in the fused image comprises:
    performing classification on each characteristic image individually using a third filter corresponding to the feature unit of each stage; and
    restoring the image output by each third filter to the size of the pathological image, and merging the restored characteristic images.
  19. The pathological image identification method according to claim 18, characterized by further comprising the step of evaluating the pathological information in the pathological image based on the pathological features corresponding to each pixel of the merged characteristic image.
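Claims 18-19 describe the fusion step: each stage's characteristic image passes through its own "third filter" for classification, is restored to the pathological image's size, and the restored maps are merged before per-pixel evaluation. A minimal sketch assuming PyTorch, 1×1 convolutions as the third filters, bilinear upsampling, and summation as the merge, none of which the claims fix:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Hypothetical fusion of claims 18-19. The third filters here are freshly
# initialized (untrained), so the labels are meaningless; the sketch only
# shows the data flow.
def fuse_and_classify(features, out_size, num_classes=2):
    logits = []
    for feat in features:
        third_filter = nn.Conv2d(feat.shape[1], num_classes, kernel_size=1)
        score = third_filter(feat)                       # classify each stage
        score = F.interpolate(score, size=out_size,      # restore to the
                              mode='bilinear',           # pathological image's
                              align_corners=False)       # size
        logits.append(score)
    fused = torch.stack(logits).sum(dim=0)               # merge restored maps
    return fused.argmax(dim=1)                           # per-pixel label

feats = [torch.randn(1, 16, 32, 32), torch.randn(1, 32, 16, 16)]
labels = fuse_and_classify(feats, out_size=(64, 64))
print(tuple(labels.shape))  # one label per pixel of the 64x64 image
```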
  20. A cancer pathology image identification method, characterized by comprising:
    identifying a cancer pathology image using the identification method according to any one of claims 11-19; and
    displaying the cancer pathology image together with the identified pathological information.
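Putting the method claims together, an end-to-end sketch: a two-stage convolutional network extracts characteristic images, a downsampling unit sits between the stages, per-stage classifiers are restored to the input size and fused, and the result is a per-pixel label map that a display device could overlay on the cancer pathology image. Everything concrete here (PyTorch, layer sizes, two classes, the tiny network) is an illustrative assumption:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Hypothetical end-to-end pipeline in the spirit of claims 11-20.
class TinyPathologyNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                                    nn.BatchNorm2d(8), nn.ReLU())
        self.down = nn.MaxPool2d(2)                      # downsampling unit
        self.stage2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1),
                                    nn.BatchNorm2d(16), nn.ReLU())
        self.head1 = nn.Conv2d(8, num_classes, 1)        # third filter, stage 1
        self.head2 = nn.Conv2d(16, num_classes, 1)       # third filter, stage 2

    def forward(self, img):
        f1 = self.stage1(img)
        f2 = self.stage2(self.down(f1))
        size = img.shape[-2:]
        s1 = F.interpolate(self.head1(f1), size=size,
                           mode='bilinear', align_corners=False)
        s2 = F.interpolate(self.head2(f2), size=size,
                           mode='bilinear', align_corners=False)
        return (s1 + s2).argmax(dim=1)   # fused per-pixel pathology labels

mask = TinyPathologyNet()(torch.randn(1, 3, 64, 64))
print(tuple(mask.shape))  # one label per pixel of the input image
```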
CN201710934902.6A 2017-10-10 2017-10-10 Pathological image identification method and system Active CN107665491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710934902.6A CN107665491B (en) 2017-10-10 2017-10-10 Pathological image identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710934902.6A CN107665491B (en) 2017-10-10 2017-10-10 Pathological image identification method and system

Publications (2)

Publication Number Publication Date
CN107665491A true CN107665491A (en) 2018-02-06
CN107665491B CN107665491B (en) 2021-04-09

Family

ID=61098068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710934902.6A Active CN107665491B (en) 2017-10-10 2017-10-10 Pathological image identification method and system

Country Status (1)

Country Link
CN (1) CN107665491B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389198A (en) * 2018-02-27 2018-08-10 深思考人工智能机器人科技(北京)有限公司 Method for recognizing atypical abnormal glandular cells in a cervical smear
CN108509895A (en) * 2018-03-28 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for detecting face images
CN108776969A (en) * 2018-05-24 2018-11-09 复旦大学 Breast ultrasound image lesion segmentation method based on a fully convolutional network
CN108805918A (en) * 2018-06-11 2018-11-13 南通大学 Pathological image staining invariance low-dimensional representation method based on DCAE structure
CN108846327A (en) * 2018-05-29 2018-11-20 中国人民解放军总医院 Intelligent system and method for distinguishing moles from melanoma
CN109034183A (en) * 2018-06-07 2018-12-18 北京飞搜科技有限公司 Object detection method, apparatus and device
CN109363699A (en) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 Method and device for breast image lesion recognition
CN109447963A (en) * 2018-10-22 2019-03-08 杭州依图医疗技术有限公司 Method and device for brain image recognition
CN109461144A (en) * 2018-10-16 2019-03-12 杭州依图医疗技术有限公司 Method and device for breast image recognition
CN110232394A (en) * 2018-03-06 2019-09-13 华南理工大学 Multi-scale image semantic segmentation method
CN110363210A (en) * 2018-04-10 2019-10-22 腾讯科技(深圳)有限公司 Training method and server for an image semantic segmentation model
TWI677230B (en) * 2018-09-25 2019-11-11 瑞昱半導體股份有限公司 Image processing circuit and associated image processing method
CN110490878A (en) * 2019-07-29 2019-11-22 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111010492A (en) * 2018-10-08 2020-04-14 瑞昱半导体股份有限公司 Image processing circuit and related image processing method
CN111814893A (en) * 2020-07-17 2020-10-23 首都医科大学附属北京胸科医院 Lung full-scan image EGFR mutation prediction method and system based on deep learning
CN111832625A (en) * 2020-06-18 2020-10-27 中国医学科学院肿瘤医院 Full-scan image analysis method and system based on weak supervised learning
CN109784149B (en) * 2018-12-06 2021-08-20 苏州飞搜科技有限公司 Method and system for detecting key points of human skeleton
WO2022028127A1 (en) * 2020-08-06 2022-02-10 腾讯科技(深圳)有限公司 Artificial intelligence-based pathological image processing method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469100A (en) * 2015-11-30 2016-04-06 广东工业大学 Deep learning-based skin biopsy image pathological characteristic recognition method
US20160342888A1 (en) * 2015-05-20 2016-11-24 Nec Laboratories America, Inc. Memory efficiency for convolutional neural networks operating on graphics processing units
CN106203327A (en) * 2016-07-08 2016-12-07 清华大学 Lung tumor identification system and method based on convolutional neural networks
CN106203488A (en) * 2016-07-01 2016-12-07 福州大学 Breast image feature fusion method based on restricted Boltzmann machines
CN106600571A (en) * 2016-11-07 2017-04-26 中国科学院自动化研究所 Brain tumor automatic segmentation method through fusion of full convolutional neural network and conditional random field
CN106682435A (en) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 System and method for automatically detecting lesions in medical image through multi-model fusion
CN106780475A (en) * 2016-12-27 2017-05-31 北京市计算中心 Image processing method and device based on tissue regions in histopathological slide images
US20170262735A1 (en) * 2016-03-11 2017-09-14 Kabushiki Kaisha Toshiba Training constrained deconvolutional networks for road scene semantic segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YASUNORI KUDO et al., "Dilated convolutions for image classification and object localization", 2017 Fifteenth IAPR International Conference on Machine Vision Applications *
GUO Shuxu et al., "Research on liver CT image segmentation based on fully convolutional neural networks", Computer Engineering and Applications *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389198A (en) * 2018-02-27 2018-08-10 深思考人工智能机器人科技(北京)有限公司 Method for recognizing atypical abnormal glandular cells in a cervical smear
CN110232394A (en) * 2018-03-06 2019-09-13 华南理工大学 Multi-scale image semantic segmentation method
CN110232394B (en) * 2018-03-06 2021-08-10 华南理工大学 Multi-scale image semantic segmentation method
CN108509895A (en) * 2018-03-28 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for detecting facial image
CN108509895B (en) * 2018-03-28 2022-09-27 百度在线网络技术(北京)有限公司 Method and device for detecting face image
CN110363210A (en) * 2018-04-10 2019-10-22 腾讯科技(深圳)有限公司 Training method and server for an image semantic segmentation model
CN110363210B (en) * 2018-04-10 2023-05-05 腾讯科技(深圳)有限公司 Training method and server for image semantic segmentation model
CN108776969A (en) * 2018-05-24 2018-11-09 复旦大学 Breast ultrasound image lesion segmentation method based on a fully convolutional network
CN108776969B (en) * 2018-05-24 2021-06-22 复旦大学 Breast ultrasound image tumor segmentation method based on full convolution network
CN108846327A (en) * 2018-05-29 2018-11-20 中国人民解放军总医院 Intelligent system and method for distinguishing moles from melanoma
CN109034183A (en) * 2018-06-07 2018-12-18 北京飞搜科技有限公司 Object detection method, apparatus and device
CN108805918B (en) * 2018-06-11 2022-03-01 南通大学 Pathological image staining invariance low-dimensional representation method based on DCAE structure
CN108805918A (en) * 2018-06-11 2018-11-13 南通大学 Pathological image staining invariance low-dimensional representation method based on DCAE structure
TWI677230B (en) * 2018-09-25 2019-11-11 瑞昱半導體股份有限公司 Image processing circuit and associated image processing method
US11157769B2 (en) * 2018-09-25 2021-10-26 Realtek Semiconductor Corp. Image processing circuit and associated image processing method
CN111010492B (en) * 2018-10-08 2022-05-13 瑞昱半导体股份有限公司 Image processing circuit and related image processing method
CN111010492A (en) * 2018-10-08 2020-04-14 瑞昱半导体股份有限公司 Image processing circuit and related image processing method
CN109363699A (en) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 Method and device for breast image lesion recognition
CN109461144B (en) * 2018-10-16 2021-02-23 杭州依图医疗技术有限公司 Method and device for identifying mammary gland image
CN109461144A (en) * 2018-10-16 2019-03-12 杭州依图医疗技术有限公司 Method and device for breast image recognition
CN109447963A (en) * 2018-10-22 2019-03-08 杭州依图医疗技术有限公司 Method and device for brain image recognition
CN109784149B (en) * 2018-12-06 2021-08-20 苏州飞搜科技有限公司 Method and system for detecting key points of human skeleton
CN110490878A (en) * 2019-07-29 2019-11-22 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111832625B (en) * 2020-06-18 2021-04-27 中国医学科学院肿瘤医院 Full-scan image analysis method and system based on weak supervised learning
CN111832625A (en) * 2020-06-18 2020-10-27 中国医学科学院肿瘤医院 Full-scan image analysis method and system based on weak supervised learning
CN111814893A (en) * 2020-07-17 2020-10-23 首都医科大学附属北京胸科医院 Lung full-scan image EGFR mutation prediction method and system based on deep learning
WO2022028127A1 (en) * 2020-08-06 2022-02-10 腾讯科技(深圳)有限公司 Artificial intelligence-based pathological image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN107665491B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN107665491A (en) Pathological image identification method and system
Kumar et al. An integration of blockchain and AI for secure data sharing and detection of CT images for the hospitals
WO2021082691A1 (en) Segmentation method and apparatus for lesion area of eye oct image, and terminal device
Burdick et al. Rethinking skin lesion segmentation in a convolutional classifier
Oskal et al. A U-net based approach to epidermal tissue segmentation in whole slide histopathological images
CN110120047A (en) Image Segmentation Model training method, image partition method, device, equipment and medium
CN108615236A (en) Image processing method and electronic device
CN111107783A (en) Method and system for computer-aided triage
Alzubaidi et al. Robust application of new deep learning tools: an experimental study in medical imaging
US20220207744A1 (en) Image processing method and apparatus
CN110246109B (en) Analysis system, method, device and medium fusing CT image and personalized information
CN111667468A (en) OCT image focus detection method, device and medium based on neural network
Yao et al. Pneumonia Detection Using an Improved Algorithm Based on Faster R‐CNN
Vij et al. A systematic review on diabetic retinopathy detection using deep learning techniques
CN110364250A (en) Automatic labeling method, system and storage medium for breast molybdenum target (mammography) images
Fu et al. Deep‐Learning‐Based CT Imaging in the Quantitative Evaluation of Chronic Kidney Diseases
Elayaraja et al. An efficient approach for detection and classification of cancer regions in cervical images using optimization based CNN classification approach
Tursynova et al. Brain Stroke Lesion Segmentation Using Computed Tomography Images based on Modified U-Net Model with ResNet Blocks.
CN114782452B (en) Processing system and device of fluorescein fundus angiographic image
CN116129184A (en) Multi-phase focus classification method, device, equipment and readable storage medium
Stanitsas et al. Image descriptors for weakly annotated histopathological breast cancer data
CN115829980A (en) Image recognition method, device, equipment and storage medium for fundus picture
Wang et al. FSOU-Net: Feature supplement and optimization U-Net for 2D medical image segmentation
CN112837324A (en) Automatic tumor image region segmentation system and method based on improved level set
CN110674872B (en) High-dimensional magnetic resonance image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant