WO2019232824A1 - Method and system for biological tissue image recognition, and computer storage medium - Google Patents

Method and system for biological tissue image recognition, and computer storage medium

Info

Publication number
WO2019232824A1
WO2019232824A1 (PCT/CN2018/092499)
Authority
WO
WIPO (PCT)
Prior art keywords
region
biological tissue
recognition
tissue image
parameters
Prior art date
Application number
PCT/CN2018/092499
Other languages
English (en)
French (fr)
Inventor
汪艳
侯金林
冯前进
Original Assignee
南方医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南方医科大学 filed Critical 南方医科大学
Publication of WO2019232824A1 publication Critical patent/WO2019232824A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • the present invention mainly relates to the field of image recognition, in particular to a method for biological tissue image recognition, a system thereof, and a computer storage medium.
  • the invention provides a method for biological tissue image recognition.
  • the method includes the following steps: performing area calculation on a biological tissue image and distinguishing several primary structure regions; based on at least one of the primary structure regions, distinguishing at least one secondary structure region subordinate to any one of the primary structure regions; and based on at least one of the secondary structure regions, distinguishing at least one characteristic region.
  • the invention also provides a biological tissue image recognition system, which includes: a processor for sending instructions to the other parts of the recognition system and executing a preset program to identify structural regions at different levels in the input biological tissue image; a geometric calculation module for receiving instructions from the processor, calculating geometric feature parameters of the input biological tissue image, and feeding them back to the processor; a grayscale processing module for receiving instructions from the processor, processing the gray values of the input biological tissue image, and feeding the result back to the processor; and a condition screening module for receiving instructions from the processor and selecting specified screening conditions to further identify the biological tissue image that has undergone geometric calculation and/or grayscale processing, until at least one characteristic region is identified.
  • the invention also provides a computer storage medium.
  • the computer storage medium includes a biological tissue image recognition program.
  • when the biological tissue image recognition program is executed by a processor, the following steps are implemented: performing area calculation on the biological tissue image and distinguishing several primary structure regions; based on at least one primary structure region, distinguishing at least one secondary structure region subordinate to any of the primary structure regions; and based on at least one secondary structure region, distinguishing at least one characteristic region.
  • the invention provides a biological tissue image recognition method, a system thereof, and a computer storage medium, which can solve some technical problems and have the following advantages:
  • the method and system for biological tissue image recognition according to the present invention can realize multi-level structural region division of biological tissue images, finally obtain the desired feature regions, and provide quantified parameter indicators for both the intermediate stages of recognition and the recognition results; through digital recognition and mathematical quantification, they can provide relatively reliable, unified, and standardized recognition results;
  • the method and system for biological tissue image recognition according to the present invention, through constraints such as feature attribute parameters and condition screening, can relatively accurately define the region or object expected to be identified, avoiding ambiguous identification and judgment and making the identification process more accurate and reliable;
  • by establishing databases and entering typical sample data and statistics of historical sample data, the method and system for biological tissue image recognition according to the present invention can be given a deep-learning capability that improves the reliability of the recognition results;
  • the method and system for biological tissue image recognition according to the present invention can further improve the reliability of the recognition result through pre-processing means or an added pre-processing module, and can also avoid invalid recognition, save system resources, and improve recognition efficiency.
  • FIG. 1 is a diagram showing steps of a method for biological tissue image recognition according to the present invention
  • FIG. 3 is a structural display diagram of a biological tissue image recognition system according to the present invention.
  • FIG. 4 is an image 2 recognized by a biological tissue image recognition method according to the present invention.
  • an image of liver tissue (referred to as "image 1", see Fig. 2) is used as a biological tissue image for image recognition.
  • image 1: an image of liver tissue
  • Please refer to FIG. 1 for the recognition steps of applying the method for biological tissue image recognition to image 1. The steps include:
  • Step S1 Calculate the area of the biological tissue image and distinguish several primary structure regions. Specifically, in this embodiment, area calculation is performed on the image 1 and three primary structure regions are distinguished.
  • the primary structure region specifically includes a cell structure region, a blood vessel structure region, and a fiber structure region.
  • the basis for distinguishing the primary structure regions in step S1 may be parameters suitable for computer recognition, such as image wavelength values or wavelength ranges and light-dark contrast (applicable to black-and-white or color images), or the primary structure regions may be distinguished based on obvious boundaries and distinctive morphological features.
  • Step S2 Based on at least one of the primary structure regions, distinguish at least one secondary structure region subordinate to any of the primary structure regions.
  • the bridged fiber region is obtained based on the fiber structure region discrimination.
  • the bridged fiber region is subordinate to the fiber structure region, that is, one of the secondary structure regions.
  • based on the interrelationship among the cell structure region, the blood vessel structure region, and the fiber structure region, the manifold fiber region, which is also a secondary structure region, is distinguished.
  • the step of distinguishing the secondary structure region from the primary structure region includes: applying a gray value threshold method in each primary structure region to select at least one pixel within a preset threshold range as the recognition object; highlighting the recognition object by increasing its gray value so that it becomes a first candidate structure region; and then obtaining the secondary structure region through conditional screening.
  • fiber pixels are randomly or comprehensively selected as recognition objects in the fiber structure area, and the pixels are within a preset threshold range.
  • the gray value threshold method is applied to the selected fiber pixels. Specifically, the gray values of the fiber pixels are increased to highlight a series of fiber pixel points and form a primary bridging fiber region belonging to the first candidate structure region. The primary bridging fiber region is then taken as the recognition object, and conditional screening is applied to remove regions or objects that do not meet the pre-selected conditions. After this trimming, the bridging fiber region belonging to the secondary structure region is obtained (a minimal sketch of this thresholding and screening step follows).
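  • The following is a minimal Python sketch (not from the patent) of the gray value threshold and conditional-screening step described above; the array name image1, the threshold range, and the minimum-area condition are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_fiber_region(image1, low=80, high=140, min_area=200):
    """Select fiber pixels in a preset gray-value range, highlight them,
    and keep only connected components that pass a simple area screen."""
    # pixels whose gray value falls inside the preset threshold range
    mask = (image1 >= low) & (image1 <= high)

    # "highlight" the selected pixels by raising their gray value,
    # producing the first candidate structure region
    highlighted = image1.copy()
    highlighted[mask] = 255

    # conditional screening: drop connected regions that are too small
    # to be bridging fibers (one possible pre-selected condition)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_area) + 1)
    return highlighted, keep
```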
  • a gray value threshold method is applied to the fiber structure region in combination with the blood vessel structure region.
  • when the gray value of the fiber pixel points in the fiber structure region is increased, this is done between the blood vessel structure regions. In this way, fiber growth during identification is confined to the gaps between blood vessel structure regions, and the identified objects are the bridging fibers.
  • conditional screening identifies bridging fiber regions based on their positional relationship with the blood vessel structure regions, excludes primary bridging fiber regions that do not conform to that positional relationship, and thereby obtains bridging fiber regions that meet the pre-selection criteria.
  • the method further includes a step of fusing at least two adjacent or overlapping primary structure regions.
  • the primary structure region is fused in advance, and the pre-fused primary structure region includes a vascular structure region and a fibrous structure region that are intertwined in the image 1, and are adjacent to each other or in an overlapping state.
  • the fused primary structure regions can be further identified through inter-regional interrelationships.
  • the blood vessel structure region and the fiber structure region are fused with each other to form a primary vessel-fiber region, which belongs to the primary structure regions (a minimal sketch of such a fusion follows).
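  • A minimal sketch of fusing two already-identified regions, assuming each region is available as a boolean mask; the names vessel_mask and fiber_mask are illustrative, not from the patent.

```python
import numpy as np

def fuse_regions(vessel_mask: np.ndarray, fiber_mask: np.ndarray) -> np.ndarray:
    """Fuse two adjacent or overlapping primary structure regions by set
    union, so the combined area can be identified as a whole."""
    return np.logical_or(vessel_mask, fiber_mask)
```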
  • the primary vessel-fiber region is identified by the gray value threshold method. Specifically, fiber pixels that meet the threshold within the primary vessel-fiber region are selected randomly or comprehensively as the recognition object, and their gray values are increased to highlight the first sub-fiber domain, which belongs to the first candidate structure region.
  • the first sub-fiber domain identified by applying the gray value threshold method to fiber pixels has already excluded areas where no bridging fibers exist. Through conditional screening, areas smaller than a specified area are further removed from the first sub-fiber domain to form the second sub-fiber domain.
  • the second sub-fiber domain belongs to a second structural region.
  • the specified area is an average area of one liver cell.
  • in step S2, the conditional screening can be adjusted, transformed, and repeated multiple times to obtain first candidate structure regions of the same level or candidate regions of different levels.
  • the gray value threshold method is applied to the blood vessel structure region: blood vessel pixel points in the blood vessel structure region are selected randomly or comprehensively as the recognition object, and their gray values are increased so that the highlighted pixels form the first sub-vascular domain.
  • the first subvascular region belongs to a first candidate structure region.
  • the gray value threshold method can be used to obtain the first sub-vascular domain where only blood vessels are present.
  • the edge of the blood vessel wall can be calculated by identifying the blood vessel pixels.
  • conditional screening is then applied to the first sub-vascular domain.
  • the conditional screening removes regions whose blood vessel wall thickness is greater than or equal to 5 μm to obtain the second sub-vascular domain, which belongs to the secondary structure regions (see the sketch below).
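  • One possible way to implement the 5 μm wall-thickness screen is sketched below; the use of a distance transform to estimate thickness, as well as the names vessel_wall_mask and um_per_pixel, are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def screen_by_wall_thickness(vessel_wall_mask, um_per_pixel, max_um=5.0):
    """Keep connected wall regions whose estimated thickness stays below 5 um."""
    # distance from each wall pixel to the nearest background pixel;
    # twice the maximum distance inside a region approximates its thickness
    dist = ndimage.distance_transform_edt(vessel_wall_mask) * um_per_pixel
    labels, n = ndimage.label(vessel_wall_mask)
    keep = np.zeros_like(vessel_wall_mask, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        if 2.0 * dist[region].max() < max_um:   # thickness below the cut-off
            keep |= region
    return keep
```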
  • a feature attribute parameter may also be extracted for the recognition object, and the set of feature attribute parameters is entered into the first database according to the classification.
  • in this embodiment, the recognition object is more specifically embodied as a biological tissue, a cell, or a cell bundle formed by cells, including vascular tissue, connective tissue, nerve cells, nerve fibers formed by nerve cells, muscle tissue, collagen fibers, adipocytes, and adipose tissue formed by adipocytes.
  • Feature attribute parameters are extracted for a single or multiple of the identified objects, and the extracted feature attribute parameters may be one item or multiple items.
  • the characteristic attribute parameters may be attribute parameters of tissue morphology and cell morphology, such as cell size, cell arrangement, cell boundary morphology, tissue morphology, and the like.
  • the preset feature attribute parameters in the first database can also be compared with subsequently extracted feature attribute parameters, and the comparison result can be used to compute indicators that reflect the reliability of the recognition result, such as a fault tolerance rate and a reliability measure.
  • the statistical data can also serve as one basis for judging whether the recognition object meets the requirements. A checking mechanism that re-evaluates the judgment or provides statistical data can help those skilled in the art better grasp the actual state of image 1.
  • Step S3 distinguish at least one characteristic region based on at least one secondary structure region.
  • a manifold fiber region is obtained by performing position mapping, and the manifold fiber region belongs to a characteristic region.
  • the second sub-fiber domain and the second sub-vascular domain involved both belong to the secondary structure regions.
  • distinguishing the characteristic region includes at least the step of: based on at least one secondary structure region, combining it with the primary structure region, and then matching against a second database that gathers feature attribute parameters of other specified structural tissues to obtain the characteristic region.
  • a nodular characteristic region is further identified. Before the nodular characteristic region is confirmed, a corresponding matching and identification step is performed against the second database to determine that it conforms to the predetermined feature attribute parameters, which reflect the characteristics of other specified structural tissues (a sketch of such parameter matching follows).
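  • A minimal sketch of matching a candidate region's extracted parameters against stored feature attribute parameters; the parameter names and ranges below are hypothetical and only stand in for entries of the second database.

```python
# hypothetical entry standing in for one record of the second database
NODULE_TEMPLATE = {
    "area_um2": (5_000.0, 500_000.0),
    "circularity": (0.6, 1.0),
}

def matches_template(params: dict, template: dict) -> bool:
    """Return True when every extracted parameter falls inside its stored range."""
    return all(lo <= params.get(name, float("nan")) <= hi
               for name, (lo, hi) in template.items())

# usage: matches_template({"area_um2": 12_000.0, "circularity": 0.8}, NODULE_TEMPLATE)
```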
  • further differentiation can be made by combining regions of different levels.
  • the manifold fiber region combined with the cell structure region can be further distinguished to obtain the pericellular fiber characteristic region.
  • the characteristic region may also be further distinguished from at least one region of any level, or from a recognition object, under specific conditions.
  • the bridged fiber region is mapped to the region between hepatic sinusoidal endothelial cells and hepatocytes to obtain a characteristic region of the sinusoidal fibers.
  • the characteristic attribute parameters of the hepatic sinusoidal endothelial cells and the regions between liver cells referred to under specific conditions can be used to screen the characteristic regions of the sinusoidal fibers.
  • the nodular characteristic region, the pericellular fiber characteristic region, and the sinusoidal fiber characteristic region all belong to the characteristic regions.
  • the feature attribute parameters in the first database and the second database can be optimized and deep-learned by a preset input method, or by collecting intermediate data, final results, and the like that have been previously identified.
  • the characteristic attribute parameters may be attribute parameters of tissue morphology and cell morphology, such as cell size, cell arrangement, cell boundary morphology, tissue morphology, and histological and cytological staining; they may also be interrelationships between specific tissues or cells, and may further include attribute parameters that are distinguishable in data messages and computer-recognizable carriers and that match the characteristics of the acquired image, such as contrast, grayscale, boundaries, and colors.
  • the first database and the second database can also apply the feature attribute parameters in step S1 and subsequent steps.
  • the method for biological tissue image recognition further includes the step of extracting quantization parameters for any of the identified primary structure regions, secondary structure regions, characteristic regions, and recognition objects, and the quantization parameters include at least one of boundary, area, geometric size, and quantity.
  • the time point for extracting the quantization parameter may be immediately after identification, or may be uniformly extracted in the last step.
  • the quantization parameters can be obtained through methods such as boundary extraction and calculus calculation.
  • the quantity in the quantization parameters is the specific number of primary structure regions, secondary structure regions, characteristic regions, or recognition objects; for example, image 1 contains multiple isolated blood vessel structure regions and also multiple nodular characteristic regions. Whenever there are multiple primary structure regions, secondary structure regions, characteristic regions, or recognition objects and a calculation is required, this specific number needs to be counted.
  • the geometric size has different manifestations for different recognition objects.
  • for blood vessels, the geometric dimension refers to quantities such as the blood vessel wall thickness, blood vessel diameter, blood vessel area, and degree of concentration of blood vessels, which reflect the geometric characteristics of the blood vessels;
  • for collagen fibers, the geometric dimension refers to the length of the collagen fibers, the size of the collagen fiber bundles, the area covered by the collagen fibers, the boundary of the collagen fibers, and other dimensions that reflect the geometric characteristics of the collagen fibers and their regions;
  • for the primary structure regions, secondary structure regions, and characteristic regions themselves, the geometric dimension refers to parameters such as the centerline position, the center-of-mass position, the area, the boundary, and the contour, which reflect the geometric characteristics of these regions (a sketch of extracting such parameters follows).
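  • A sketch of extracting quantization parameters (quantity, area, boundary length, center-of-mass position) from an identified region mask, assuming scikit-image is available; this is only one possible measurement routine, not the patent's own implementation.

```python
import numpy as np
from skimage import measure

def quantization_parameters(region_mask: np.ndarray) -> dict:
    """Measure quantity, area, boundary length and centroid of a region mask."""
    labels = measure.label(region_mask)                 # connected components
    props = measure.regionprops(labels)
    return {
        "quantity": len(props),                              # number of regions
        "total_area": float(sum(p.area for p in props)),     # area in pixels
        "perimeters": [float(p.perimeter) for p in props],   # boundary lengths
        "centroids": [p.centroid for p in props],            # centre-of-mass positions
    }
```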
  • the quantization parameters are classified and a mathematical model is established; the classification is based on the attributes of the parameters themselves or on classification logic formed according to the needs of identification. According to the characteristics of the quantization parameters, normalization is performed and then tested, so that the accuracy of the quantization classification can be adjusted repeatedly and the classification logic or the attribute settings of the parameters can be continuously optimized.
  • the quantification parameter related to the blood vessel can be classified as a vascular parameter, and the quantization parameter related to the nodular characteristic region can be classified as a nodule parameter and the like.
  • it can also be classified into different quantization parameter sequences according to whether it has morphological significance or pathological significance, and the quantization parameter sequence can be further refined according to the classification criteria.
  • the classification of quantitative parameters is an important pre-step for mathematical modeling.
  • the accuracy of classification is related to the reliability of subsequent mathematical models.
  • the reliability of the final database affects subsequent identification.
  • a qualitative and quantitative sample can be confirmed by a technician, and related quantitative parameters can be written into a database by the method of biological tissue image recognition for debugging or identification as a classic sample.
  • the confirmed sample can also be judged by the biological tissue image recognition method to check the accuracy of the quantization classification (a small sketch of recording such samples into a database follows).
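  • A minimal sketch of recording the classified quantization parameters of a confirmed sample so they can be reused for debugging or later matching; an in-memory SQLite table stands in for the real database, and the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")      # stands in for the real database
conn.execute("""CREATE TABLE classic_samples (
                    sample_id TEXT, parameter_class TEXT,
                    parameter_name TEXT, value REAL)""")

def record_sample(sample_id, classified_params):
    """classified_params example: {"vascular": {"wall_thickness_um": 3.2}}"""
    rows = [(sample_id, cls, name, val)
            for cls, params in classified_params.items()
            for name, val in params.items()]
    conn.executemany("INSERT INTO classic_samples VALUES (?, ?, ?, ?)", rows)
    conn.commit()
```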
  • in the description function, hθ(x) describes the properties of the biological tissue image, θ is a weight coefficient, x is a feature vector, e is the base of the natural logarithm, and m is the feature vector dimension.
  • the description function is a method of mathematically normalizing and weighting the image 1 and finally reflecting the specific situation of the image 1 with a function value.
  • the weight coefficient ⁇ adjusts the weight ratio between the quantization parameters according to the importance of the quantization parameters themselves and their mutual influence to obtain a description function value that can reflect the actual situation of the image 1.
  • the feature vector x is a numerical value with a vector property that points to the recognition object, the primary structure region, the secondary structure region, and the feature region.
  • the feature vector dimension m refers to an amount that reflects the dimensional features in the feature vector x.
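  • One plausible form of the description function consistent with the listed symbols (weight coefficients θ, an m-dimensional feature vector x, and the natural exponential e) is the logistic hypothesis below; this is an assumption about its likely shape, not the formula given in the patent.

```latex
h_{\theta}(x) = \frac{1}{1 + e^{-\sum_{i=1}^{m} \theta_i x_i}}
```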
  • the method for biological tissue image recognition further includes the step of: performing at least one preprocessing means of color correction, contrast adjustment, ratio adjustment, and deformation repair on the biological tissue image.
  • the quality of the obtained image may be affected by factors such as the quality of the scanning tool such as a camera, the scanning environment, and the like.
  • the scale-adjustment pre-processing can introduce a scale bar when scanning the input image, and a uniformly scaled image 1 is obtained later by recognizing the scale bar and rescaling (a small pre-processing sketch follows).
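  • A small sketch of two such pre-processing steps (contrast stretching and scale normalisation), assuming a grayscale NumPy array and a known micrometre-per-pixel scale; the function and parameter names are illustrative.

```python
import numpy as np
from skimage import exposure, transform

def preprocess(image: np.ndarray, um_per_pixel: float,
               target_um_per_pixel: float = 1.0) -> np.ndarray:
    """Contrast-stretch the image and resample it to a uniform physical scale."""
    # contrast adjustment: stretch intensities to the full display range
    adjusted = exposure.rescale_intensity(image, out_range=(0, 255))
    # proportion adjustment: resample so every input shares one scale
    factor = um_per_pixel / target_um_per_pixel
    return transform.rescale(adjusted, factor, preserve_range=True)
```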
  • a system 3 for biological tissue image recognition (hereinafter referred to as “system 3”) is also provided in this embodiment, please refer to FIG. 3.
  • the system 3 includes at least a processor 301, a geometry calculation module 302, a grayscale processing module 303, and a condition screening module 304.
  • the processor 301 is configured to send instructions to other parts of the recognition system 3 and execute preset programs to identify and distinguish structural regions at different levels with respect to the inputted biological tissue image.
  • the geometric calculation module 302 is configured to receive instructions from the processor 301, calculate geometric feature parameters of an input biological tissue image, and feed the geometric feature parameters to the processor 301.
  • the grayscale processing module 303 is configured to receive instructions from the processor 301, process the grayscale value of the input biological tissue image, and feed it back to the processor 301.
  • the condition screening module 304 is configured to receive instructions from the processor 301 and select specified screening conditions to further identify the biological tissue image that has been subjected to geometric calculation and/or grayscale processing, until at least one characteristic region is distinguished (a structural sketch of these modules follows).
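  • A structural sketch of how these four parts could cooperate, written as a plain Python class; the class and method names are illustrative and not taken from the patent.

```python
class BiologicalTissueRecognitionSystem:
    """Sketch of processor 301 orchestrating modules 302-304."""

    def __init__(self, geometry, grayscale, screening):
        self.geometry = geometry      # geometric calculation module 302
        self.grayscale = grayscale    # grayscale processing module 303
        self.screening = screening    # condition screening module 304

    def recognize(self, image):
        """Split primary regions, threshold, then screen until feature regions emerge."""
        primary = self.geometry.split_primary_regions(image)
        candidates = [self.grayscale.threshold_region(image, r) for r in primary]
        secondary = [self.screening.apply(c) for c in candidates]
        return [self.screening.apply(s) for s in secondary]   # feature regions
```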
  • the processor 301 first receives the data of image 1 and distinguishes the structural regions of each level, especially the primary structure regions. The geometric calculation module 302 calculates the overall area of the biological tissue in image 1 and the areas of the primary structure regions, and feeds the calculation results back to the processor 301. Based on the distinguished primary structure regions, the processor 301 further obtains the secondary structure regions subordinate to them, and distinguishes tertiary structure regions based on the secondary structure regions.
  • the grayscale processing module 303 processes the image 1 processed by the processor 301, interacts with the processor 301, and feeds back the processing result to the processor 301.
  • condition screening module 304 is directly associated with a database (including the first database 305 and the second database 306) and interacts with the processor 301.
  • the condition screening module 304 receives the primary structure region included in the image 1 transmitted by the processor 301, and compares it with the screening conditions of the first database 305 or the second database 306.
  • the condition screening module 304 outputs the result of the condition screening.
  • the result of the condition screening may be a secondary structure region, a characteristic region, a recognition object, and the like.
  • the screening conditions of the condition screening module 304 are selected from the first database 305 and the second database 306.
  • the first database 305 gathers feature attribute parameters extracted from tissues.
  • in this embodiment, the tissues from which the feature attribute parameters are extracted are embodied as biological tissues, cells, or cell bundles formed by cells, including vascular tissue, connective tissue, nerve cells, nerve fibers formed by nerve cells, muscle tissue, collagen fibers, adipocytes, and adipose tissue formed by adipocytes.
  • the second database 306 gathers the feature attribute parameters of other specified structural tissues.
  • the other specified structural tissues may be liver-related nodular tissues, manifold fibrous tissues, bridging fibrous tissues, perisinus fibrous tissues, pericellular fibrous tissues, etc., of course, the other specified structural tissues also include other tissues and organs.
  • the characteristic attribute parameters refer to parameters that reflect the tissue characteristics, such as histomorphology, cell morphology, and boundary characteristics of the tissue.
  • the grayscale processing module 303 applies the gray value threshold method to select pixels within a preset threshold range in at least one primary structure region initially distinguished by the geometric calculation module 302, highlights them as the first candidate structure region by increasing their gray values, and the result is then processed by the condition screening module 304 to obtain the next-level structure region.
  • the grayscale processing module 303 receives an instruction from the processor 301, performs grayscale processing on the transmitted image 1 and the area of the primary structure that has been distinguished, and removes parameters related to color.
  • the geometric calculation module 302 may also divide the primary structure region according to the geometric boundary.
  • the grayscale processing module 303 randomly or comprehensively scans the fiber structure region to select fiber pixels that meet the preset threshold range and increases their gray values; starting from a single fiber pixel, the fiber pixels that meet the threshold are grown and merged to form the first sub-fiber domain.
  • the data of the first sub-fiber domain passes the condition screening module 304, and the condition screening module 304 requests related condition data from the first database 305 and further matches to determine a second sub-fiber domain that meets a preset condition in the first sub-fiber domain.
  • the first sub-fiber domain is a first candidate structural region
  • the second sub-fiber domain is a lower-level structural region.
  • the condition screening includes at least a condition screening determined by any numerical range of position, distance, area, boundary, morphology, contrast, and mutual relationship.
  • the conditional screening may impose a location condition that limits the fiber pixel points of the second sub-fiber domain to lie between the blood vessel objects formed by the blood vessel pixel points.
  • the geometric calculation module 302 is further configured to fuse at least two adjacent or overlapping levels of structural regions to facilitate extraction of geometric feature parameters of the corresponding regions.
  • a set operation is performed on the secondary structure regions through a mathematical operation function of the geometric calculation module 302. More specifically, the overlapping second sub-fiber domain and second sub-vascular domain are fused by a set (union) operation, and this fusion facilitates the extraction of geometric feature parameters of the specific region by the geometric calculation module 302.
  • the geometric calculation module 302 is further configured to extract quantization parameters for the identified structural regions, feature regions, or intermediate recognition regions at each level.
  • the quantization parameters include at least one of a boundary, an area, a geometric size, and a quantity.
  • the structural regions at each level include a primary structural region, a secondary structural region, and a next-level structural region obtained through subsequent identification.
  • the intermediate identification region refers to the first candidate structural region, etc.
  • the intermediate recognition regions are intermediate areas obtained while identifying the structural regions at each level, and they also include some recognition objects used for auxiliary recognition.
  • the boundary, area, geometric size, and quantity may refer to the structural regions at all levels, the intermediate recognition regions, and the characteristic regions as a whole, or to the recognition objects or components within those regions; for example, they may be the boundary contour and area of a single blood vessel structure region, the number and total area of all blood vessel structure regions in image 1, or quantization parameters such as the number and distribution of blood vessel objects constituting a blood vessel structure region.
  • the quantification parameters can be the area, proportion, and contour of the fiber in a single fibrous structure region, and the total area and number of all fibrous structural regions.
  • the geometric size includes at least the blood vessel wall thickness value, the collagen fiber length value, and the centerline position or center-of-mass position of the structural regions at each level and the characteristic regions.
  • the geometric size can characterize the structural regions at all levels, the characteristic regions, the intermediate recognition regions, and the recognition objects, and is an important quantization parameter.
  • the quantization parameters are classified according to the attributes of the parameters and a mathematical model is established. After the normalization process, the test is performed to adjust the accuracy of the quantization classification multiple times.
  • the mathematical model and the normalization results are entered into the third database 307.
  • the quantization parameters are classified more systematically and scientifically because the corresponding objects are different, which facilitates subsequent establishment of a mathematical model.
  • the classification may be classified according to attributes, similarity degree, and engineering needs of the identification object itself.
  • the third database 307 stores the established mathematical models and is used for subsequent identification and data matching. As the identified data entered into the third database 307 gradually accumulates, the recognition results become more reliable.
  • the normalization processing performed by the processor 301 may apply the following formula:
  • where hθ(x) is a description function describing the properties of the biological tissue image, θ is a weight coefficient, x is a feature vector, e is the base of the natural logarithm, and m is the feature vector dimension.
  • the main idea of applying the description function hθ(x) is to quantify complex parameters and feature attributes through normalization, and to use the weight coefficient θ, which reflects the importance of each parameter, as a factor to adjust the description function, so that the output value of hθ(x) describes image 1 itself in a comprehensive, accurate, and focused manner.
  • the independent variable of the description function hθ(x) is the feature vector x, and m represents the dimension of the feature vector x.
  • the system 3 further includes a color correction module 308 for performing color correction on the biological tissue image, a contrast processing module 309 for performing contrast adjustment, a ratio adjustment module 310 for performing proportion adjustment, and a deformation repair module 311 for performing deformation repair.
  • the color correction module 308, the contrast processing module 309, the ratio adjustment module 310, and the deformation repair module 311 are each connected to the processor 301, and before being input to the processor 301, image 1 passes through an adjustment by one of the color correction module 308, the contrast processing module 309, the ratio adjustment module 310, and the deformation repair module 311.
  • when the processor 301 determines that image 1 needs image adjustment processing, it sends a feedback result to the corresponding processing module, and that module returns image 1 together with the related processing data after processing.
  • the related processing data will be finally reflected in the entire image 1 recognition result, which is used to reflect the quality and reliability of image 1.
  • the processor 301 may also give feedback on the recognition result finally output for image 1 according to the pre-processing of image 1, the number of subsequent reprocessing passes, the processing quality, and the processing degree; the feedback may be a suggestion to rescan the image, a notice that the image is unavailable or of low reliability, or a suggestion to discontinue recognition directly, so as to avoid outputting recognition results that do not meet the recognition requirements or have low reliability.
  • the computer-readable storage medium includes a program for biological tissue image recognition.
  • when the program for biological tissue image recognition is executed by the processor 301, the following steps are implemented: performing area calculation on the biological tissue image and distinguishing several primary structure regions; based on at least one primary structure region, distinguishing at least one secondary structure region subordinate to any of the primary structure regions; and based on at least one secondary structure region, distinguishing at least one characteristic region.
  • in this embodiment, the program is recorded in the computer storage medium in the form of a data message.
  • in this embodiment, the computer storage medium refers to a medium that the computer can read, such as a floppy disk, a hard disk, a magnetic tape, a punched tape, or a server.
  • a biological tissue image recognition method is used to identify the image 2.
  • the method includes the following steps:
  • Step S1 Calculate the area of the lung tissue in image 2 and distinguish several primary structure regions.
  • the primary structural regions distinguished in this embodiment include a lung cell region, a pulmonary blood vessel region, a lung trachea region, a lung fiber region, and the like.
  • the lung tissue in image 2 can be preliminarily subdivided into primary structure regions based on obvious histological staining, contrast, boundary contours, state, and the like.
  • Step S2 Based on at least one of the primary structure regions, distinguish at least one secondary structure region subordinate to any of the primary structure regions.
  • step S2 of distinguishing the secondary structure region includes at least the steps of: applying the gray value threshold method in each primary structure region to select at least one pixel within a preset threshold range as the recognition object; the recognition object becomes the first candidate structure region through the increase of its gray value, and the secondary structure region is then obtained through conditional screening.
  • a pulmonary blood vessel region is taken as an example.
  • pulmonary blood vessel pixels within the preset gray value threshold are comprehensively selected as the recognition object, and the gray value threshold method is used to control the growth of these pixels so that the recognition object grows into a complete pulmonary vascular structure; the complete pulmonary vascular structure image is the first child pulmonary blood vessel region, which belongs to the first candidate structure regions (a minimal region-growing sketch follows).
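  • A minimal region-growing sketch for the controlled growth described above, assuming image2 is a grayscale NumPy array and seeds is a list of (row, col) pulmonary-vessel seed pixels; the threshold values are illustrative.

```python
import numpy as np
from collections import deque

def grow_region(image2, seeds, low=100, high=180):
    """Grow from seed pixels, absorbing 4-connected neighbours whose gray
    value lies inside the preset threshold range."""
    grown = np.zeros(image2.shape, dtype=bool)
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        if grown[r, c] or not (low <= image2[r, c] <= high):
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image2.shape[0] and 0 <= cc < image2.shape[1] and not grown[rr, cc]:
                queue.append((rr, cc))
    return grown
```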
  • the pulmonary blood vessel structure in the first child pulmonary blood vessel region is conditionally screened to obtain a second child pulmonary blood vessel region, which belongs to the secondary structure regions.
  • the conditional screening is determined by numerical ranges of position, distance, area, boundary, morphology, and contrast, and excludes regions outside those ranges.
  • pulmonary blood vessels with a larger diameter are excluded by the shape of the pulmonary blood vessels.
  • Conditional screening can also be the state of the pulmonary blood vessels themselves, and in this embodiment, blocked pulmonary blood vessels can be excluded.
  • the conditional screening can also exclude shadowy and flocculent areas of the lung tissue.
  • the characteristic attribute parameters of the identified object are extracted from at least one tissue including blood vessels, connective tissue, nerve cells, muscle tissue, collagen fibers, and adipose tissue, and the set of characteristic attribute parameters is pre-entered into the first database.
  • the characteristic attribute parameters related to blood vessels and collagen fibers as examples, the diameter of a blood vessel, the thickness of a blood vessel wall, the length of collagen fibers, the location of collagen fibers, and the degree of accumulation of collagen fibers can be extracted.
  • the extracted feature attribute parameters will be collected and entered into the first database.
  • the previously obtained feature attribute parameters can be entered into the first database.
  • the method further includes a step of fusing at least two adjacent or overlapping primary structure regions.
  • the lung fiber region is taken as an example.
  • the lung fiber region and the pulmonary blood vessel region are fused to form a joint region.
  • lung fiber pixels are randomly selected as recognition objects in the joint region, and the lung fiber pixels meet a preset gray value threshold.
  • the gray value threshold method is used to control the growth of the lung fiber pixels, the recognition object is grown into a collection of lung fibrous tissue, and the complete lung fibrous tissue structure image becomes the first child lung fiber region, and the first child lung fiber region Belongs to the first candidate structure region.
  • the lung fibrous tissue structure in the first sub-lung fiber region is obtained through conditional screening to obtain a second sub-lung fiber region, and the second sub-lung fiber region belongs to a secondary structure region.
  • for the condition screening of the lung fiber region, conditions such as selecting lung fibers connected between the identified vascular structures or selecting lung fibers having a preset fiber length can be chosen.
  • the lung fibers in the region of the second child lung fiber obtained have the characteristics of bridging between vascular structures.
  • Step S3 distinguish at least one characteristic region based on at least one secondary structure region.
  • distinguishing the characteristic region includes at least the step of: based on at least one secondary structure region, combining it with the primary structure region, and then matching against a second database that gathers the characteristic attribute parameters of other specified structural tissues, whereby the characteristic region is obtained.
  • the other specified structural tissues may be lung-related nodular tissues, fibrous tissues, infected areas, etc.
  • the other specified structural tissues also include specific structures related to other tissues and organs; for example, the method can be extended to image recognition of the small intestine, stomach, kidneys, skin, and the like.
  • the characteristic attribute parameters of the nodular tissue are embodied in this embodiment as characteristic data that can reflect the characteristics of the nodular tissue and distinguish it from other similar tissues, such as the connection relationship parameters between fibrous tissue and lung cells, and the shape of the nodular tissue.
  • the method for biological tissue image recognition further includes extracting quantization parameters for any of the identified primary structure regions, secondary structure regions, characteristic regions, and recognition objects, and the quantization parameters include at least one of boundary, area, geometric size, and quantity.
  • the boundary, area, geometric size, and quantity of the pulmonary blood vessel regions and pulmonary nodule regions can be extracted. It is worth noting that the identified pulmonary blood vessel regions, pulmonary nodule regions, and pulmonary fiber regions may each consist of multiple scattered regions or objects, and the positional relationships among these regions or objects may be discrete, overlapping, or adjacent.
  • the geometric size may be specifically embodied as the wall thickness of the pulmonary blood vessels, the length of the collagen fibers, the coverage area of the lung fibrous tissue, and the centerline position, center-of-mass position, boundary, and contour of the primary structure regions, secondary structure regions, and characteristic regions.
  • the method includes at least the following steps: classifying the quantization parameter according to the attribute of the parameter and establishing a mathematical model, testing after normalization processing, and adjusting the accuracy of the quantization classification multiple times.
  • the quantization parameters are classified according to certain classification rules in this embodiment. Specifically, the quantization parameters may be classified according to the types of tissues and cells reflected by the quantization parameters, or may be classified according to the importance degree and weight ratio.
  • the mathematical model established according to the quantified parameters of the classification is tested for accuracy after normalization. If the accuracy is not satisfactory, the classification rules are readjusted.
  • the normalization process may apply the following formula:
  • where hθ(x) is a description function describing the properties of the biological tissue image, θ is a weight coefficient, x is a feature vector, e is the base of the natural logarithm, and m is the feature vector dimension.
  • the description function hθ(x) is used to describe the state of the lung tissue in image 2.
  • the description focus of the description function h ⁇ (x) will be controlled by controlling the weight coefficient ⁇ , which varies according to the feature vector x, and the dimension of the feature vector x is represented by the feature vector dimension m.
  • the image 2 is pre-processed in advance through pre-processing means.
  • the lung tissue in image 2 has a complicated structure, and a clearer image can be obtained through pre-processing such as contrast adjustment and grayscale adjustment.
  • the pre-processing means may further include color correction, scale adjustment, deformation repair, and the like; standard scale bars can be introduced for proportion adjustment, and standard color cards can be introduced for color trimming.
  • this embodiment further provides a system 3 for biological tissue image recognition, which specifically includes:
  • the processor 301 is configured to send instructions to other parts of the recognition system 3 and execute a preset program to identify and distinguish structural regions at different levels for the input image 2;
  • a geometric calculation module 302 configured to receive instructions from the processor 301, calculate geometric feature parameters of the input image 2, and feed the geometric feature parameters to the processor 301;
  • a grayscale processing module 303 configured to receive an instruction from the processor 301, process the grayscale value of the input image 2, and feed it back to the processor 301;
  • the condition screening module 304 is configured to receive instructions from the processor 301 and select specified screening conditions to further identify image 2 that has been subjected to geometric calculation and/or grayscale processing, until at least one characteristic region is distinguished.
  • the processor 301 and the geometric calculation module 302 distinguish the primary structure region in advance, and the distinguished primary structure region includes a lung cell region, a pulmonary blood vessel region, a lung trachea region, a lung fiber region, and the like.
  • the processor 301 can preliminarily segment the lung tissue in image 2 using obvious histological staining, the contrast recognized by the grayscale processing module 303, the boundary contours recognized by the geometric calculation module 302, the matching conditions determined by the condition screening module 304, and the like.
  • the system 3 recognizes the secondary structure area in each primary structure area.
  • the grayscale processing module 303 uses the grayscale threshold method to select at least one pixel within a preset threshold range as the recognition object.
  • the recognition object becomes the first candidate structure region through the increase of gray value, and then the secondary structure region is obtained through conditional screening.
  • a pulmonary blood vessel region is taken as an example.
  • the grayscale processing module 303 comprehensively selects pulmonary blood vessel pixels within the preset gray value threshold as the recognition object in the pulmonary blood vessel region, and further controls the growth of the pulmonary blood vessel pixel points using the gray value threshold method so that the recognition object grows into a complete pulmonary vascular structure; the complete pulmonary vascular structure image becomes the first child pulmonary blood vessel region.
  • the first child pulmonary blood vessel region belongs to the first candidate structural region.
  • the grayscale processing module 303 feeds back the processing result to the processor 301.
  • the processor 301 sends an instruction and sends the data to be processed to the conditional screening module 304.
  • the pulmonary blood vessel structure in the first child pulmonary blood vessel region is conditionally screened by the conditional screening module 304 to obtain a second child pulmonary blood vessel region.
  • the second subpulmonary vascular region belongs to the secondary structure region.
  • the conditional screening is determined by numerical ranges of position, distance, area, boundary, morphology, and contrast, and excludes regions outside those ranges.
  • pulmonary blood vessels with a larger diameter are excluded by the shape of the pulmonary blood vessels.
  • Conditional screening can also be the state of the pulmonary blood vessels themselves, and in this embodiment, blocked pulmonary blood vessels can be excluded.
  • the conditional screening can also exclude shadowy and flocculent areas of the lung tissue.
  • the characteristic attribute parameters for identifying the object are selected from the first database 305 and the second database 306.
  • the first database 305 includes feature attribute parameters of at least one of blood vessels, connective tissue, nerve cells, muscle tissue, collagen fibers, and adipose tissue.
  • the second database 306 collects feature attribute parameters of other specified structural organizations. In this embodiment, taking the extraction of the characteristic attribute parameters related to blood vessels and collagen fibers as examples, the diameter of a blood vessel, the thickness of a blood vessel wall, the length of collagen fibers, the location of collagen fibers, and the degree of accumulation of collagen fibers can be selected.
  • the selected feature attribute parameters are recorded in the first database 305 and the second database 306 in advance.
  • the geometric calculation module 302 is further configured to fuse at least two adjacent or overlapping levels of structural regions to facilitate extraction of geometric feature parameters of the corresponding regions.
  • the lung fiber region is taken as an example.
  • the geometric calculation module 302 fuses the lung fiber region and the pulmonary blood vessel region to form a joint region, and feeds back the fusion region to the processor 301.
  • the processor 301 transmits the fusion area data to the grayscale processing module 303, and instructs further to randomly select lung fiber pixels in the joint area as recognition objects, and the lung fiber pixels meet a preset grayscale value threshold.
  • the gray value threshold method is used to control the growth of the lung fiber pixels, the recognition object is grown into a collection of lung fibrous tissue, and the complete lung fibrous tissue structure image becomes the first child lung fiber region, and the first child lung fiber region Belongs to the first candidate structure region.
  • the obtained first candidate structure region is returned to the processor 301 again.
  • the processor 301 transmits the data related to the first child lung fiber region to the condition screening module 304, and the lung fibrous tissue structure in the first child lung fiber region is conditionally screened by the condition screening module 304 to obtain the second child lung fiber region, which belongs to the secondary structure regions.
  • conditions such as selecting lung fibers connected between the identified vascular structures, and selecting lung fibers having a preset fiber length can be selected.
  • the lung fibers in the region of the second child lung fiber obtained have the characteristics of bridging between vascular structures.
  • the geometric calculation module 302 is further configured to extract quantization parameters for the identified structural regions, feature regions, or intermediate recognition regions at each level.
  • the quantization parameters include at least one of boundary, area, geometric size, and quantity. More specifically, the geometric size includes at least the blood vessel wall thickness value, the collagen fiber length value, and the centerline position and/or center-of-mass position of the structural regions at each level and the characteristic regions.
  • the processor 301 instructs the geometric calculation module 302 to extract the quantization parameters according to the structural region, the characteristic region, or the intermediate recognition region of each level, the characteristics of the recognition object, and preset requirements.
  • the extraction of quantization parameters for different regions and recognition objects is different. For example, for the pulmonary blood vessel region, the centerline position and / or centroid position of the pulmonary blood vessel region, the region boundary, and the like can be extracted; for the collagen fibers in the pulmonary fiber region, the collagen fiber area and the collagen fiber length can be extracted.
  • the quantization parameters are classified according to the attributes of the parameters and a mathematical model is established. After normalization, the test is performed to adjust the accuracy of the quantization classification multiple times.
  • the mathematical model and the normalization results are entered into the third database 307. Further, the normalization performed by the processor 301 may apply the following formula:
  • where hθ(x) is a description function describing the properties of the biological tissue image, θ is a weight coefficient, x is a feature vector, e is the base of the natural logarithm, and m is the feature vector dimension.
  • the calculation of the description function hθ(x) is performed by the processor 301.
  • the quantization parameters are classified by the processor 301, and the classified quantization parameters can be returned to the first database 305 and the second database 306 to enrich the databases; as the data volume of a database grows, its reliability improves accordingly.
  • the normalized processing result refers to the description function h ⁇ (x), which is one of the important system feedback results.
  • the classification results, mathematical models, and description function values can also be tested by the first database 305, the second database 306, and the third database 307, which is convenient for the system 3 to perform deep learning.
  • the system 3 further includes a color correction module 308 for performing color correction on the biological tissue image, a contrast processing module 309 for performing contrast adjustment, a ratio adjustment module 310 for performing proportion adjustment, and a deformation repair module 311 for performing deformation repair.
  • the processor 301 needs to call the contrast processing module 309 to perform contrast adjustment and obtain a sharper image 2.
  • the processor 301 may also remove unnecessary color components of the image 2 through the color correction module 308 to reduce the processing load of the processor 301 and facilitate the subsequent processing by the grayscale processing module 303.
  • the processor 301 may also instruct the ratio adjustment module 310 and the deformation repair module 311 to process image 2 according to the quality and recognition requirements of image 2.
  • the computer-readable storage medium includes a program for biological tissue image recognition.
  • when the program for biological tissue image recognition is executed by the processor 301, the following steps are implemented: performing area calculation on the biological tissue image and distinguishing several primary structure regions; based on at least one primary structure region, distinguishing at least one secondary structure region subordinate to any of the primary structure regions; and based on at least one secondary structure region, distinguishing at least one characteristic region.
  • the computer storage medium refers to a hard disk built into the identification device, and the inspection device can call a program in the hard disk.
  • the computer storage medium may also be a server with a storage function.
  • when the recognition device needs to apply the program, image 2 is sent to the server through the network and recognition is requested, or the program is downloaded into the recognition device for local recognition.
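Where the storage medium is a remote server, the recognition request could be as simple as an HTTP upload. The endpoint URL and response format below are purely hypothetical; the patent does not define a network protocol.

```python
import requests

def request_remote_recognition(image_path: str,
                               server_url: str = "http://recognition-server.example/api/recognize"):
    # server_url is a placeholder; replace it with the actual recognition service.
    with open(image_path, "rb") as f:
        response = requests.post(server_url, files={"image": f}, timeout=60)
    response.raise_for_status()
    return response.json()   # assumed to contain the distinguished regions and parameters
```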
  • the present invention can at least partially overcome the problems in the prior art.
  • the invention can realize multi-level structural region partitioning of biological tissue images, finally obtain the desired feature regions, and provide quantified parameter indicators for both the intermediate recognition process and the recognition results.
  • through digital recognition and mathematical quantification, it can provide reliable, unified, and standardized recognition results.
  • by restricting conditions and parameters such as feature attribute parameters and condition filtering, it is possible to accurately define the region or object that is expected to be identified, avoid ambiguous identification and judgment, and make the identification process more accurate and reliable.
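Condition filtering of this kind, for example excluding vessels whose wall thickness reaches 5 μm as in the liver-tissue embodiment, can be expressed as a simple predicate over the extracted attributes. The record layout below is an illustrative assumption.

```python
def filter_vessels(vessel_records, max_wall_thickness_um=5.0):
    """Keep vessel regions whose wall thickness stays below the threshold."""
    return [v for v in vessel_records if v["wall_thickness_um"] < max_wall_thickness_um]

candidates = [{"id": 1, "wall_thickness_um": 3.2}, {"id": 2, "wall_thickness_um": 6.1}]
print(filter_vessels(candidates))   # only vessel 1 passes the condition filter
```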
  • by establishing databases and by means of typical samples, historical comparisons, and the like, the present invention can give the method a deep learning capability and improve the reliability of the recognition results.
  • through preprocessing means or preprocessing modules, the reliability of the recognition results can be further improved, invalid recognition can be avoided, system resources can be saved, and recognition efficiency can be improved.
  • besides the liver and lung tissue of the two typical embodiments, the present invention can also be applied to tissues such as stomach, skin, and muscle.
  • the invention can be applied to organs and tissues of humans and animals in laboratory experiments and inspection institutions.
  • the method and system 3 provided by the present invention can be applied to tissues and organs that require large-volume identification in experiments, especially in drug screening, where they can improve recognition efficiency, reduce workload and, more importantly, improve the uniformity and reproducibility of recognition.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for biological tissue image recognition, comprising the following steps: performing area calculation on a biological tissue image and distinguishing several primary structural regions; based on at least one primary structural region, distinguishing at least one secondary structural region subordinate to any of the primary structural regions; and based on at least one secondary structural region, distinguishing at least one feature region. The present invention also provides a system for biological tissue image recognition, comprising a processor, a geometric calculation module, a grayscale processing module, and a condition filtering module, the system being used for biological tissue images. The present invention also provides a computer storage medium that includes a program for biological tissue image recognition. The biological tissue image recognition method of the present invention can achieve effective recognition of biological tissue images, with recognition results of high reliability and stability and high recognition accuracy.

Description

生物组织影像识别的方法及其系统、计算机储存介质 技术领域
本发明主要涉及影像识别领域,特别是涉及一种针对生物组织影像识别的方法及其系统、计算机储存介质。
背景技术
生物组织由于形态复杂、结构繁多,肉眼识别特定的结构、组织存在不少难题。生物组织的影像中经常出现多组织的交织、交叠等干扰判断识别的情况,为识别带来新的挑战。此外,生物组织的影像采集还经常由于采集环境、采集仪器等因素影响,采集的影像也会存在失真、色彩偏差等问题。更重要的是,目前的生物组织以人肉眼观察为主,主观性非常强,经常依赖于判断者的经验,出现遗漏判断、判断不清的情况常见。并且肉眼判断生物组织的性质、量化特征不准确,人对于尺度的把握具有主观性,难以把握统一的尺度和标准,重复再现性比较差。不同的判断者对于组织形态、相关位置的理解存在差异,不能形成统一的标准,即使是同一判断者判断的状态、修习专业知识的熟练程度都会影响判断的结论。
在某些情况下,识别生物组织还需要特殊的染色。染色的质量、时间、次序、样本的状态都会影响染色情况,染色的过程比较难以把控。进一步地,染色有时候会影响部分组织形态,影响判断。
发明内容
本发明提供一种生物组织影像识别的方法,所述方法包括以下步骤:对生物组织图像进行面积计算,并区分出若干个一级结构区域;基于至少一个所述一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;基于至少一个所述二级结构区域,区分出至少一个特征区域。
本发明还提供了一种生物组织影像识别的系统,其包括:处理器,用于向该识别系统的其他部分发送指令,执行预设的程序,以针对所输入的生物组织图像进行各级结构区域的识别区分;几何计算模块,用于接受所述处理器的指令,计算所输入的生物组织图像的几何特征参数,并且反馈 至所述处理器;灰度处理模块,用于接受所述处理器的指令,处理所输入的生物组织图像的灰度值,并且反馈至所述处理器;条件筛选模块,用于接受所述处理器的指令,选取指定的筛选条件以对进行了几何计算和/或灰度处理的所述生物组织图像进行进一步的识别,直至区分出至少一个特征区域。
本发明还提供了一种计算机储存介质,所述计算机存储介质中包括一种生物组织影像识别的程序,所述生物组织影像识别的程序被处理器执行时,实现如下步骤:对生物组织图像进行面积计算,并区分出若干个一级结构区域;基于所述至少一个一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;基于所述至少一个二级结构区域,区分出至少一个特征区域。
本发明提供了一种生物组织影像识别的方法及其系统、计算机储存介质能够较好解决部分技术问题,并具有下述优点:
(1)本发明所涉及的一种生物组织影像识别的方法及其系统,能对生物组织图像实现多级结构区域划分,最终获得期望的特征区域,并围绕识别中间过程、识别结果能提供量化参数指标,通过数字化识别和数学量化,能提供相对可靠、统一、标准化的识别结果;
(2)本发明所涉及的一种生物组织影像识别的方法及其系统,通过特征属性参数、条件筛选等条件和参数的限制,能相对准确界定期望识别的区域或对象,避免模糊不清的识别和判断,使识别过程更精确可靠;
(3)本发明所涉及的一种生物组织影像识别的方法及其系统,通过建立数据库,通过输入典型样本数据、统计历史样本数据,能够赋予所述方法以深度学习的功能,提升识别结果的可靠性;
(4)本发明所涉及的一种生物组织影像识别的方法及其系统,通过预处理手段或者增加预处理模块,能够进一步提升识别结果的可靠性,还可以避免无效识别,节约系统资源,提升识别效率。
附图说明
本发明上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本发明一种生物组织影像识别的方法步骤展示图;
图2为本发明一种生物组织影像识别的方法识别的影像1;
图3为本发明一种生物组织影像识别的系统的结构展示图;
图4为本发明一种生物组织影像识别的方法识别的影像2。
具体实施方式
下面详细描述本发明的实施例,所述实施例的示例在附图中示出。下面通过参考附图描述的实施例是示例性的,仅用于解释本发明,而不能解释为对本发明的限制。为了便于展示所述一种生物组织影像识别的方法及其系统、计算机储存介质的实际应用,下述实施例将引入方法具体的操作步骤,使所述生物组织影像识别的方法及其系统、计算机储存介质的应用和效果展示更充分和便于理解,值得注意的是,本发明的保护范围不受所限。
实施例1
为展示一种生物组织影像识别的方法,在本实施例中引入一幅肝脏组织的图像(称为“影像1”,见图2)作为生物组织图像,进行影像识别。对影像1应用所述一种生物组织影像识别的方法执行识别步骤请参考图1,步骤包括:
步骤S1:对生物组织图像进行面积计算,并区分出若干个一级结构区域。具体地,在本实施例中,对影像1进行面积计算,并且区分出三个一级结构区域,一级结构区域在本实施例中具体包括细胞结构区域、血管结构区域、纤维结构区域。
在步骤S1中区分一级结构区域的依据可以是适用于黑白或彩色影像的影像波长值或波长值范围、明暗对比度等便于计算机识别的参数,也可以是根据明显的边界、特殊的形态学特征以区别所述一级结构区域。
步骤S2:基于至少一个所述一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域。具体地在本实施例中,基于纤维结构区域区分得到桥连纤维区域。所述桥连纤维区域从属于所述纤维结构区域,即为二级结构区域之一。基于细胞结构区域、血管结构区域、纤维胶原结构区域三者的相互关系,区分得到同样是二级结构区域的汇管纤维 区域。
更具体地,所述从一级结构区域中区分得到二级结构区域的步骤:在每个一级结构区域中应用灰度值阀值法选取至少一个在预设阀值范围内的像素点作为识别对象,所述识别对象通过灰度值增长凸显成为第一候选结构区域,再通过条件筛选得到二级结构区域。
在本实施例中,在纤维结构区域随机或全面选取纤维像素点作为识别对象,所述像素点在预设的阀值范围内。针对选取的纤维像素点应用灰度值阀值法,具体地将所述纤维像素点通过灰度值增长,凸显一系列的纤维像素点集合并形成属于第一候选结构区域的初级桥连纤维区域。继续基于初级桥连纤维区域作为识别对象,进一步应用条件筛选,去除不符合预设条件筛选的区域或对象,修整后就得到了属于二级结构区域的桥连纤维区域。
进一步地在本实施例中,所述纤维结构区域结合血管结构区域应用灰度值阀值法。纤维结构区域的纤维像素点进行灰度值增长时,在所述血管结构区域之间进行。此举,将纤维的增长识别限制在血管结构区域的间隙间进行,识别得到的对象即为桥连纤维。
进一步地在本实施例中,所述条件筛选根据血管结构区域之间的位置关系识别得到桥连纤维区域,将不符合血管结构区域之间位置关系的初桥连纤维区域予以排除,其后得到的桥连纤维区域即符合预设条件筛选。
更进一步地,所述步骤S2之前,即区分二级结构区域的步骤之前,还包括步骤:融合至少两个相邻或交叠的所述一级结构区域。具体在本实施例中,所述一级结构区域预先融合,预先融合的一级结构区域包括在影像1中相互交织的血管结构区域、纤维结构区域,彼此相邻或者呈现交叠状态。融合的一级结构区域可以进一步通过区域间相互关系,继续进行识别。所述血管结构区域与纤维结构区域相互融合,形成初级管区纤维区域,所述初级管区纤维区域属于一级结构区域。
进一步对初级管区纤维区域以灰度值阀值法识别。具体地,随机或全面在初级管区纤维区域中选取符合阀值的纤维像素点作为识别对象,针对纤维像素点进行灰度值增长凸显成为第一子纤维域,所述第一子纤维域属 于第一候选结构区域。通过对纤维像素点的灰度值阀值法识别的到的第一子纤维域,已经去除未存在桥连纤维的区域。通过条件筛选,第一子纤维域进一步去除小于指定面积的区域,形成第二子纤维域。所述第二子纤维域属于第二结构区域。在本实施例中,所述指定面积为一个肝细胞的平均面积。
更进一步地,所述条件筛选在S2步骤中的区分可以通过调整、变换、多次重复得到同一层级的第一候选结构区域或不同层级的候选区域。在本实施例中,对血管结构区域应用灰度值阀值法,随机或全面选取血管结构区域中的血管像素点作为识别对象,针对血管像素点进行灰度值增长凸显成为第一子血管域,所述第一子血管域属于第一候选结构区域。通过应用灰度值阀值法,能够获得只存在血管的第一子血管域,同时能够通过血管像素点的识别,实现边缘提取,进一步计算出血管壁厚度值。针对第一子血管域继续应用条件筛选,所述条件筛选为除去血管壁厚度值大于或等于5μm血管对应的区域,即得到第二子血管域,所述第二子血管域属于第二结构区域。
更具体地,针对所述识别对象还可以提取特征属性参数,所述特征属性参数的集合,按照分类录入至第一数据库中。所述识别对象更具体地在本实施例中体现为生物组织或者细胞、细胞形成的细胞束,即包括血管组织、结缔组织、神经细胞、神经细胞形成的神经纤维、肌肉组织、胶原纤维、脂肪细胞、脂肪细胞形成的脂肪组织。针对单一或多个所述识别对象提取特征属性参数,所提取的特征属性参数可以为一项,也可以是多项。在本实施例中,所述特征属性参数可以是组织形态学、细胞形态学的属性参数,如细胞大小、细胞排布、细胞边界形态、组织形态等。
仅在本实施例中,所述第一数据库中还可以通过预设的特征属性参数与后续提取的特征属性参数进行比对,比对结果可以计算入容错率、可靠性等反映识别结果可靠性的统计学数据中,也可以作为辅助判断识别对象是否符合的依据之一。再次判断的检验机制或者提供统计学数据,可以帮助本领域技术人员更好地把握影像1的实际状态。
步骤S3:基于至少一个所述二级结构区域,区分出至少一个特征区 域。具体地在本实施例中,基于第二子纤维域以及第二血管域,进行位置映射即获得汇管纤维区域,所述汇管纤维区域属于特征区域。此处,结合的第二子纤维域以及第二血管域均属二级结构区域。
更具体地,区分特征区域至少包括步骤:基于至少一个二级结构区域,结合所述一级结构区域,再通过集合其他指定结构组织的特征属性参数的第二数据库的对应匹配,得到所述特征区域。具体地在本实施例中,基于所述桥连纤维区域结合细胞结构区域,进一步识别得到结节特征区域。所诉识别结节特征区域之前还通过第二数据库的执行对应匹配的识别步骤,确定结节特征区域符合既定的特征属性参数,所述特征属性参数反映其他指定结构组织的特征。
在本实施例中,基于特征区域还可以继续通过结合不同层级的区域进行进一步区分。例如,所述汇管纤维区域结合细胞结构区域可以进一步区分得到胞周纤维特征区域。
在本实施例中,所述特征区域还可以特定的条件进一步区分出至少一级区域或识别对象。例如,所述桥连纤维区域与肝血窦内皮细胞以及肝细胞之间的区域进行映射,得到窦周纤维特征区域。特定条件所指即肝血窦内皮细胞以及肝细胞之间区域的特征属性参数,即可用于筛选窦周纤维特征区域。所述结节特征区域、胞周纤维特征区域、窦周纤维特征区域属于所述特征区域。
进一步地,所述第一数据库和第二数据库中的特征属性参数,可以通过预设录入的方式,也可以通过收集历次识别的中间数据、最终结果等方式进行优化和深度学习。所述特征属性参数可以是组织形态学、细胞形态学的属性参数,如细胞大小、细胞排布、细胞边界形态、组织形态、组织学和细胞学染色等结构本身具有的属性参数,还可以是特定组织或细胞之间的相互关系,也可以包括由于采集影像而获得的在数据电文、计算机可识别载体上有区分性质的属性参数,如对比度、灰度、边界、色彩以及属性参数之间相互配合的特征。所述第一数据库以及第二数据库同样可以将特征属性参数应用在步骤S1以及其后的步骤中。
更具体地,所述生物组织影像识别的方法还包括步骤,针对识别得到 的一级结构区域、二级结构区域、特征区域、识别对象的任意一种进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。
在本实施例中,提取所述量化参数的时间点可以是识别后立刻提取,也可以在最后步骤统一提取。所述量化参数通过边界提取、微积分计算等方式可以获得。所述量化参数中所指数量是所述一级结构区域、二级结构区域、特征区域、识别对象之一的具体数量,例如所述影像1中存在多个隔离的血管结构区域、影像1中也存在多个结节特征区域,这些一级结构区域、二级结构区域、特征区域、识别对象如出现多个并且有计算需求时,需要计算具体数量。
进一步地,所述几何尺寸在本实施例中,针对不同的识别对象有不同的体现。以血管为例,所述几何尺寸指血管壁厚值、血管直径、血管区域、血管集中程度等反映血管几何特征的尺寸数值;以胶原纤维为例,所述几何尺寸值胶原纤维长度、形成胶原纤维束的密集程度、胶原纤维布满面积、胶原纤维边界等反映胶原纤维及其区域的几何特征的尺寸数值;以一级结构区域、二级结构区域、特征区域为例,所述几何尺寸指前述三者的中心线位置、质心位置、面积、边界、轮廓等反映三者几何方面特征的参数。
进一步地,对所述量化参数进行分类并建立数学模型,所述分类的依据是参数本身的属性或者根据识别的需要形成的分类逻辑。根据量化参数的特点,进行归一化处理再进行测试,以多次调整量化分类的正确率,继续优化分类逻辑或参数本身的属性设置。在本实施例中,所述与血管相关的量化参数即可归为血管类参数,以结节特征区域相关的量化参数可以归为结节参数等。在本实施例中,还可以根据是否具有形态学意义、病理学意义,分类为不同的量化参数序列,在量化参数序列中继续按照分类标准进一步细化。量化参数进行分类是进行数学模型化的重要前置步骤,分类的准确性关系到了后续数学模型建立的可靠性,最后数据库(包括第一数据库、第二数据库)的可靠性,进而影响其后的识别。在本实施例中,还可以通过技术人员添加确认定性、定量的样本,通过所述生物组织影像识别的方法,将相关量化参数写入数据库中,用作调试或者作为经典样本对 照识别。确定的样本通过所述生物组织影像识别的方法还可以判断,量化分类的正确率。
更进一步地,所述归一化处理可以应用以下公式:
h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
其中,h θ(x)是描述生物组织图像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。所述描述函数是将影像1通过数学归一化和权重统计的方式,将最后以函数值的方式反映影像1的具体情形。所述权重系数θ根据量化参数本身的重要程度以及相互之间的影响,调整量化参数之间权重比例以获得能够反映影像1实际情况的描述函数值。所述特征向量x是指针对识别对象、一级结构区域、二级结构区域、特征区域的具有向量性质的数值。所述特征向量维度m是指反映所述特征向量x中维度特征的量。所述归一化处理能够将复杂以及数量繁多的参数简化为简单的数值,便于技术人员的快速高效利用所述描述函数,进行后续工作。
具体地,所述生物组织影像识别的方法还包括步骤:对所述生物组织图像进行色彩修正、对比度调节、比例调节、变形修复的至少一种预处理手段。在本实施例中,影像1有可能因为摄像头等扫描工具的质量、扫描环境等因素影响所获得影像的质量,具体可能存在影响识别的色彩失真、边界不明显、影像模糊、图像变形等情况。针对识别不同的影像还可能存在比例失真、由于调焦影响比例等问题。调整比例的预处理手段在本实施例中可以在扫描输入图像时就引入比例尺,后期通过识别、比例换算获得统一尺度的影像1。
本实施例中还提供了一种生物组织影像识别的系统3(下称“系统3”),请参考图3。所述系统3至少包括处理器301、几何计算模块302、灰度处理模块303、条件筛选模块304。所述处理器301用于向该识别系统3的其他部分发送指令,执行预设的程序,以针对所输入的生物组织图像进行各级结构区域的识别区分。所述几何计算模块302用于接受所述处理器301的指令,计算所输入的生物组织图像的几何特征参数,并且反馈至所述处理器301。所述灰度处理模块303用于接受所述处理器301的指令, 处理所输入的生物组织图像的灰度值,并且反馈至所述处理器301。所述条件筛选模块304用于接受所述处理器301的指令,选取指定的筛选条件以对进行了几何计算和/或灰度处理的所述生物组织图像进行进一步的识别,直至区分出至少一个特征区域。
具体在本实施例中,所述处理器301先接收影像1的数据,并针对影像1预先区分各级结构区域,尤其是一级结构区域。并将影像1整体和初分的一级结构区域通过几何计算模块302计算生物组织整体面积以及一级结构区域面积,几何计算模块302将计算结果反馈至所述处理器301。所述处理器301进一步根据一级结构区域区分得出从属于一级结构区域的二级结构区域,根据二级结构区域区分三级结构区域。所述灰度处理模块303处理经过处理器301处理的影像1,与处理器301之间发生交互,反馈处理结果至处理器301。
更具体地,所述条件筛选模块304与数据库(包括第一数据库305、第二数据库306)直接关联,与处理器301之间发生交互。条件筛选模块304接收处理器301传输的影像1中包含的一级结构区域,并通过与第一数据库305或第二数据库306的筛选条件比对,所述条件筛选模块304将条件筛选的结果输出至处理器301,条件筛选的结果可以是二级结构区域、特征区域、识别对象等。
所述条件筛选模块304的筛选条件选自于第一数据库305和第二数据库306。所述第一数据库305中集合了提取自组织的特征属性参数。本实施例中提取特征属性参数的组织体现为生物组织或者细胞、细胞形成的细胞束,即包括血管组织、结缔组织、神经细胞、神经细胞形成的神经纤维、肌肉组织、胶原纤维、脂肪细胞、脂肪细胞形成的脂肪组织。所述第二数据库306集合的是其他指定结构组织的特征属性参数。所述其他指定结构组织可以是肝脏相关的结节组织、汇管纤维组织、桥连纤维组织、窦周纤维组织、胞周纤维组织等,当然所述其他指定结构组织还包括其他组织器官相关的特定结构。所述特征属性参数指的是有关于所述组织的组织形态学、细胞形态学、边界特征等反映所述组织特征的参数。
具体地,所述灰度处理模块303应用灰度值阀值法在至少一个通过所 述几何计算模块302初步区分出的一级结构区域中,选取在预设阀值范围内的像素点,通过灰度值增长凸显出第一候选结构区域,再通过所述条件筛选模块304处理得到下一级结构区域。在本实施例中,灰度处理模块303接收来自处理器301的指令,对传输的影像1以及已经区分得到的一级结构区域执行灰度处理,去除跟色彩有关的参数。除了可以应用处理器301进行分区获得所述一级结构区域,也可以由几何计算模块302根据几何边界划分一级结构区域。以一个一级结构区域——纤维结构区域为例,所述灰度处理模块303在纤维结构区域中随机或全面扫面选取符合预设阀值范围内的纤维像素点,通过灰度值增长的手段在单个纤维像素点的基础上,将符合阀值的纤维像素点生长而结合形成第一子纤维域。第一子纤维域的数据通过条件筛选模块304,条件筛选模块304向第一数据库305请求相关条件数据并进一步匹配,确定在第一子纤维域中符合预设条件的第二子纤维域。所述第一子纤维域即第一候选结构区域,第二子纤维域即下一级结构区域。
具体地,所述条件筛选至少包括位置、距离、面积、边界、形态、对比度以及相互关系的任一数值范围确定的条件筛选。为了得到第二子纤维域所进行的条件筛选,在本实施例中,所述条件筛选可以是限制所述第二子纤维域的纤维像素点位于血管像素位点所形成的血管对象之间的位置条件。
具体地,所述几何计算模块302还用于融合至少两个相邻或交叠的各级结构区域,以便于对应区域的几何特征参数的提取。在本实施例中,所述第二结构区域通过几何计算模块302的数学运算功能,进行合集运算。更具体地,区分得到,交叠的第二子纤维域与第二血管域通过合集运算融合,融合后便于几何计算模块302的特定区域的几何特征参数的提取。
具体地,所述几何计算模块302还用于针对识别得到的各级结构区域、特征区域、或中间识别区域进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。在本实施例中,所述各级结构区域包括一级结构区域、二级结构区域以及后续识别得到的下一层级结构区域,所述中间识别区域指的是所述第一候选结构区域等为了辅助识别获得 各级结构区域的中间区域,还包括一些用于辅助识别的识别对象。
具体地,所述量化参数中,边界、面积、几何尺寸、数量可以是各级结构区域、中间识别区域、特征区域整体的边界、面积、几何尺寸、数量,还可以是中间的识别对象或者组成区域的对象的边界、面积、几何尺寸、数量。在本实施例中,以血管结构区域为例,量化参数可以是单个血管结构区域的边界轮廓、面积,整个影像1的血管结构区域的数量、总面积,组成血管结构区域的血管对象的数量、分布等量化参数。以纤维结构区域为例,量化参数可以是单个纤维结构区域纤维布满的区域、比例、边界轮廓,全部纤维结构区域的总面积、数量。
更具体地,所述几何尺寸至少包括中血管的壁厚值、胶原纤维的长度值以及各级结构区域、特征区域的中心线位置或质心位置。几何尺寸可以确定各级结构区域、特征区域、中间识别区域、识别对象的特征,是重要的量化参数。
进一步地,所述量化参数按照参数的属性进行分类并且建立数学模型,归一化处理后进行测试,以多次调整量化分类的正确率,所述数学模型、归一化处理结果反馈录入至第三数据库307。所述量化参数由于对应的对象不同,因此更系统、更科学地进行分类便于后续建立数学模型。在本实施例中,所述分类可以根据识别对象本身的属性、相似程度、工程需要进行分类。所述第三数据库307存储所述建立的数学模型,所述第三数据库307用于后续识别的数据匹配和比对,当识别并输入第三数据库307的数据逐渐增多时,识别的结果也会更加可靠。
更进一步地,所述处理器301进行归一化处理可以应用以下公式:
h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
其中,h θ(x)是描述生物组织影像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。应用描述函数h θ(x)的主要思想是通过归一化处理将复杂的参数、特征属性等量化,以体现参数重要性的权重系数θ作为调整描述描述函数的因子,使描述函数h θ(x)的输出值能够全面、准确、有侧重地描述影像1本身。所述描述函数h θ(x)中自变量是代表特征向量的x,m表示的是作为特征向量x的向量 维度。
具体地,所述系统3还包括用于对所述生物组织图像进行色彩修正的色彩修正模块308、进行对比度调节的对比度处理模块309、进行比例调节的比例调节模块310、进行变形修复的变形修复模块311。在本实施例中,所述色彩修正模块308、对比度处理模块309、比例调节模块310、变形修复模块311分别与处理器301连接,所述影像1在输入处理器301之前将先经过色彩修正模块308、对比度处理模块309、比例调节模块310、变形修复模块311之一的调整。当处理器301判断影像1出现需要执行图像调整处理时,所述处理器301将反馈结果输入至对应处理的模块,对应的模块经过处理后将影像1和相关处理数据返回。相关处理数据最后会在整个影像1识别结果中得到体现,用于反映影像1的质量以及可靠程度。所述处理器301还可以根据预处理、后续对影像1的再处理次数、处理质量、处理程度,在影像1最后输出的识别结果中反馈,反馈结果可以是重新扫描图像识别、图像不可用、图像可靠度低等反馈建议,或建议直接中止识别,以避免输出不符合识别要求和可靠程度低的识别结果。
本实施例还提供了一种计算机储存介质,所述计算机可读存储介质中包括一种生物组织影像识别的程序,所述生物组织影像识别的程序被处理器301执行时,实现如下步骤:
对生物组织图像进行面积计算,并区分出若干个一级结构区域;
基于所述至少一个一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;
基于所述至少一个二级结构区域,区分出至少一个特征区域。
具体地,所述计算机储存介质在本实施例中指的是通过数据电文形式记载在计算机存储介质中,所述计算机存储介质在本实施例中指的是软盘、硬盘、磁带、孔带或是存在服务器等以计算机能够识别的方式的介质中。
实施例2
在本实施例中,应用一种生物组织影像识别的方法识别影像2,影像2请参见图4。具体地,请参考图1,所述方法包括以下步骤:
步骤S1:对影像2中的肺部组织进行面积计算,并区分出若干个一 级结构区域。在本实施例中区分的一级结构区域包括肺细胞区域、肺血管区域、肺气管区域、肺纤维区域等。所述一级结构区域可以通过明显的生物组织学染色、对比度、边界轮廓、状态等对影像2中的肺部组织进行初步细分。
步骤S2:基于至少一个所述一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域。
所述步骤S2中区分二级结构区域至少包括步骤:在每个一级结构区域中应用灰度值阀值法选取至少一个在预设阀值范围内的像素点作为识别对象,所述识别对象通过灰度值增长凸显成为第一候选结构区域,再通过条件筛选得到二级结构区域。
在本实施例中具体地,以肺血管区域为例。在肺血管区域中全面选取在预设灰度值阀值内的肺血管像素点作为识别对象,进一步以灰度值阀值法控制增长所述肺血管像素点,将识别对象增长为完整的肺血管结构,并将完整的肺血管结构图像成为第一子肺血管区域,第一子肺血管区域属于第一候选结构区域。更进一步地,所述第一子肺血管区域中的肺血管结构通过条件筛选得到第二子肺血管区域,第二子肺血管区域属于二级结构区域。针对肺血管区域的识别,所述条件筛选除了包括位置、距离、面积、边界、形态、对比度之一的数值范围,条件筛选中,通过肺血管的形态排除血管直径较大的肺血管。条件筛选还可以是肺血管本身的状态,在本实施例中可以将堵塞的肺血管排除。所述条件筛选还可以将肺部组织存在阴影、絮状的区域进行排除。
具体地识别对象的特征属性参数提取自包括血管、结缔组织、神经细胞、肌肉组织、胶原纤维、脂肪组织的中至少一种组织,所述特征属性参数的集合预录入至第一数据库中。在本实施例中,以提取血管、胶原纤维相关的特征属性参数为例,可以提取血管直径、血管壁厚度值、胶原纤维长度、胶原纤维所处位置、胶原纤维堆积程度等。所提取的特征属性参数将集合并录入第一数据库中。同时根据以往的数据,可以预先将之前获得的特征属性参数录入第一数据库中。
更具体地,所述区分二级结构区域的步骤之前,还包括步骤:融合至 少两个相邻或交叠的所述一级结构区域。
在本实施例中再以肺纤维区域为例。首先融合所述肺纤维区域以及肺血管区域形成联合区域,进一步在联合区域中随机选取肺纤维像素点作为识别对象,所述肺纤维像素点符合预设的灰度值阀值。以灰度值阀值法控制增长所述肺纤维像素点,将识别对象增长为肺纤维组织的集合,并将完整的肺纤维组织结构图像成为第一子肺纤维区域,第一子肺纤维区域属于第一候选结构区域。更进一步地,所述第一子肺纤维区域中的肺纤维组织结构通过条件筛选得到第二子肺纤维区域,第二子肺纤维区域属于二级结构区域。针对肺纤维区域的条件筛选,可以是选取连接在识别的到的血管结构之间的肺纤维、选取纤维长度达到预设值的肺纤维等条件。通过区域融合以及条件筛选,获得第二子肺纤维区域中的肺纤维具有桥连于血管结构之间的特点。
步骤S3:基于至少一个所述二级结构区域,区分出至少一个特征区域。
具体地,所述区分特征区域至少包括步骤:基于所述至少一个二级结构区域,结合所述一级结构区域,再通过集合了其他指定结构组织的特征属性参数的第二数据库的对应匹配,得到所述特征区域。所述其他指定结构组织可以是肺部相关的结节组织、纤维组织、侵染区域等,所述其他指定结构组织还包括其他组织器官相关的特定结构,例如可以推广适用到小肠、胃、肾、皮肤等的影像识别。
更具体地在本实施例中,基于识别得到的所述第二子肺纤维区域结合肺细胞区域,通过位置映射等手段再通过集合了结节组织的特征属性参数的第二数据库的对应匹配,获得肺结节区域。所述结节组织的特征属性参数在本实施例中体现为纤维组织与肺细胞的连接关系参数、结节组织形态等能够反映结节组织特征并能将其与其他相似组织区分的特征数据。
具体地,所述生物组织影像识别的方法还包括针对识别得到的一级结构区域、二级结构区域、特征区域、识别对象的任一种进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。在本实施例中,可以提取所述肺血管区域、肺结节区域的边界、面积、几何 尺寸以及数量。值得注意的是,所述肺血管区域、肺结节区域、肺纤维区域以及其中识别对象的数量有可能是多个分散的区域或对象,区域或对象之间的位置关系可以是离散、交叠、邻接的一种或数种。
更具体地,所述几何尺寸可以具体体现为肺血管的壁厚值、胶原纤维的长度值、肺纤维组织覆盖范围以及一级结构区域、二级结构区域、特征区域的中心线位置以及质心位置,便于更好地勾勒边界和轮廓。
所述方法至少包括以下步骤:对所述量化参数按照参数的属性进行分类并且建立数学模型,归一化处理后进行测试,以多次调整量化分类的正确率。所述量化参数在本实施例中根据一定的分类规则进行分类,具体可以根据量化参数反映的组织、细胞的类别进行归类,也可以根据重要程度、权重比例进行分类。根据分类的量化参数建立的数学模型,进行归一化处理后测试正确率,若正确率不合要求,则重新调整分类规则。
具体地,所述归一化处理可以应用以下公式:
h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
其中,h θ(x)是描述生物组织图像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。所述描述函数h θ(x)用于描述影像2中的肝组织的状态。描述函数h θ(x)具体描述的描述侧重点将通过控制权重系数θ控制,所述权重系数θ根据特征向量x的不同而变化,特征向量x的维度由特征向量维度m表示。
具体地,所述影像2还预先通过预处理手段进行预处理,针对影像2的具体特点,影像2中的肺组织由于结构比较复杂,可以通过对比度的调节、灰度的调节等预处理获得更清晰的影像2。除此之外,在其它可能的实施方式中,预处理手段还可以包括色彩修正、比例调节、变形修复等。针对比例调节可以引入标准尺,针对色彩修整可以引入标准色卡
请参考图3,本实施例还提供了一种生物组织影像识别的系统3,具体包括:
处理器301,用于向该识别系统3的其他部分发送指令,执行预设的程序,以针对所输入的影像2进行各级结构区域的识别区分;
几何计算模块302,用于接受所述处理器301的指令,计算所输入的 影像2的几何特征参数,并且反馈至所述处理器301;
灰度处理模块303,用于接受所述处理器301的指令,处理所输入的影像2的灰度值,并且反馈至所述处理器301;
条件筛选模块304,用于接受所述处理器301的指令,选取指定的筛选条件以对进行了几何计算和/或灰度处理的所述影像2进行进一步的识别,直至区分出至少一个特征区域。
在本实施例中,所述处理器301结合几何计算模块302预先区分出一级结构区域,区分的一级结构区域包括肺细胞区域、肺血管区域、肺气管区域、肺纤维区域等。处理器301可以通过明显的生物组织学染色、由灰度处理模块303识别的对比度、由几何计算模块302识别的边界轮廓、由条件筛选模块304确定匹配的状态等对影像2中的肺部组织进行初步细分。
所述系统3识别二级结构区域是在每个一级结构区域中,灰度处理模块303应用灰度值阀值法选取至少一个在预设阀值范围内的像素点作为识别对象,所述识别对象通过灰度值增长凸显成为第一候选结构区域,再通过条件筛选得到二级结构区域。在本实施例中具体地,以肺血管区域为例。灰度处理模块303在肺血管区域中全面选取在预设灰度值阀值内的肺血管像素点作为识别对象,进一步以灰度值阀值法控制增长所述肺血管像素点,将识别对象增长为完整的肺血管结构,并将完整的肺血管结构图像成为第一子肺血管区域,第一子肺血管区域属于第一候选结构区域,灰度处理模块303将该处理结果反馈至处理器301。
更进一步地,处理器301将指令并将需要处理的数据发送给条件筛选模块304,所述第一子肺血管区域中的肺血管结构通过条件筛选模块304的条件筛选得到第二子肺血管区域,第二子肺血管区域属于二级结构区域。针对肺血管区域的识别,所述条件筛选除了包括位置、距离、面积、边界、形态、对比度之一的数值范围,条件筛选中,通过肺血管的形态排除血管直径较大的肺血管。条件筛选还可以是肺血管本身的状态,在本实施例中可以将堵塞的肺血管排除。所述条件筛选还可以将肺部组织存在阴影、絮状的区域进行排除。
具体地识别对象的特征属性参数选取自第一数据库305和第二数据 库306,第一数据库305中包括血管、结缔组织、神经细胞、肌肉组织、胶原纤维、脂肪组织的中至少一种组织的特征属性参数,所述第二数据库306集合了其他指定结构组织的特征属性参数。在本实施例中,以提取血管、胶原纤维相关的特征属性参数为例,可以选取血管直径、血管壁厚度值、胶原纤维长度、胶原纤维所处位置、胶原纤维堆积程度等。所选取的特征属性参数预先录入第一数据库305、第二数据库306中。
更具体地,所述几何计算模块302还用于融合至少两个相邻或交叠的各级结构区域,以便于对应区域的几何特征参数的提取。
在本实施例中再以肺纤维区域为例。首先几何计算模块302融合所述肺纤维区域以及肺血管区域以形成联合区域,并将融合区域反馈至处理器301。处理器301将融合区域数据传至灰度处理模块303,并指令进一步在联合区域中随机选取肺纤维像素点作为识别对象,所述肺纤维像素点符合预设的灰度值阀值。以灰度值阀值法控制增长所述肺纤维像素点,将识别对象增长为肺纤维组织的集合,并将完整的肺纤维组织结构图像成为第一子肺纤维区域,第一子肺纤维区域属于第一候选结构区域。将得到的第一候选结构区域再次返回处理器301中。
更进一步地,处理器301将所述第一子肺纤维区域相关数据传至条件筛选模块304,第一子肺纤维区域中的肺纤维组织结构通过条件筛选模块304的筛选得到第二子肺纤维区域,第二子肺纤维区域属于二级结构区域。针对肺纤维区域的条件筛选,可以是选取连接在识别的到的血管结构之间的肺纤维、选取纤维长度达到预设值的肺纤维等条件。通过区域融合以及条件筛选,获得第二子肺纤维区域中的肺纤维具有桥连于血管结构之间的特点。
所述几何计算模块302还用于针对识别得到的各级结构区域、特征区域、或中间识别区域进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。更具体地,所述几何尺寸至少包括中血管的壁厚值、胶原纤维的长度值以及各级结构区域、特征区域的中心线位置和/或质心位置。
在本实施例中,所述处理器301指令几何计算模块302根据各级结构 区域、特征区域、或中间识别区域、识别对象的特点和预设的要求进行量化参数的提取。针对不同的区域、识别对象的量化参数提取有所不同。例如针对肺血管区域可以提取肺血管区域的中心线位置和/或质心位置、区域边界等;针对肺纤维区域中的胶原纤维可以提取胶原纤维面积、胶原纤维长度等。
更具体地,所述量化参数按照参数的属性进行分类并且建立数学模型,归一化处理后进行测试,以多次调整量化分类的正确率,所述数学模型、归一化处理结果反馈录入至第三数据库307。进一步地,所述处理器301进行归一化处理可以应用以下公式:
h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
其中,h θ(x)是描述生物组织影像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。所述描述函数h θ(x)将由所述处理器301执行运算。
在本实施例中,所述量化参数经由处理器301的分类处理,可以将分类的量化参数回输至第一数据库305、第二数据库306中以充实数据库,数据体量大的数据库,可靠性也会随之提升。归一化处理结果所指是描述函数h θ(x),是重要的系统反馈结果之一。所述分类结果、数学模型、描述函数值还可以通过第一数据库305、第二数据库306、第三数据库307的检验,便于系统3进行深度学习。
具体地,所述系统3还包括用于对所述生物组织图像进行色彩修正的色彩修正模块308、进行对比度调节的对比度处理模块309、进行比例调节的比例调节模块310、进行变形修复的变形修复模块311中的至少一种。针对影像2的肺组织而言,由于肺组织的结构较为复杂,因此需要处理器301调用对比度处理模块309进行对比度的调节一伙的更清晰的影像2。处理器301还可以通过色彩修正模块308将影像2去除不必要的色彩构成,以减少处理器301的处理负担,并且便于后续灰度处理模块303的处理。所述处理器301还可以根据影像2的质量、识别要求,指令比例调节模块310、变形修复模块311对影像2进行处理。
本实施例还提供了一种计算机储存介质,所述计算机可读存储介质中 包括一种生物组织影像识别的程序,所述生物组织影像识别的程序被处理器301执行时,实现如下步骤:
对生物组织图像进行面积计算,并区分出若干个一级结构区域;
基于所述至少一个一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;
基于所述至少一个二级结构区域,区分出至少一个特征区域。
具体在本实施例中,所述计算机储存介质指的是内置于识别设备的硬盘,检验设备可以调用硬盘中的程序。在其它可能的实现方式中,所述计算机储存介质还可以是具有储存功能的服务器,当识别设备需要应用所述程序时,通过网络将影像2向服务器发送并请求识别,或是将所述程序载入至识别设备中,进行本地识别。
通过本发明以及两个典型的实施例,可以发现本发明能够至少部分克服现有技术中存在的问题。本发明能对生物组织图像实现多级结构区域,最终获得期望的特征区域,并围绕识别中间过程、识别结果能提供量化参数指标,通过数字化识别和数学量化,能提供可靠、统一、标准化的识别结果。进一步地,通过特征属性参数、条件筛选等条件和参数的限制,能准确界定期望识别的区域或对象,避免模糊不清的识别和判断,使识别过程更精确可靠。再者,本发明通过建立数据库,通过典型样本、历史对比等手段,能够赋予所述方法以深度学习的功能,提升识别结果的可靠性。此外,通过预处理手段、预处理模块,能够进一步提升识别结果的可靠性,还可以避免无效识别,节约系统资源,提升识别效率。本发明除了应用在两个典型实施例中涉及肝组织、肺组织之外,还可应用在胃、皮肤、肌肉等组织中。本发明可以应用在实验室实验、检验机构检验中人体、动物的器官、组织。对实验中的组织、器官需要大体量识别场合中,尤其是在药物筛选过程中,均可适用本发明所提供的方法和系统3,可以提高识别效率、减轻工作量,更重要的是提升识别的统一性和重复再现性。
以上所述仅是本发明的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以作出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。

Claims (22)

  1. 一种生物组织影像识别的方法,其特征在于,所述方法包括以下步骤:
    对生物组织图像进行面积计算,并区分出若干个一级结构区域;
    基于至少一个所述一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;
    基于至少一个所述二级结构区域,区分出至少一个特征区域。
  2. 根据权利要求1所述生物组织影像识别的方法,其特征在于,所述区分二级结构区域至少包括步骤:在每个一级结构区域中应用灰度值阀值法选取至少一个在预设阀值范围内的像素点作为识别对象,所述识别对象通过灰度值增长凸显成为第一候选结构区域,再通过条件筛选得到二级结构区域。
  3. 根据权利要求2所述生物组织影像识别的方法,其特征在于,所述条件筛选至少包括位置、距离、面积、边界、形态、对比度之一的数值范围。
  4. 根据权利要求2所述生物组织影像识别的方法,其特征在于,所述区分二级结构区域的步骤之前,还包括步骤:融合至少两个相邻或交叠的所述一级结构区域。
  5. 根据权利要求2所述生物组织影像识别的方法,其特征在于,所述识别对象的特征属性参数提取自包括血管、结缔组织、神经细胞、肌肉组织、胶原纤维、脂肪组织的中至少一种组织,所述特征属性参数的集合预录入至第一数据库中。
  6. 根据权利要求5所述生物组织影像识别的方法,其特征在于,所述区分特征区域至少包括步骤:基于所述至少一个二级结构区域,结合所述一级结构区域,再通过集合了其他指定结构组织的特征属性参数的第二数据库的对应匹配,得到所述特征区域。
  7. 根据权利要求2所述生物组织影像识别的方法,其特征在于,所述方法还包括针对识别得到的一级结构区域、二级结构区域、特征区域、识 别对象的任一种进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。
  8. 根据权利要求7所述生物组织影像识别的方法,其特征在于,所述几何尺寸至少包括识别对象中血管的壁厚值、胶原纤维的长度值以及一级结构区域、二级结构区域、特征区域的中心线位置和/或质心位置。
  9. 根据权利要求7所述生物组织影像识别的方法,其特征在于,所述方法至少包括以下步骤:对所述量化参数按照参数的属性进行分类并且建立数学模型,归一化处理后进行测试,以多次调整量化分类的正确率。
  10. 根据权利要求9所述生物组织影像识别的方法,其特征在于,所述归一化处理应用以下公式:
    h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
    其中,h θ(x)是描述生物组织图像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。
  11. 根据权利要求1所述生物组织影像识别的方法,其特征在于,所述方法至少还包括以下步骤:对所述生物组织图像进行色彩修正、对比度调节、比例调节、变形修复的至少一种预处理手段。
  12. 一种生物组织影像识别的系统,其特征在于,其包括:
    处理器,用于向该识别系统的其他部分发送指令,执行预设的程序,以针对所输入的生物组织图像进行各级结构区域的识别区分;
    几何计算模块,用于接受所述处理器的指令,计算所输入的生物组织图像的几何特征参数,并且反馈至所述处理器;
    灰度处理模块,用于接受所述处理器的指令,处理所输入的生物组织图像的灰度值,并且反馈至所述处理器;
    条件筛选模块,用于接受所述处理器的指令,选取指定的筛选条件以对进行了几何计算和/或灰度处理的所述生物组织图像进行进一步的识别,直至区分出至少一个特征区域。
  13. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述筛选条件的选取基于第一数据库和第二数据库,其中,所述第一数据库集合了提取自包括血管、结缔组织、神经细胞、肌肉组织、胶原纤维、脂 肪组织的中至少一种组织的特征属性参数;所述第二数据库集合了其他指定结构组织的特征属性参数。
  14. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述灰度处理模块应用灰度值阀值法在至少一个通过所述几何计算模块初步区分出的一级结构区域中,选取在预设阀值范围内的像素点,通过灰度值增长凸显出第一候选结构区域,再通过所述条件筛选模块处理得到下一级结构区域。
  15. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述筛选条件至少包括位置、距离、面积、边界、形态、对比度之一的数值范围。
  16. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述几何计算模块还用于融合至少两个相邻或交叠的各级结构区域,以便于对应区域的几何特征参数的提取。
  17. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述几何计算模块还用于针对识别得到的各级结构区域、特征区域、或中间识别区域进行量化参数的提取,所述量化参数包括边界、面积、几何尺寸、数量的至少一种参数。
  18. 根据权利要求17所述生物组织影像识别的系统,其特征在于,所述几何尺寸至少包括中血管的壁厚值、胶原纤维的长度值以及各级结构区域、特征区域的中心线位置和/或质心位置。
  19. 根据权利要求18所述生物组织影像识别的系统,其特征在于,所述量化参数按照参数的属性进行分类并且建立数学模型,归一化处理后进行测试,以多次调整量化分类的正确率,所述数学模型、归一化处理结果反馈录入至第三数据库。
  20. 根据权利要求19所述生物组织影像识别的系统,其特征在于,所述处理器进行归一化处理应用以下公式:
    h_θ(x) = 1 / (1 + e^(-(θ_1·x_1 + θ_2·x_2 + ... + θ_m·x_m)))
    其中,h θ(x)是描述生物组织影像性质的描述函数,θ是权重系数,x是特征向量,e为自然对数,m是特征向量维度。
  21. 根据权利要求12所述生物组织影像识别的系统,其特征在于,所述系统还包括用于对所述生物组织图像进行色彩修正的色彩修正模块、进行对比度调节的对比度处理模块、进行比例调节的比例调节模块、进行变形修复的变形修复模块中的至少一种。
  22. 一种计算机储存介质,其特征在于,所述计算机存储介质中包括一种生物组织影像识别的程序,所述生物组织影像识别的程序被处理器执行时,实现如下步骤:
    对生物组织图像进行面积计算,并区分出若干个一级结构区域;
    基于所述至少一个一级结构区域,区分出至少一个从属于任一所述一级结构区域的二级结构区域;
    基于所述至少一个二级结构区域,区分出至少一个特征区域。
PCT/CN2018/092499 2018-06-06 2018-06-22 生物组织影像识别的方法及其系统、计算机储存介质 WO2019232824A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810574640.1 2018-06-06
CN201810574640.1A CN108648193B (zh) 2018-06-06 2018-06-06 生物组织影像识别的方法及其系统、计算机储存介质

Publications (1)

Publication Number Publication Date
WO2019232824A1 true WO2019232824A1 (zh) 2019-12-12

Family

ID=63751950

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/092499 WO2019232824A1 (zh) 2018-06-06 2018-06-22 生物组织影像识别的方法及其系统、计算机储存介质

Country Status (2)

Country Link
CN (1) CN108648193B (zh)
WO (1) WO2019232824A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447964A (zh) * 2018-10-23 2019-03-08 上海鹰瞳医疗科技有限公司 眼底图像处理方法及设备
CN110210308B (zh) * 2019-04-30 2023-05-02 南方医科大学南方医院 生物组织图像的识别方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1891155A (zh) * 2006-05-26 2007-01-10 北京思创贯宇科技开发有限公司 一种基于ct图像的组织成分分析方法
US20110190626A1 (en) * 2010-01-31 2011-08-04 Fujifilm Corporation Medical image diagnosis assisting apparatus and method, and computer readable recording medium on which is recorded program for the same
CN102208105A (zh) * 2010-03-31 2011-10-05 富士胶片株式会社 医学图像处理技术
CN106023144A (zh) * 2016-05-06 2016-10-12 福建工程学院 在断层影像中分割股骨的方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256670A (zh) * 2008-03-20 2008-09-03 华南师范大学 序列图像三维可视化的方法及装置
CN104809730B (zh) * 2015-05-05 2017-10-03 上海联影医疗科技有限公司 从胸部ct图像提取气管的方法和装置
CN106296664B (zh) * 2016-07-30 2019-10-08 上海联影医疗科技有限公司 血管提取方法
CN107045721B (zh) * 2016-10-24 2023-01-31 东北大学 一种从胸部ct图像中提取肺血管的方法及装置
CN107038705B (zh) * 2017-05-04 2020-02-14 季鑫 视网膜图像出血区域分割方法、装置和计算设备
CN107403201A (zh) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 肿瘤放射治疗靶区和危及器官智能化、自动化勾画方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1891155A (zh) * 2006-05-26 2007-01-10 北京思创贯宇科技开发有限公司 一种基于ct图像的组织成分分析方法
US20110190626A1 (en) * 2010-01-31 2011-08-04 Fujifilm Corporation Medical image diagnosis assisting apparatus and method, and computer readable recording medium on which is recorded program for the same
CN102208105A (zh) * 2010-03-31 2011-10-05 富士胶片株式会社 医学图像处理技术
CN106023144A (zh) * 2016-05-06 2016-10-12 福建工程学院 在断层影像中分割股骨的方法

Also Published As

Publication number Publication date
CN108648193A (zh) 2018-10-12
CN108648193B (zh) 2023-10-31

Similar Documents

Publication Publication Date Title
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
JP7030423B2 (ja) 画像解析方法、装置、プログラムおよび深層学習アルゴリズムの製造方法
US10115191B2 (en) Information processing apparatus, information processing system, information processing method, program, and recording medium
US7949181B2 (en) Segmentation of tissue images using color and texture
CN108346145A (zh) 一种病理切片中非常规细胞的识别方法
JP6791245B2 (ja) 画像処理装置、画像処理方法及び画像処理プログラム
CN108537751B (zh) 一种基于径向基神经网络的甲状腺超声图像自动分割方法
WO2017150194A1 (ja) 画像処理装置、画像処理方法及びプログラム
US20200134831A1 (en) Segmenting 3d intracellular structures in microscopy images using an iterative deep learning workflow that incorporates human contributions
CN110838094B (zh) 病理切片染色风格转换方法和电子设备
CN110310291A (zh) 一种稻瘟病分级系统及其方法
WO2019232824A1 (zh) 生物组织影像识别的方法及其系统、计算机储存介质
Oscanoa et al. Automated segmentation and classification of cell nuclei in immunohistochemical breast cancer images with estrogen receptor marker
CN113129281B (zh) 一种基于深度学习的小麦茎秆截面参数检测方法
WO2019171909A1 (ja) 画像処理方法、画像処理装置及びプログラム
AU2006237611B2 (en) Method of analyzing cell structures and their components
WO2017145172A1 (en) System and method for extraction and analysis of samples under a microscope
Ding et al. Classification of chromosome karyotype based on faster-rcnn with the segmatation and enhancement preprocessing model
Amitha et al. Developement of computer aided system for detection and classification of mitosis using SVM
JP2004199391A (ja) 画像解析におけるしきい値決定方法とその装置、二値化装置並びに画像解析装置、学習機能付き情報処理方法と学習機能付き画像解析装置並びにそれらのための記録媒体
CN113158996A (zh) 一种基于扫描电子显微镜图像和人工智能的硅藻两步识别和分类方法
US8712140B2 (en) Method of analyzing cell structures and their components
CN111401119A (zh) 细胞核的分类
EP4036850A1 (en) Computer vision based monoclonal quality control
GB2414074A (en) Determining an optimal intensity threshold for use in image analysis of a stained biological specimen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18922017

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18922017

Country of ref document: EP

Kind code of ref document: A1