Disclosure of Invention
The technical problem addressed by the invention is as follows: in view of the problems in the prior art, the invention provides a cytoplasmic positive immunohistochemical intelligent identification method, system and medium that can intelligently identify cytoplasmic immunohistochemical positive and negative cells and automatically calculate and analyze an accurate immunohistochemical positive rate. While ensuring the accuracy of each analysis and calculation index, the method meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, and effectively assists doctors and researchers in analyzing cytoplasmic immunohistochemical indexes by relieving medical and research personnel of the tedious work of manual calculation and analysis.
In order to solve the technical problems, the invention adopts the technical scheme that:
a cytoplasmic positive immunohistochemical intelligent identification method comprises the following steps:
1) identifying nuclei from the HE microscopic panorama; registering the IHC microscopic panorama with the HE microscopic panorama, and then performing positive and negative tissue region segmentation;
2) mapping the nuclei to the positive and negative tissue regions, and classifying each nucleus as a tumor-positive nucleus, a tumor-negative nucleus or a non-tumor nucleus;
3) counting the total number of tumor-positive nuclei R_p and the total number of tumor-negative nuclei R_n, and calculating the cytoplasmic IHC tumor positive rate H_rate.
Optionally, the identifying nuclei from the HE microscopy panorama in step 1) comprises:
1.1A) extracting a local HE microscopic panorama of a target position area from the HE microscopic panorama;
1.2A) inputting the local HE microscopic panorama into a first deep convolutional neural network to obtain a nucleus segmentation mask map;
1.3A) carrying out nucleus segmentation post-processing on the nucleus segmentation mask map;
1.4A) performing circle fitting on the post-processed nucleus segmentation mask map to obtain the center position and radius of the minimum fitting circle, which represent the identified nucleus.
Optionally, the first deep convolutional neural network in step 1.2A) is a UNet deep convolutional neural network.
Optionally, the post-processing of the cell nucleus segmentation performed in step 1.3A) includes morphological erosion and boundary separation processing of the connected cell nuclei using edge detection and watershed algorithms.
Optionally, the step 1) of registering the IHC microscopic panorama and the HE microscopic panorama and then performing positive and negative tissue region segmentation includes:
1.1B) performing tissue-level coarse registration on the top-level thumbnail images of the IHC microscopic panorama and the HE microscopic panorama, both of which use a multi-resolution pyramid file storage format, and extracting coarse registration parameters; generating a new IHC digital microscopic panorama from the bottom-level full-resolution image of the IHC microscopic panorama based on the coarse registration parameters;
1.2B) extracting a local IHC digital microscopic panorama of the target position area from the new IHC digital microscopic panorama;
1.3B) carrying out cell-level fine registration on the local IHC digital microscopic panoramic image and the local HE microscopic panoramic image, and extracting fine registration parameters;
1.4B) adjusting the local IHC digital microscopic panoramic image based on the fine registration parameters to obtain a fine-adjusted local IHC digital microscopic panoramic image;
1.5B) inputting the fine-adjusted local IHC digital microscopic panorama into a second deep convolutional neural network to perform positive and negative tissue region segmentation on the registered IHC microscopic panorama, obtaining a positive tissue region segmentation mask map in which positive and negative tissue regions are distinguished in black and white.
Optionally, the second deep convolutional neural network in step 1.5B) is a UNet deep convolutional neural network.
Optionally, step 2) comprises: mapping the center position and radius of each identified nucleus onto the positive tissue region segmentation mask map, and judging all points within the fitting circle corresponding to each nucleus: if more than half of the points are located in the positive tissue region, the nucleus is judged to be a tumor-positive nucleus; if more than half of the points are located in the negative tissue region, it is judged to be a tumor-negative nucleus; otherwise it is a non-tumor nucleus.
Optionally, the functional expression for calculating the cytoplasmic IHC tumor positive rate H_rate is:
H_rate = R_p / (R_p + R_n),
where R_p is the total number of tumor-positive nuclei and R_n is the total number of tumor-negative nuclei.
In addition, the invention also provides a cytoplasmic positive immunohistochemical intelligent recognition system which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the cytoplasmic positive immunohistochemical intelligent recognition method.
Furthermore, the present invention also provides a computer-readable storage medium, in which a computer program is stored, the computer program being used for being executed by a computer device to implement the steps of the intelligent identification method for cytoplasmic positive immunohistochemistry.
Compared with the prior art, the invention has the following advantages: the invention uses the cells in the HE microscopic panorama of the HE-stained section as a reference to locate the cell positions in the cytoplasmic positive immunohistochemistry (IHC) stained section, can intelligently identify cytoplasmic immunohistochemical positive and negative cells, and automatically calculates and analyzes an accurate immunohistochemical positive rate. While ensuring the accuracy of each analysis and calculation index, the invention meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, and effectively assists doctors and researchers in analyzing cytoplasmic immunohistochemical indexes by relieving medical and research personnel of the tedious work of manual calculation and analysis.
Detailed Description
As shown in fig. 1, the intelligent identification method for cytoplasmic positive immunohistochemistry in this embodiment includes:
1) identifying nuclei from the HE microscopic panorama; registering the IHC microscopic panorama with the HE microscopic panorama, and then performing positive and negative tissue region segmentation;
2) mapping the nuclei to the positive and negative tissue regions, and classifying each nucleus as a tumor-positive nucleus, a tumor-negative nucleus or a non-tumor nucleus;
3) counting the total number of tumor-positive nuclei R_p and the total number of tumor-negative nuclei R_n, and calculating the cytoplasmic IHC tumor positive rate H_rate.
The IHC and HE microscopic panoramas in step 1) are images obtained for the same target location area. For example, an HE section and the corresponding cytoplasmic immunohistochemical section from serial sections are scanned with a digital section scanner to obtain an HE microscopic digital panorama and an IHC microscopic digital panorama, respectively.
Referring to fig. 2, the identification of cell nuclei from HE microscopic panorama in step 1) of the present embodiment includes:
1.1A) extracting a local HE microscopic panorama of a target position area from the HE microscopic panorama;
1.2A) inputting the local HE microscopic panorama into a first deep convolutional neural network to obtain a nucleus segmentation mask map;
the target position region R is defined by the doctor, and the range is adjusted as follows:
R={(x R ,y R )| x l ≤x R ≤x l +R width ,y t ≤y R ≤x l +R height },
wherein,x l andy t representing the upper left-hand coordinates of the target position region R,R width andR height respectively representing target position areasRWidth and height of (2); (x R ,y R ) Representing coordinate points within a rectangular area. Due to the target position area defined by the doctorRDifferent sizes, possibly very large sizes, and the whole target position area cannot be directly usedRAs an input to the second deep convolutional neural network. Therefore, the whole rectangular area needs to be further divided into a plurality of tile maps with the same size, and then the plurality of tile maps are sequentially input into the first deep convolutional neural network. Therefore, the temperature of the molten metal is controlled,the step 1.2A) of inputting the local HE microscopic panoramic image into the first deep convolutional neural network to obtain the cell nucleus segmentation mask image specifically comprises the following steps: dividing a local HE microscopic panorama into a plurality of tile maps with the same size, sequentially inputting the tile maps into a first deep convolutional neural network to obtain a cell nucleus division mask map, and splicing the cell nucleus division mask maps into a target position areaRThe nuclear segmentation mask map of (1).
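The tiling described above can be sketched as follows. This is a minimal numpy sketch; the 256-pixel tile size and the zero-padding of the right and bottom edges are assumptions for illustration, since the method only specifies tiles of equal size:

```python
import numpy as np

def split_into_tiles(region: np.ndarray, tile: int) -> list:
    """Split a doctor-defined rectangular region into equal-size tiles,
    zero-padding the right/bottom edges so every tile is tile x tile."""
    h, w = region.shape[:2]
    ph = (tile - h % tile) % tile  # padding needed on the bottom
    pw = (tile - w % tile) % tile  # padding needed on the right
    padded = np.pad(region, ((0, ph), (0, pw)) + ((0, 0),) * (region.ndim - 2))
    tiles = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            tiles.append(padded[y:y + tile, x:x + tile])
    return tiles

region = np.ones((600, 900), dtype=np.uint8)  # hypothetical local HE region
tiles = split_into_tiles(region, 256)
print(len(tiles), tiles[0].shape)  # 12 tiles of shape (256, 256)
```

After the network processes each tile, the per-tile masks are stitched back in the same row-major order to recover the mask for the whole region R.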
1.3A) carrying out nucleus segmentation post-processing on the nucleus segmentation mask map;
1.4A) performing circle fitting on the post-processed nucleus segmentation mask map to obtain the center position and radius of the minimum fitting circle, which represent the identified nucleus. In this embodiment, circle fitting is finally performed on each nucleus to obtain the center coordinate position (x_rc, y_rc) and radius r of the minimum fitting circle, which are used to express the nucleus.
The first deep convolutional neural network in step 1.2A) is a UNet deep convolutional neural network. For ease of distinction, the first deep convolutional neural network is named the HE nucleus UNet segmentation network in this embodiment, and its network structure is shown in fig. 3. The arrow labeled "2×2 MaxPooling" denotes a 2×2 maximum pooling layer; the arrow labeled "3×3 Conv2d + Batch Normalization + ReLU" denotes a 3×3 two-dimensional convolution followed by batch normalization and ReLU activation; the arrow labeled "2×2 ConvTranspose2d + Batch Normalization + ReLU" denotes a 2×2 two-dimensional transposed convolution followed by batch normalization and ReLU activation. The remaining boxes are convolutional layers, with the upper number giving the convolution output image size and the lower number the convolution kernel size. The HE nucleus UNet segmentation network is a UNet deep convolutional neural network trained in advance to establish a mapping relationship between a local HE microscopic panorama and a nucleus segmentation mask map. During training, a large number of pictures annotated by professional doctors with HE nucleus labels are used to produce a nucleus segmentation detection data set, yielding training, validation and test data sets; the HE nucleus UNet segmentation network is trained with the training and validation data sets, its segmentation performance is tested with the test data set, and repeated iterative training finally yields the optimized HE nucleus UNet segmentation network.
The segmentation result in the nucleus segmentation detection mask map obtained by the HE nucleus UNet segmentation network is not completely accurate; further processing is needed to reduce erroneous segmentations and to further separate nucleus boundaries that are connected together, i.e. nucleus segmentation post-processing. In this embodiment, the nucleus segmentation post-processing performed in step 1.3A) includes morphological erosion and boundary separation of connected nuclei using edge detection and a watershed algorithm. After morphological erosion, boundary separation is performed on the connected nuclei using edge detection and the watershed algorithm, yielding each individual nucleus; finally, circle fitting is performed on each nucleus to obtain the center position and radius of the minimum fitting circle, which are used to express the nucleus.
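The post-processing pipeline above can be sketched in outline. The sketch below is illustrative only: it substitutes a plain 3×3 binary erosion, 4-connected component labeling, and a centroid-plus-maximum-distance circle approximation for the edge-detection/watershed separation and true minimum-fitting-circle steps (in practice e.g. OpenCV's watershed and `cv2.minEnclosingCircle` would be used):

```python
import numpy as np
from collections import deque

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel survives only if its whole 3x3 neighborhood is set."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]].astype(bool)
    return out

def fit_circles(mask: np.ndarray) -> list:
    """Label 4-connected components and fit one circle per nucleus.
    The circle here is centroid + max distance, an approximation of the
    minimum fitting circle described in the method."""
    seen = np.zeros_like(mask, dtype=bool)
    circles = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                pts = np.array(comp, dtype=float)
                c = pts.mean(axis=0)
                r = float(np.sqrt(((pts - c) ** 2).sum(axis=1)).max())
                circles.append(((c[1], c[0]), r))  # ((x_rc, y_rc), radius)
    return circles

mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:8] = True      # one fake "nucleus"
mask[12:18, 12:18] = True  # another
circles = fit_circles(erode(mask))
print(len(circles))  # 2
```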
Referring to fig. 2, the step 1) of performing segmentation of positive and negative tissue regions after registering the IHC microscopic panorama and the HE microscopic panorama includes:
1.1B) performing tissue-level coarse registration on the top-level thumbnail images of the IHC microscopic panorama and the HE microscopic panorama, both of which use a multi-resolution pyramid file storage format, extracting coarse registration parameters, and generating a new IHC digital microscopic panorama from the bottom-level full-resolution image of the IHC microscopic panorama based on the coarse registration parameters.
The multi-resolution pyramid file storage format is shown in fig. 4; it contains a number of images of different resolutions arranged from small to large. As shown in fig. 5, the tissue-level coarse registration step comprises: first, the top-level thumbnail images (the images with the lowest resolution) of the IHC microscopic panorama and the HE microscopic panorama in the multi-resolution pyramid file storage format are taken and registered using the SIFT feature matching algorithm (other feature matching algorithms may be used as needed), and the coarse registration parameters (including rotation angle, similarity scale, translation amount, etc.) are extracted; then the bottom-level full-resolution image (the image with the highest resolution) of the IHC microscopic panorama is taken and adjusted according to the coarse registration parameters to obtain the registered bottom-level image, from which new images at the various resolutions arranged from small to large are generated, yielding a new IHC digital microscopic panorama. This coarse registration approach allows the IHC and HE microscopic panoramas to be registered quickly while reducing the amount of computation. The coarse registration parameters include the translation amount (P_x, P_y), the rotation angle (in radians) α and the scaling S, where P_x is the translation in the horizontal direction and P_y is the translation in the vertical direction.
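The multi-resolution pyramid layout can be illustrated with a small numpy sketch that derives the small-to-large level stack from the bottom-level image by repeated 2×2 average pooling. Real slide files would be read with a library such as OpenSlide rather than built this way, and the 4-level depth here is an arbitrary choice for illustration:

```python
import numpy as np

def build_pyramid(base: np.ndarray, levels: int) -> list:
    """Build a multi-resolution pyramid from the bottom-level (full-resolution)
    image by repeated 2x2 average pooling, mimicking the level layout of a
    pyramid-format slide file."""
    pyramid = [base]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
        img = img[:h, :w]
        # Average each 2x2 block to halve the resolution.
        pooled = img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        pyramid.append(pooled.astype(img.dtype))
    return pyramid  # pyramid[0] is the bottom-level image, pyramid[-1] the top thumbnail

# Example: a fake 512x512 RGB "panorama" reduced to a 4-level pyramid.
panorama = np.zeros((512, 512, 3), dtype=np.float64)
levels = build_pyramid(panorama, 4)
print([lv.shape[:2] for lv in levels])  # [(512, 512), (256, 256), (128, 128), (64, 64)]
```

Coarse registration operates on the cheapest level (here the 64×64 thumbnail), which is why it is fast; the extracted parameters are then applied to the bottom-level image.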
Using the obtained registration parameters, namely the translation amount (P_x, P_y), rotation angle α and scaling S, when the new IHC digital microscopic panorama is regenerated, the coordinate relationship between the newly generated IHC digital microscopic panorama and the original IHC is:
x_new = S×cosα×x_ori + S×sinα×y_ori + P_x,
y_new = -S×sinα×x_ori + S×cosα×y_ori + P_y,
F(x_new, y_new) = I(x_ori, y_ori),
where x_ori and y_ori are the pixel coordinates of the original IHC microscopic panorama, x_new and y_new are the pixel coordinates of the newly generated IHC microscopic panorama, I(x_ori, y_ori) is the pixel value of the original IHC microscopic panorama at pixel coordinates (x_ori, y_ori), and F(x_new, y_new) is the pixel value of the newly generated IHC microscopic panorama at pixel coordinates (x_new, y_new).
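The coarse-registration coordinate mapping above can be checked directly. The sketch below implements the similarity transform from the text with illustrative parameter values (not values from the patent):

```python
import numpy as np

def map_coords(x_ori, y_ori, S, alpha, Px, Py):
    """Apply the coarse-registration similarity transform from the text:
    x_new =  S*cos(a)*x + S*sin(a)*y + Px
    y_new = -S*sin(a)*x + S*cos(a)*y + Py"""
    x_new = S * np.cos(alpha) * x_ori + S * np.sin(alpha) * y_ori + Px
    y_new = -S * np.sin(alpha) * x_ori + S * np.cos(alpha) * y_ori + Py
    return x_new, y_new

# Sanity check: a 90-degree rotation with unit scale and zero translation
# sends the point (1, 0) to (0, -1) under this convention.
x, y = map_coords(1.0, 0.0, S=1.0, alpha=np.pi / 2, Px=0.0, Py=0.0)
print(round(x, 6), round(y, 6))  # 0.0 -1.0
```

The same form, with the smaller parameters (P1_x, P1_y), β and S1, is reused for the cell-level fine registration described in step 1.4B).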
1.2B) extracting a local IHC digital microscopic panorama of the target position area from the new IHC digital microscopic panorama.
1.3B) carrying out cell-level fine registration on the local IHC digital microscopic panoramic image and the local HE microscopic panoramic image, and extracting fine registration parameters.
In this embodiment, the SIFT feature matching algorithm is also used for cell-level fine registration; other feature matching algorithms may be used as needed. The coarse registration is the basis of the fine registration and is used to reliably and quickly extract the target position region for fine registration, achieving a two-stage coarse-to-fine registration that balances precision and efficiency. The fine registration parameters extracted during cell-level fine registration likewise include a translation amount, rotation angle, scaling, etc. Because this registration refines the initial registration, the fine adjustment is performed on top of the original coarse registration and the corresponding registration parameters are small; after fine adjustment, cell-level registration within the specific region is complete, and the nucleus positions detected on the HE image can be mapped to the corresponding region on the newly generated IHC image.
1.4B) adjusting the local IHC digital microscopic panorama based on the fine registration parameters to obtain a fine-adjusted local IHC digital microscopic panorama. The fine registration parameters extracted in this embodiment include the translation amount (P1_x, P1_y), rotation angle β and scaling S1. Because this registration refines the initial registration, the fine adjustment is performed on top of the original coarse registration and the corresponding registration parameters are small; after fine adjustment, cell-level registration within the specific region is complete, and the nucleus positions detected on the HE image can be mapped to the corresponding region on the newly generated IHC image. On the newly generated IHC image, the image of the corresponding target position region R is first extracted; the region is then divided into equal tile maps, which are input in sequence into the second deep convolutional neural network to obtain the network's output mask maps; the tile mask maps are then stitched into a mask map in which the IHC positive and negative tissue regions have been detected and segmented. The value of each pixel on this mask map is 0 or 255: a pixel value of 0 means the pixel is located in a negative tissue region, and a pixel value of 255 means it is located in a positive tissue region. The coordinate positions of the tissue region detection and segmentation result are then mapped according to the registration parameters; the functional expressions mapping them into coordinate positions corresponding one-to-one with the actual cell positions on the HE image are:
x_R_new = S1×cosβ×x_R + S1×sinβ×y_R + P1_x,
y_R_new = -S1×sinβ×x_R + S1×cosβ×y_R + P1_y,
F'_R(x_R_new, y_R_new) = F_R(x_R, y_R),
where x_R_new and y_R_new represent the pixel coordinates of the fine-adjusted target position region R', F_R(x_R, y_R) represents the pixel value at coordinates (x_R, y_R) on the mask map of positive and negative tissue region segmentation detection over the target position region R of the newly generated IHC image, and F'_R(x_R_new, y_R_new) represents the pixel value at coordinates (x_R_new, y_R_new) of the fine-adjusted target position region R'.
1.5B) inputting the fine-adjusted local IHC digital microscopic panorama into a second deep convolutional neural network to perform positive and negative tissue region segmentation on the registered IHC microscopic panorama, obtaining a positive tissue region segmentation mask map in which positive and negative tissue regions are distinguished in black and white.
In this embodiment, the second deep convolutional neural network in step 1.5B) is a UNet deep convolutional neural network. For ease of distinction, the second deep convolutional neural network is named the IHC positive region UNet segmentation network in this embodiment, and its network structure is shown in fig. 6. The arrow labeled "2×2 MaxPooling" denotes a 2×2 maximum pooling layer; the arrow labeled "3×3 Conv2d + Batch Normalization + ReLU" denotes a 3×3 two-dimensional convolution followed by batch normalization and ReLU activation; the arrow labeled "2×2 ConvTranspose2d + Batch Normalization + ReLU" denotes a 2×2 two-dimensional transposed convolution followed by batch normalization and ReLU activation. The remaining boxes are convolutional layers, with the upper number giving the convolution output image size and the lower number the convolution kernel size. The fine-adjusted local IHC digital microscopic panorama requires tissue-level segmentation detection of positive and negative regions, so a large number of annotated pictures containing IHC positive and negative tissue are used to produce a data set, yielding training, validation and test data sets; deep convolutional UNet network training is then performed with the training and validation data sets, the segmentation performance of the UNet network is tested with the test data set, and another optimized UNet network, called the IHC positive region UNet segmentation network, is finally obtained.
Fig. 7(a) shows the fine-adjusted local IHC digital microscopic panorama input to the IHC positive region UNet segmentation network, and fig. 7(b) shows the positive tissue region segmentation mask map output by the network; as can be seen from fig. 7, the positive tissue region segmentation mask map distinguishes positive and negative tissue regions in black and white.
Because the cells on the newly generated IHC image correspond one-to-one with the cells on the HE image through the initial registration, a more refined registration operation is performed on top of it over the specific doctor-framed region in which positive and negative cells are to be segmented and detected. In this embodiment, step 2) comprises: mapping the center position and radius of each identified nucleus onto the positive tissue region segmentation mask map, and judging all points within the fitting circle corresponding to each nucleus: if more than half of the points are located in the positive tissue region, the nucleus is judged to be a tumor-positive nucleus; if more than half of the points are located in the negative tissue region, it is judged to be a tumor-negative nucleus; otherwise it is a non-tumor nucleus.
When the center positions and radii of the identified nuclei are mapped to the positive tissue region segmentation mask map in step 2), all points within the fitting circle with center position (x_rc, y_rc) and radius r are judged; the set of all points within the fitting circle is expressed as {Rr: (x_Rr, y_Rr)}. The points on the region Rr correspond to points on the fine-adjusted target position region R', and each point is judged to be located in the positive region or the negative region, namely:
x_Rr_new = S1×cosβ×x_Rr + S1×sinβ×y_Rr + P1_x,
y_Rr_new = -S1×sinβ×x_Rr + S1×cosβ×y_Rr + P1_y.
If the gray value of the corresponding point F'_R(x_Rr_new, y_Rr_new) on the fine-adjusted target position region R' is 255, the point is located in the positive region; otherwise it is located in the negative region. If more than half of the points are located in the positive region, the nucleus is judged to be a tumor-positive nucleus; if more than half of the points are located in the negative region, it is judged to be a tumor-negative nucleus; otherwise it is a non-tumor nucleus.
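The majority-vote judgment in step 2) can be sketched as follows. For simplicity the sketch assumes an identity fine-registration mapping (β = 0, S1 = 1, zero translation), so the fitting circle is tested directly against the 0/255 mask:

```python
import numpy as np

def classify_nucleus(mask: np.ndarray, cx: float, cy: float, r: float) -> str:
    """Majority vote over all pixels inside a nucleus's fitting circle on the
    positive-tissue mask (255 = positive, 0 = negative), as in step 2)."""
    ys, xs = np.ogrid[:mask.shape[0], :mask.shape[1]]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    pts = mask[inside]
    if pts.size == 0:
        return "non-tumor"
    pos = np.count_nonzero(pts == 255)
    neg = np.count_nonzero(pts == 0)
    if pos > pts.size / 2:
        return "tumor-positive"
    if neg > pts.size / 2:
        return "tumor-negative"
    return "non-tumor"

mask = np.zeros((40, 40), dtype=np.uint8)
mask[:, 20:] = 255                        # right half is positive tissue
print(classify_nucleus(mask, 30, 10, 5))  # circle lies fully in the positive half
print(classify_nucleus(mask, 5, 30, 4))   # circle lies fully in the negative half
```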
In this embodiment, the functional expression for calculating the cytoplasmic IHC tumor positive rate H_rate is:
H_rate = R_p / (R_p + R_n),
where R_p is the total number of tumor-positive nuclei and R_n is the total number of tumor-negative nuclei.
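The positive-rate formula is a one-line computation; note that non-tumor nuclei are excluded from the denominator. The guard for an empty region below is an added assumption, not part of the stated formula:

```python
def positive_rate(Rp: int, Rn: int) -> float:
    """Cytoplasmic IHC tumor positive rate: H_rate = Rp / (Rp + Rn)."""
    if Rp + Rn == 0:
        return 0.0  # guard for a region with no tumor nuclei (an assumption)
    return Rp / (Rp + Rn)

print(positive_rate(120, 80))  # 0.6
```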
In summary, the method of this embodiment uses deep convolutional neural network nucleus segmentation detection, deep convolutional neural network positive region segmentation detection, two-pass SIFT fast feature matching registration, image morphology and image segmentation, image circle fitting and other techniques to place the cell positions of the HE microscopic panorama and the cytoplasmic IHC microscopic panorama in one-to-one correspondence, map the cytoplasmic IHC nucleus positions onto the HE image for statistical calculation, and fully automatically and quickly complete the calculation and analysis of the cytoplasmic IHC tumor cell positive rate. This solves the inaccuracy caused by doctors' complex observation and manual calculation of the cytoplasmic IHC tumor cell positive rate, and effectively assists doctors with positive rate calculation and analysis in this scenario. The method uses two deep convolutional segmentation networks, a nucleus segmentation network for the HE microscopic panorama and a positive region segmentation network for the IHC image, combined with a two-pass SIFT registration algorithm, to map cytoplasmic IHC positive and negative nuclei onto the HE image: the cell quantities that are blurred and uncountable on the IHC image are mapped onto the HE image, where they are distinguishable and countable, so that cytoplasmic IHC positive and negative cells are identified and the tumor positive rate is calculated. After HE nucleus segmentation detection, in order to further reduce interference from impurities and separate connected nuclei, the method uses edge detection, the watershed segmentation algorithm and final circle fitting to complete accurate segmentation of the HE nuclei.
This method uses the cells in the HE microscopic panorama of the HE-stained section as a reference to locate the cell positions in the cytoplasmic positive immunohistochemistry (IHC) stained section. While ensuring accurate results for each analysis and calculation index, the method meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, and efficiently assists doctors and researchers in analyzing cytoplasmic immunohistochemical indexes by relieving medical and research personnel of the tedious work of manual calculation and analysis.
In addition, the present embodiment also provides an intelligent recognition system for cytoplasmic positive immunohistochemistry, which includes a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the intelligent recognition method for cytoplasmic positive immunohistochemistry.
In addition, the present embodiment also provides a computer-readable storage medium, in which a computer program is stored, the computer program being used for being executed by a computer device to implement the steps of the aforementioned intelligent identification method for cytoplasmic positive immunohistochemistry.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. 
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.