CN112270662A - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112270662A
Authority
CN
China
Prior art keywords
image
tissue
reflected light
light image
transmitted light
Legal status
Pending
Application number
CN202011147699.6A
Other languages
Chinese (zh)
Inventor
廖俊
姚建华
刘月平
张勐
Current Assignee
Tencent Technology Shenzhen Co Ltd
Fourth Hospital of Hebei Medical University Hebei Cancer Hospital
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011147699.6A
Publication of CN112270662A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The application provides an image processing method, apparatus, device and storage medium. The method includes: obtaining at least one reflected light image of a tissue block, where a reflected light image is an image formed by a camera from light of a first light source reflected by the tissue block, the first light source and the camera being located on the same side of the tissue block; obtaining at least one transmitted light image of the tissue block, where a transmitted light image is an image formed by the camera from light of a second light source transmitted through the tissue block, the second light source and the camera being located on different sides of the tissue block; and determining a lesion region in the tissue block according to learned light absorption characteristics of diseased tissue and the light absorption characteristics at different positions of the tissue block represented by the at least one reflected light image and the at least one transmitted light image. With this scheme, the lesion region in a tissue block can be identified more accurately.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method, apparatus, device, and storage medium.
Background
Pathological examination has been widely applied in clinical work and scientific research. For example, post-operative tissue analysis of tumor patients helps in the accurate diagnosis of the tumor.
Here, pathological examination refers to excising tissue from a site on a human or animal body, dividing the excised tissue into a number of blocks of suitable volume, having a doctor select the blocks that contain lesions to prepare pathological sections, and observing the morphology of those sections under a microscope to examine the lesion. To obtain accurate lesion information, the step of selecting the tissue blocks that contain lesions for sectioning is particularly important.
However, doctors mainly distinguish lesion regions in tissue blocks by visual inspection and palpation, which places high demands on the doctor's experience and is prone to misidentification. Moreover, if a lesion is hidden inside a tissue block, it cannot be found by visual inspection alone; identification then relies on how the block feels to the touch, which is highly subjective and not very accurate.
Disclosure of Invention
In view of the above, the present application provides an image processing method, apparatus, device, and storage medium to identify lesion regions in tissue blocks more accurately.
To this end, the present application provides the following technical solutions:
in one aspect, the present application provides an image processing method, including:
obtaining at least one reflected light image of a tissue block, where the reflected light image is an image formed by a camera from light of a first light source reflected by the tissue block, and the first light source and the camera are located on the same side of the tissue block;
obtaining at least one transmitted light image of the tissue block, where the transmitted light image is an image formed by the camera from light of a second light source transmitted through the tissue block, and the second light source and the camera are located on different sides of the tissue block;
determining a lesion region in the tissue block according to learned light absorption characteristics of diseased tissue and the light absorption characteristics at different positions of the tissue block represented by the at least one reflected light image and the at least one transmitted light image.
In one possible implementation, determining the lesion region in the tissue block according to the learned light absorption characteristics of diseased tissue and the light absorption characteristics at different positions of the block represented by the at least one reflected light image and the at least one transmitted light image includes:
determining a lesion region in the tissue block with a lesion recognition model, based on the at least one reflected light image and the at least one transmitted light image of the block;
where the lesion recognition model is obtained by deep learning training on at least one reflected light image and at least one transmitted light image of each of multiple tissue block samples annotated with actual lesion regions, so that the lesion recognition model learns the light absorption characteristics of diseased tissue.
In yet another possible implementation, obtaining at least one reflected light image of the tissue block includes:
obtaining at least one first reflected light image and at least one second reflected light image of the tissue block, where the first reflected light image is an image formed by a first camera from light of the first light source reflected by the tissue block, the second reflected light image is an image formed by a second camera from that same reflected light, the sensitivity range of the first camera covers the visible spectrum, the sensitivity range of the second camera covers the short-wave infrared band, and the first camera, the second camera, and the first light source are all located on the same side of the tissue block;
and obtaining at least one transmitted light image of the tissue block includes:
obtaining at least one first transmitted light image and at least one second transmitted light image of the tissue block, where the first transmitted light image is an image formed by the first camera from light of the second light source transmitted through the tissue block, the second transmitted light image is an image formed by the second camera from that same transmitted light, and the first and second cameras are both located on a different side of the tissue block from the second light source.
In another aspect, the present application also provides an image processing apparatus, including:
the device comprises a first image obtaining unit, a second image obtaining unit and a control unit, wherein the first image obtaining unit is used for obtaining at least one reflected light image of a tissue cut block, the reflected light image comprises an image of the tissue cut block after the reflected light of a first light source is converted by an image pickup device, and the first light source and the image pickup device are positioned on the same side of the tissue cut block;
a second image obtaining unit, configured to obtain at least one transmitted light image of the tissue cut, where the transmitted light image includes an image of the tissue cut after being converted by the imaging device through transmitted light transmitted by a second light source, and the second light source and the imaging device are located on different sides of the tissue cut;
and the focus determining unit is used for determining a focus area in the tissue section according to the learned light absorption characteristics of the pathological tissue and the light absorption characteristics at different positions in the tissue section, which are characterized by the at least one reflected light image and the at least one transmitted light image of the tissue section.
In yet another aspect, the present application further provides a computer device comprising a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, which, when executed, is specifically configured to implement the image processing method described in any one of the above.
In yet another aspect, the present application further provides a storage medium storing a program that, when executed, implements the image processing method described in any one of the above.
As the above description shows, it follows from the way the two kinds of image are formed that the reflected light image of a tissue block reflects the light absorption characteristics of the block's surface, while the transmitted light image reflects the light absorption characteristics of its interior. Since different types of tissue, such as diseased tissue and normal tissue, absorb light differently, combining the learned light absorption characteristics of diseased tissue with the light absorption characteristics at the surface and at different positions inside the block allows the region containing a lesion to be identified from a comprehensive analysis. This reduces the chance that a lesion hidden inside the block goes unrecognized, and thus improves the accuracy of identifying lesion regions in tissue blocks.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture to which the present application applies;
FIG. 2 is a schematic flowchart of one embodiment of the image processing method provided by the present application;
FIG. 3 is a schematic flowchart of another embodiment of the image processing method provided by the present application;
FIG. 4 is a schematic diagram of an implementation principle of the image processing method of the present application;
FIG. 5 is a schematic diagram of the principle by which the resolution of the second reflected light image is increased in the image processing method of the present application;
FIG. 6 is a schematic flowchart of training the lesion recognition model in the present application;
FIG. 7 is a schematic diagram of the component structure of an image processing apparatus provided by the present application;
FIG. 8 is a schematic diagram of the component structure of a computer device provided by the present application.
Detailed Description
The scheme of this application is suitable for analyzing images of a tissue block from a human or animal body on a personal computer, a server, or a cloud platform, in order to determine the lesion region in the tissue block and to improve the accuracy with which that region is determined.
To improve the accuracy of lesion region determination, this application obtains images of the tissue block separately under reflected and under transmitted illumination, and analyzes the images from both cases.
For ease of understanding, the system architecture to which the solution of the present application is applicable is described below.
Fig. 1 is a schematic diagram illustrating a component structure of a system architecture to which the solution of the present application is applicable.
The system architecture of FIG. 1 includes an image acquisition system 10 and an image analysis system 20.
The image acquisition system 10 includes: a camera 11, a first light source 12, a second light source 13, and a glass plate 14 for carrying the tissue block.
As can be seen from FIG. 1, the first light source 12 and the camera 11 are located on one side of the glass plate 14, while the second light source 13 is located on the other side. Accordingly, when a tissue block is placed on the glass plate, the first light source 12 and the camera 11 are on one side of the tissue block and the second light source 13 on the other.
The tissue block may be a piece of tissue excised from a human or animal body. Although it is referred to here as a block, it may in fact be quite thin, in which case it may also be called a tissue slice.
There may be one or more cameras 11, and a camera may be an ordinary camera, a smart camera, or the like; for example, it may be any camera capable of acquiring hyperspectral images.
When there are multiple cameras, they may all be identical. Alternatively, they may include at least two cameras with different sensitivity ranges. For example, they may include at least one camera whose sensitivity range covers the visible spectrum and at least one whose sensitivity range covers the short-wave infrared band. A camera covering the visible spectrum may be a hyperspectral camera with a sensitivity range of 400 nm to 1000 nm; a camera covering the short-wave infrared band may be a hyperspectral camera with a sensitivity range of 900 nm to 1700 nm.
The first and second light sources may be produced by the same type of light-emitting device, but the devices providing them must be placed on different sides of the tissue block.
For example, the first and second light sources may be halogen lamps, i.e., light produced by halogen lamps; illuminating the tissue block with a halogen lamp causes no damage to the block and involves no ionizing radiation. The first and second light sources may equally be incandescent lamps or LED light sources.
In one possible implementation, the light from the first light source is not perpendicular to the surface of the tissue block (i.e., to the plane of the glass plate in FIG. 1): it strikes the surface at an angle of less than 90 degrees, so that light from the first light source falls on the tissue block and reflected light is produced at its surface. For example, the first light source may illuminate the surface of the tissue block at an angle of 45 degrees.
In one possible implementation, the light from the second light source is perpendicular to the surface of the tissue block and shines onto it, so that the light is transmitted through the interior of the block; for example, in FIG. 1 the second light source is directly below the tissue block.
In this application, the glass plate is a highly transparent material that can also serve to diffuse light.
For example, in one implementation it may be translucent frosted glass. Light passing through frosted glass illuminates the tissue block resting on it more uniformly, which helps the second light source provide even illumination.
Of course, the glass plate may be replaced by other high-transmittance diffusing materials, such as a frosted acrylic plate, a polycarbonate (PC) plate, or even white paper.
It will be appreciated that the glass plate can also be dispensed with if the tissue block can be fixed in some other way such that the first and second light sources are on opposite sides of it.
In the embodiments of this application, to obtain texture information of the tissue block's surface, the block can be illuminated with the first light source and a reflected light image captured by a camera. The reflected light image of the tissue block is the image the camera forms from the light of the first light source reflected by the block.
Since light from the first light source is reflected at the surface of the tissue block once the block is illuminated, the reflected light image is in effect an image of the block captured while the first light source illuminates it and its light is being reflected from the block.
When there are multiple first light sources, they can illuminate the tissue block simultaneously.
It will be appreciated that, to avoid interference, the second light source can be turned off so that it emits no light while the first light source is illuminating the tissue block.
Because the reflected light image captures the tissue block while light from the first light source is being reflected from its surface, it reflects how strongly light is absorbed at different positions on that surface. Different types of tissue absorb light differently, so the reflected light image conveys the texture information of the block's surface.
Conversely, to obtain information about the interior of the tissue block, the block can be illuminated with the second light source and a transmitted light image captured by the camera. The transmitted light image is the image the camera forms from light of the second light source transmitted through the block.
Since the second light source is below the tissue block, which is typically 2 mm to 8 mm thick, part of the light is transmitted through the block when it is illuminated. Specifically, light from the second light source enters the block, is refracted inside it, and emerges on the other side; that is, it is transmitted. Thus, when the block is illuminated by the second light source and that light passes through it, the image captured by the camera is a transmitted light image of the block.
Likewise, to avoid interference, the first light source can be turned off so that it emits no light while the second light source is illuminating the tissue block.
Different types of tissue absorb light differently, and the transmitted light image reflects how strongly light is absorbed at different positions inside the block, so it conveys information about the block's interior. For example, the bright, translucent parts of a transmitted light image are typically fat in the tissue block.
Note that since there may be one or more cameras, at least one reflected light image of the tissue block can be obtained with at least one camera; correspondingly, at least one transmitted light image can be obtained. A capture sequence is sketched below.
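The following is a minimal sketch of such a capture sequence, interleaving the two light sources as described above so that only one is on at a time. The Light and Camera objects and their on()/off()/grab() methods are hypothetical stand-ins for real device drivers, not an API from the patent.

    # Illustrative capture sequence: one light source at a time, per the
    # interference-avoidance note above. All device methods are hypothetical.
    def capture_tissue_block(cameras, first_light, second_light):
        second_light.off()
        first_light.on()                               # same side as the cameras
        reflected = [cam.grab() for cam in cameras]    # reflected light images
        first_light.off()
        second_light.on()                              # opposite side of the block
        transmitted = [cam.grab() for cam in cameras]  # transmitted light images
        second_light.off()
        return reflected, transmitted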
After at least one reflected light image and at least one transmitted light image of the tissue block have been obtained, they can be analyzed by the image analysis system 20 to determine the lesion region in the tissue block.
The image analysis system 20 includes at least one computer device 21; for example, it may be a standalone personal computer or server, or a cluster or cloud platform formed from multiple servers.
In one implementation of the embodiments of this application, in order to determine the lesion region of the tissue block, the image analysis system performs image analysis and model training based on artificial intelligence techniques.
Artificial Intelligence (AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by them, to simulate, extend, and expand human intelligence: to perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, AI is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in ways similar to human intelligence. AI studies the design principles and implementation methods of intelligent machines, so that machines can perceive, reason, and make decisions.
AI is a broad discipline spanning both hardware-level and software-level technologies. Its basic technologies include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Its software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the embodiments of this application, at least the training of models is based on machine learning. Machine Learning (ML) is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how computers can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to keep improving performance. Machine learning is the core of AI and the fundamental route to making computers intelligent; it is applied across all fields of AI. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
The transmitted light images and reflected light images of the tissue block obtained by the image acquisition system 10 may be input into a computer device of the image analysis system by a user, or transmitted to that computer device from a user terminal.
Of course, if the cameras of the image acquisition system have communication capability, they may also transmit the captured transmitted and reflected light images directly to a computer device of the image analysis system.
With the above in mind, the image processing method of this application is described below with reference to flowcharts.
The image processing method of this application may be executed by a computer device, such as a personal computer, a server, or a cloud platform in the image analysis system described above.
FIG. 2 shows a schematic flowchart of one embodiment of the image processing method of this application. The method of this embodiment may include:
s201, at least one reflected light image of the tissue section is obtained.
As described above, the reflected light image includes an image of the tissue section obtained by converting the reflected light of the first light source by the imaging device. For example, when there are a plurality of cameras, a plurality of reflected light images can be obtained.
Wherein the first light source and the camera device are located on the same side of the tissue section. If there are multiple imaging devices, then these multiple imaging devices are located on the same side of the tissue section as the first light source.
Reference may be made, inter alia, to the description of the previous embodiments with regard to the reflected light image, the first light source and the image pickup apparatus.
S202: obtain at least one transmitted light image of the tissue block.
A transmitted light image is the image the camera forms from light of the second light source transmitted through the tissue block; the second light source and the camera are located on different sides of the block.
For the transmitted light image and the second light source, refer likewise to the description of the previous embodiments; details are not repeated here.
S203: determine a lesion region in the tissue block according to the learned light absorption characteristics of diseased tissue and the light absorption characteristics at different positions of the block represented by the at least one reflected light image and the at least one transmitted light image.
The light absorption characteristics of a tissue (diseased or normal tissue of a human or animal body) are properties such as how strongly the tissue absorbs light in different wavelength bands.
Diseased tissue is human or animal tissue in which a lesion has occurred, i.e., tissue containing a lesion.
In this application, the light absorption characteristics of diseased tissue may be obtained from the light absorption characteristics of many diseased tissue blocks. For example, they may be extracted by feature analysis of the transmitted and reflected light images of many diseased tissue blocks. As another example, they may be learned from the transmitted and reflected light images of many diseased tissue blocks using machine learning or deep learning techniques.
For different diseases, the light absorption characteristics of diseased tissue can be learned separately per disease, or characteristics applicable across diseases can be learned; this is not limited here.
As the earlier discussion of the two image types shows, the reflected light image reflects the light absorption characteristics at different positions on the surface of the tissue block, which amounts to the texture information of that surface; the transmitted light image reflects the light absorption characteristics at different positions inside the block, i.e., the tissue structure at those positions.
It is understood that different types of normal, lesion-free tissue absorb light differently, and tissue containing a lesion absorbs light in ways that differ markedly from normal tissue. On this basis, in one possible implementation, the learned light absorption characteristics of diseased tissue can be combined with the light absorption characteristics at the various positions of the block to pick out, within the tissue block, a lesion region whose light absorption is abnormal (e.g., consistent with that of diseased tissue).
In one possible implementation, this application may pre-train a lesion recognition model for recognizing lesion regions in tissue blocks. On this basis, a lesion region in the block can be determined by the lesion recognition model from at least one reflected light image and at least one transmitted light image of the block.
The lesion recognition model is obtained by deep learning training on at least one reflected light image and at least one transmitted light image of each of multiple tissue block samples annotated with actual lesion regions.
The lesion recognition model may be built on network models such as a convolutional neural network (CNN), a generative adversarial network (GAN), or U-Net; this is not limited here. Nor does this application limit the specific deep learning training procedure; one possible procedure is introduced later as an example, and a model sketch follows below.
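For concreteness, here is a minimal PyTorch sketch of a U-Net-style segmentation network of the kind named above. The depth, channel counts, and the choice to stack the reflected and transmitted images along the channel axis are illustrative assumptions; the patent does not fix an architecture.

    # A minimal U-Net-style lesion recognition network (sketch).
    # Input: reflected + transmitted images stacked as channels, (B, C, H, W)
    # with H and W divisible by 4. Output: per-pixel lesion probability.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class LesionUNet(nn.Module):
        def __init__(self, in_channels):
            super().__init__()
            self.enc1 = conv_block(in_channels, 32)
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)
            self.head = nn.Conv2d(32, 1, 1)  # per-pixel lesion logit

        def forward(self, x):
            e1 = self.enc1(x)                     # full resolution
            e2 = self.enc2(self.pool(e1))         # 1/2 resolution
            b = self.bottleneck(self.pool(e2))    # 1/4 resolution
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return torch.sigmoid(self.head(d1))   # lesion probability map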
It can be understood that, through such training, the lesion recognition model learns the light absorption characteristics of diseased tissue.
Determining the lesion region of the tissue block may mean outputting the location of the lesion region within the block. It may also mean marking the lesion region in a reflected or transmitted light image of the block, for example with a heatmap, a specified fill color, or a drawn contour line, so that the user can check the lesion's position visually. It is equally possible to generate a two-dimensional or three-dimensional image of the block from the transmitted and reflected light images and mark the lesion region in that image. These marking styles are sketched below.
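As an illustration only, the following sketch renders the three marking styles just mentioned with matplotlib. The function and variable names are hypothetical; reference_img is assumed to be a grayscale HxW array in [0, 1] derived from one input image, and prob_map an HxW lesion probability map from the model.

    # Illustrative rendering of the three marking styles: heatmap,
    # color fill, and contour line (all assumptions, not the patent's code).
    import numpy as np
    import matplotlib.pyplot as plt

    def mark_lesion(reference_img, prob_map, threshold=0.5):
        mask = prob_map > threshold
        fig, axes = plt.subplots(1, 3, figsize=(12, 4))
        # 1) heatmap overlay of the lesion probability
        axes[0].imshow(reference_img, cmap="gray")
        axes[0].imshow(prob_map, cmap="jet", alpha=0.4)
        axes[0].set_title("heatmap")
        # 2) fill the predicted lesion region with a specified color
        filled = np.stack([reference_img] * 3, axis=-1)
        filled[mask] = [1.0, 0.0, 0.0]  # red fill
        axes[1].imshow(filled)
        axes[1].set_title("color fill")
        # 3) draw the contour line of the predicted lesion region
        axes[2].imshow(reference_img, cmap="gray")
        axes[2].contour(mask.astype(float), levels=[0.5], colors="red")
        axes[2].set_title("contour")
        plt.show()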
As the above description shows, it follows from the way the two kinds of image are formed that the reflected light image of a tissue block reflects the light absorption characteristics of the block's surface, while the transmitted light image reflects the light absorption characteristics of its interior. Since different types of tissue, such as diseased tissue and normal tissue, absorb light differently, combining the learned light absorption characteristics of diseased tissue with the light absorption characteristics at the surface and at different positions inside the block allows the region containing a lesion to be identified from a comprehensive analysis. This reduces the chance that a lesion hidden inside the block goes unrecognized, and thus improves the accuracy of identifying lesion regions in tissue blocks.
It is understood that some tissues of the human body or of living organisms absorb visible light almost identically, while showing clearer differences in light intensity or color in the short-wave infrared band.
On this basis, so that the reflected and transmitted light images of the tissue block reflect its light absorption characteristics more directly, this application may provide at least one first camera whose sensitivity range covers the visible spectrum and at least one second camera whose sensitivity range covers the short-wave infrared band.
For example, the first camera is a hyperspectral camera with a sensitivity range of 400 nm to 1000 nm, and the second camera a hyperspectral camera with a sensitivity range of 900 nm to 1700 nm.
The first and second cameras are on the same side of the tissue block as the first light source, and both are on a different side of the block from the second light source.
In this case, this application obtains at least one first reflected light image, at least one second reflected light image, at least one first transmitted light image, and at least one second transmitted light image of the tissue block.
Specifically, while the tissue block is illuminated by the first light source (with the second light source emitting no light), images of the block are captured by the at least one first camera and the at least one second camera.
Since reflected light is present at the block's surface while the first light source illuminates it, both cameras capture images of the first light source's light reflected by the block, but in different wavelength bands.
For ease of distinction, the reflected light image acquired by the first camera is called the first reflected light image, and the one acquired by the second camera the second reflected light image. The first reflected light image is thus an image of the first light source's light reflected by the block in the visible range (i.e., as converted inside the first camera); the second reflected light image is an image of that reflected light in the short-wave infrared range (i.e., as converted inside the second camera).
As shown in FIG. 1, suppose one of the two cameras is a 400 nm to 1000 nm hyperspectral camera and the other a 900 nm to 1700 nm hyperspectral camera. After at least one first light source 12 is turned on, the 400 nm to 1000 nm camera collects a hyperspectral image of the block's reflected light within the 400 nm to 1000 nm band, yielding the first reflected light image; meanwhile, the 900 nm to 1700 nm camera collects a hyperspectral image of the reflected light within the 900 nm to 1700 nm band, yielding the second reflected light image.
To give the 900 nm to 1700 nm camera a better imaging result, an infrared long-pass filter can be mounted in front of it.
Similarly, while the second light source illuminates the tissue block (with the first light source emitting no light), the first camera can acquire the first transmitted light image of the block and the second camera the second transmitted light image. The first transmitted light image is thus an image, in the visible band, of the second light source's light transmitted through the block; the second transmitted light image is an image of that transmitted light in the short-wave infrared band.
The first and second reflected light images reflect how the block's surface reflects light in the visible and short-wave infrared bands respectively, so together they capture the light absorption of the surface more completely and thereby convey its texture information more fully.
Correspondingly, the first and second transmitted light images reflect the light absorption inside the block when visible and short-wave infrared light are transmitted through it, so together they reflect the internal tissue structure more completely.
On this basis, the lesion region of the tissue block can be determined more accurately from its first reflected, second reflected, first transmitted, and second transmitted light images.
To acquire the above hyperspectral images, first and second cameras with built-in grating push-broom mechanisms may be used. If the cameras have no built-in push-broom mechanism, a liquid crystal tunable filter can be placed in front of each camera, or an external push-broom stage can be fitted, photographing while the stage carrying the glass plate with the tissue block is moved.
The camera lens may be a fixed-focus lens; in that case, to ensure the whole block is captured, a suitable working distance can be chosen using the travel of an optical lifting mount. Alternatively, a camera with a zoom lens may be selected.
Note that the above examples obtain the reflected and transmitted light images with a first camera covering the visible range, such as a 400 nm to 1000 nm hyperspectral camera, and a second camera covering the short-wave infrared band, such as a 900 nm to 1700 nm hyperspectral camera.
However, if one camera's sensitivity range covers both the visible range and the short-wave infrared range, this application may instead be configured with only one or more such cameras. When the tissue block is illuminated by the first or the second light source, that camera captures the reflected or transmitted light image; the reflected light image then covers the block's reflection of the first light source in both the visible and the short-wave infrared range, and the transmitted light image covers the second light source's transmitted light in both ranges.
In this application, to further improve the accuracy of lesion region determination, machine learning can also be applied to lesion identification once the reflected and transmitted light images of the tissue block have been obtained. As noted above, accuracy can be further improved by combining the images with a lesion recognition model trained by machine learning.
For ease of understanding, the image processing method of this application is described below taking as an example the use of a trained lesion recognition model to determine the lesion region of a tissue block.
FIG. 3 shows a schematic flowchart of another embodiment of the image processing method of this application. The method of this embodiment runs on a computer device in the image analysis system and may include:
S301: obtain at least one first reflected light image and at least one second reflected light image of the tissue block.
S302: obtain at least one first transmitted light image and at least one second transmitted light image of the tissue block.
For example, in one option the first reflected light image is a first hyperspectral image acquired by the 400 nm to 1000 nm hyperspectral camera while the first light source illuminates the tissue block and its light is reflected from the block's surface. The second reflected light image is a second hyperspectral image acquired under the same conditions by the 900 nm to 1700 nm hyperspectral camera.
Similarly, the first transmitted light image is a third hyperspectral image acquired by the 400 nm to 1000 nm camera while the second light source illuminates the block perpendicularly and its light is transmitted through the block. The second transmitted light image is a fourth hyperspectral image acquired under the same conditions by the 900 nm to 1700 nm camera.
The 400 nm to 1000 nm and 900 nm to 1700 nm hyperspectral cameras are on the same side of the tissue block as the first light source and on a different side of the block from the second light source.
For how the above reflected and transmitted light images are obtained, refer to the description of the previous embodiments; details are not repeated here.
Note that this embodiment is described with two reflected light images and two transmitted light images of the tissue block as an example; it applies equally if only one reflected light image and one transmitted light image are obtained.
S303: input the first reflected light image, the second reflected light image, the first transmitted light image, and the second transmitted light image of the tissue block into the lesion recognition model to obtain a lesion prediction image of the block output by the model.
The lesion prediction image is an image of the tissue block in which the predicted lesion region is marked.
In the embodiments of this application, so that the position of the lesion region in the tissue block can be seen at a glance, the trained lesion recognition model is used to output an image of the block with the lesion region marked, i.e., the lesion prediction image.
The lesion prediction image output by the model may be a reference image selected from the at least one reflected light image (e.g., the first and second reflected light images) and the at least one transmitted light image (e.g., the first and second transmitted light images), with the predicted lesion region marked in that reference image.
The model may also synthesize an image of the tissue block from the input reflected and transmitted light images and mark the lesion region in the synthesized image to obtain the lesion prediction image. Other ways of producing the prediction image are of course possible; this is not limited here.
The lesion region can be marked in the prediction image in various ways: for example, the contour line of the lesion region in the block can be drawn in the prediction image, the lesion region can be filled with a specific color, or the lesion region can be marked with a heatmap.
For example, referring to FIG. 4, the images input into the lesion recognition model of this application include: a 400 nm to 1000 nm reflected light image (first reflected light image) 401 of the tissue block acquired by the 400 nm to 1000 nm hyperspectral camera, a 900 nm to 1700 nm reflected light image (second reflected light image) 402 acquired by the 900 nm to 1700 nm hyperspectral camera, a 400 nm to 1000 nm transmitted light image (first transmitted light image) 403 acquired by the 400 nm to 1000 nm camera, and a 900 nm to 1700 nm transmitted light image (second transmitted light image) 404 acquired by the 900 nm to 1700 nm camera.
After the four hyperspectral images are input into the lesion recognition model 405 obtained by machine learning, the model outputs a lesion prediction image 406 of the tissue block, in which the block's lesion region 407 is marked. This inference step is sketched below.
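A hedged sketch of the inference step of FIG. 4: the four hyperspectral cubes are stacked along the band axis and passed through a trained model such as the LesionUNet above. It assumes the cubes have already been resampled to a common spatial grid and that the model's in_channels equals the total band count; both are assumptions, not details from the patent.

    # Inference sketch for step S303 (illustrative only).
    import numpy as np
    import torch

    def predict_lesion(model, refl_vis, refl_swir, trans_vis, trans_swir):
        # each input: numpy array of shape (H, W, bands), same H and W
        cube = np.concatenate([refl_vis, refl_swir, trans_vis, trans_swir], axis=-1)
        x = torch.from_numpy(cube).permute(2, 0, 1).unsqueeze(0).float()  # (1, C, H, W)
        model.eval()
        with torch.no_grad():
            prob = model(x)[0, 0].numpy()  # (H, W) lesion probability map
        return prob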
In one possible implementation, considering that images captured by some current cameras whose sensitivity range is the short-wave infrared band may have low resolution, the resolution of the second reflected light image can be raised before step S303: for each second reflected light image of the tissue block, a first image synthesis model converts it, based on that second reflected light image and the block's first reflected light image, into a synthesized second reflected light image. Accordingly, in step S303 the first reflected light image and the synthesized second reflected light image are input into the lesion recognition model.
The synthesized second reflected light image has the same content as the second reflected light image before synthesis, but a higher resolution. The first image synthesis model thus raises the resolution at which the block's reflection of the first light source in the short-wave infrared band is imaged, and thereby improves the accuracy with which the lesion recognition model identifies the lesion region.
The first image synthesis model is trained on the first and second reflected light images of multiple tissue block samples, supervised by the high-resolution reflected light images corresponding to those samples.
A high-resolution reflected light image is a reflected light image of the tissue block sample in the short-wave infrared band whose resolution is higher than that of the sample's second reflected light image. In other words, it represents the sample's reflection of the first light source, imaged in the short-wave infrared band, while the first light source illuminates the sample.
For example, the high-resolution reflected light image of a tissue block sample may be captured, while the first light source illuminates the sample, by a camera whose sensitivity range lies in the short-wave infrared band and whose pixel count exceeds a set value.
As another example, the high-resolution reflected light image of a tissue block sample can be stitched together from reflected light images of multiple sub-regions of the sample. The sample is divided into several block regions, and the reflected light image of each region is captured by the short-wave infrared camera while the first light source illuminates the sample.
It is understood that, for the same object, narrowing the camera's field of view raises the resolution of that object in the captured image; an image stitched from the reflected light images of the sample's individual regions therefore has higher resolution than a reflected light image of the whole sample captured in a single shot. A stitching sketch follows below.
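Assuming equally sized tiles captured in row-major order, a minimal stitching sketch looks like this; a real pipeline would additionally register and blend the tiles at their seams.

    # Illustrative tile stitching for the high-resolution ground-truth image.
    import numpy as np

    def stitch_tiles(tiles, rows, cols):
        # tiles: list of rows*cols arrays, each (h, w) or (h, w, bands),
        # ordered row by row; each tile uses the camera's full sensor.
        grid = [np.concatenate(tiles[r * cols:(r + 1) * cols], axis=1)
                for r in range(rows)]
        return np.concatenate(grid, axis=0)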
The first image synthesis model may be a GAN or another network model. Its training objective is that, for each tissue block sample, the synthesized second reflected light image produced by the model be as close as possible to the sample's high-resolution reflected light image.
As shown in FIG. 5, hyperspectral images 501 and 502 of a tissue block sample's reflection of the first light source can be obtained in the 400 nm to 1000 nm and 900 nm to 1700 nm ranges respectively. These are the reflected light images acquired by the 400 nm to 1000 nm and the 900 nm to 1700 nm hyperspectral camera while the sample is illuminated by the first light source.
The 400 nm to 1000 nm hyperspectral camera generally has high resolution, so hyperspectral image 501 in the 400 nm to 1000 nm range has high resolution, while the resolution of the 900 nm to 1700 nm camera is comparatively low, so hyperspectral image 502 in the 900 nm to 1700 nm range has relatively low resolution. On this basis, images 501 and 502 can be input into the first image synthesis model 503 obtained by machine learning training to obtain a 900 nm to 1700 nm hyperspectral image 504 for the sample, whose resolution is higher than that of image 502. A generator sketch follows below.
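The following is a minimal sketch of what the generator of such a synthesis model could look like: it upsamples the low-resolution short-wave infrared cube and fuses it with the high-resolution visible cube for spatial detail. The architecture, channel counts, and the residual design are illustrative assumptions; a GAN variant would add a discriminator and an adversarial loss on top.

    # Sketch of a super-resolution generator for the first image synthesis model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwirSuperResGenerator(nn.Module):
        def __init__(self, swir_bands, vis_bands, scale=2):
            super().__init__()
            self.scale = scale
            self.refine = nn.Sequential(
                nn.Conv2d(swir_bands + vis_bands, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, swir_bands, 3, padding=1))

        def forward(self, swir_lr, vis_hr):
            # bring the low-resolution SWIR cube up to the visible image's grid
            swir_up = F.interpolate(swir_lr, scale_factor=self.scale,
                                    mode="bilinear", align_corners=False)
            x = torch.cat([swir_up, vis_hr], dim=1)
            # residual refinement: predict detail on top of the plain upsample
            return swir_up + self.refine(x)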
Similarly, to raise the resolution of the second transmitted light image, before step S303 each second transmitted light image of the tissue block can be converted, based on that second transmitted light image and the block's first transmitted light image, into a synthesized second transmitted light image using a second image synthesis model. Accordingly, in step S303 the first transmitted light image and the synthesized second transmitted light image are input into the lesion recognition model.
The synthesized second transmitted light image has the same content as the second transmitted light image before synthesis, but a higher resolution.
The second image synthesis model is trained on the first and second transmitted light images of multiple tissue block samples, supervised by the high-resolution transmitted light images corresponding to those samples.
A high-resolution transmitted light image is a transmitted light image of the tissue block sample in the short-wave infrared band whose resolution is higher than that of the sample's second transmitted light image.
High-resolution transmitted light images are obtained in a manner similar to high-resolution reflected light images, and the second image synthesis model is trained like the first; details are not repeated here.
In order to facilitate understanding of the training process of the lesion recognition model of the present application, a training process is described as an example. As shown in fig. 6, which shows a schematic flow chart of the present application for training a lesion recognition model, the flow chart may include:
S601, obtaining at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples.
Each tissue section sample corresponds to a cut sample image in which the actual lesion area is marked.
A tissue section sample is a tissue section used for training the lesion recognition model. Accordingly, the reflected light image of a tissue section sample is the reflected light image of the tissue section serving as a training sample, and likewise the transmitted light image of a tissue section sample is the transmitted light image of that training sample. On this basis, the reflected light image and the transmitted light image of a tissue section sample can be obtained in the same manner as the reflected light image and transmitted light image of the tissue section described above.
In one possible implementation, the present application may obtain at least one first reflected light image, at least one second reflected light image, at least one first transmitted light image, and at least one second transmitted light image of a tissue section sample.
The cut sample image is likewise an image of the tissue section sample, except that the actual lesion area in the tissue section has been manually marked in it. The cut sample image can therefore also be obtained in various ways.
In one possible implementation, to improve training accuracy, the cut sample image may be a whole slide image (WSI) of the tissue section sample, where the WSI is an image of the tissue section sample acquired with a digital scanner (essentially a motorized microscope).
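For illustration, a WSI of this kind is usually read at a reduced resolution before registration; the sketch below uses the openslide-python reader, with the file name and downsample factor being illustrative assumptions.

```python
# Illustrative only: load a WSI cut sample image at a manageable resolution
# for registration; the path and downsample factor are assumptions.
import openslide

slide = openslide.OpenSlide("tissue_sample.svs")   # hypothetical file
level = slide.get_best_level_for_downsample(32)    # pick a ~32x-down level
width, height = slide.level_dimensions[level]
thumb = slide.read_region((0, 0), level, (width, height)).convert("RGB")
thumb.save("tissue_sample_lowres.png")
```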
S602, for each tissue section sample, registering the cut sample image of the sample with each reflected light image and each transmitted light image of the sample, and, based on the registration result, marking the actual lesion area annotated in the cut sample image in the reflected light image and the transmitted light image respectively.
Image registration refers to matching and overlaying different images. It can be understood that by registering the cut sample image with a reflected light image of the tissue section sample, the position of the actual lesion area of the sample within the reflected light image can be determined, so that the actual lesion area can be marked in the reflected light image. Similarly, the actual lesion area of the tissue section sample can be marked in each transmitted light image.
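The present application does not prescribe a particular registration algorithm; as one illustrative sketch, feature-based registration with OpenCV can estimate the mapping from the cut sample image to a reflected light image and warp the manually annotated lesion mask into that image (ORB features and a RANSAC homography are assumptions here, not the claimed method).

```python
# Illustrative registration sketch: ORB + RANSAC homography are assumed
# choices; any registration method could fill this role.
import cv2
import numpy as np

def transfer_lesion_mask(sample_img, lesion_mask, reflected_img):
    """Warp the lesion mask annotated on the cut sample image into the
    coordinate frame of a reflected light image of the same sample."""
    orb = cv2.ORB_create(5000)
    k1, d1 = orb.detectAndCompute(sample_img, None)
    k2, d2 = orb.detectAndCompute(reflected_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reflected_img.shape[:2]
    # Nearest-neighbour keeps the warped annotation a hard binary mask.
    return cv2.warpPerspective(lesion_mask, H, (w, h), flags=cv2.INTER_NEAREST)
```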
S603, for each tissue section sample, inputting the at least one reflected light image and at least one transmitted light image of the sample, each marked with the actual lesion area, into the network model to be trained, and obtaining the lesion prediction image of the sample output by the network model.
The lesion prediction image of the tissue section sample is marked with both the actual lesion area and a predicted lesion area.
It can be understood that, since the resolution of an image acquired by an imaging device whose photosensitive range is the short-wave infrared band may be low, when the at least one reflected light image of a tissue section sample includes a first reflected light image and a second reflected light image, the present application may first convert the second reflected light image of the sample into a synthesized second reflected light image using the aforementioned first image synthesis model, and then input the synthesized second reflected light image corresponding to the sample into the lesion recognition model to be trained.
Similarly, when the at least one transmitted light image of a tissue section sample includes a first transmitted light image and a second transmitted light image, the second transmitted light image can be converted into a synthesized second transmitted light image of relatively high resolution using the second image synthesis model. Correspondingly, the synthesized second transmitted light image corresponding to the sample is input into the lesion recognition model to be trained.
S604, judging, based on the actual lesion area and the predicted lesion area in the lesion prediction image of each tissue section sample, whether the prediction accuracy of the network model meets the training requirement; if so, determining the trained network model as the lesion recognition model and ending training; if not, adjusting the internal parameters of the network model and returning to step S603, until the prediction accuracy of the network model meets the training requirement.
There are many possible ways to judge whether the prediction accuracy of the network model meets the training requirement. For example, the accuracy with which the network model predicts lesion areas can be calculated from the actual and predicted lesion areas in the lesion prediction image of each tissue section sample: the similarity between the actual lesion area and the predicted lesion area is computed for each sample, and the average similarity over the tissue section samples is taken as the prediction accuracy. This similarity mainly concerns whether the actual and predicted lesion areas occupy the same position region in the lesion prediction image, and can be computed with any image similarity algorithm, such as cosine similarity or a histogram algorithm, without limitation here. Correspondingly, if the accuracy with which the network model predicts lesion areas exceeds a set threshold, the training requirement is determined to be met.
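As a concrete sketch of the similarity criterion just described (the Dice overlap and the 0.9 threshold below are illustrative stand-ins for whichever similarity algorithm and threshold are actually chosen):

```python
# Illustrative accuracy check: average a region similarity over all samples
# and compare with a set threshold; Dice and 0.9 are assumed choices.
import numpy as np

def dice(actual_mask, predicted_mask):
    inter = np.logical_and(actual_mask, predicted_mask).sum()
    total = actual_mask.sum() + predicted_mask.sum()
    return 2.0 * inter / total if total else 1.0

def meets_training_requirement(mask_pairs, threshold=0.9):
    # mask_pairs: one (actual, predicted) pair of boolean lesion masks per
    # tissue section sample, taken from its lesion prediction image.
    accuracy = np.mean([dice(a, p) for a, p in mask_pairs])
    return accuracy >= threshold
```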
For another example, a loss function value may be calculated with a set loss function from the actual lesion area and the predicted lesion area in the lesion prediction image of each tissue section sample, and the training requirement may be confirmed to be met once the loss function value converges. Of course, other ways of judging whether the prediction accuracy of the network model meets the training requirement are possible, without limitation here.
It should be noted that fig. 6 is only one implementation of training the lesion recognition model in the present application; the lesion recognition model may also be trained in other ways.
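Purely as an illustration of how the flow of fig. 6 might be orchestrated in code, the sketch below assumes a per-pixel segmentation network, a binary cross-entropy loss, and a Dice-style accuracy check; none of these choices is mandated by the present application.

```python
# Illustrative rendering of the fig. 6 flow (S601-S604); model, loss, and
# optimizer are assumptions, since the model family is left open here.
import torch
import torch.nn as nn

def train_lesion_model(model, loader, max_epochs=100, threshold=0.9):
    # loader yields (images, actual): images stacks the registered reflected
    # and transmitted light images as channels (S601-S602); actual is the
    # float {0,1} mask of the annotated lesion area.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    for epoch in range(max_epochs):
        scores = []
        for images, actual in loader:
            pred = model(images)              # S603: predict the lesion area
            loss = bce(pred, actual)
            opt.zero_grad()
            loss.backward()
            opt.step()                        # adjust internal parameters
            with torch.no_grad():
                p = (torch.sigmoid(pred) > 0.5).float()
                inter = (p * actual).sum()
                scores.append((2 * inter /
                               (p.sum() + actual.sum() + 1e-8)).item())
        if sum(scores) / len(scores) >= threshold:
            return model                      # S604: training requirement met
    return model
```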
The application also provides an image processing device corresponding to the image processing method.
Fig. 7 shows a schematic structural diagram of an embodiment of an image processing apparatus according to the present application; the apparatus may include:
a first image obtaining unit 701, configured to obtain at least one reflected light image of a tissue cut, where the reflected light image includes an image of the tissue cut after conversion of reflected light of a first light source by an image capturing apparatus, and the first light source and the image capturing apparatus are located on the same side of the tissue cut;
a second image obtaining unit 702, configured to obtain at least one transmitted light image of the tissue cut, where the transmitted light image includes an image of the tissue cut converted by the imaging device from transmitted light transmitted by a second light source, and the second light source and the imaging device are located on different sides of the tissue cut;
a lesion determining unit 703 for determining a lesion region in the tissue slice according to the learned light absorption characteristics of the lesion tissue and the light absorption characteristics at different positions in the tissue slice respectively represented by the at least one reflected light image and the at least one transmitted light image of the tissue slice.
In one possible implementation, the lesion determination unit includes:
a lesion model processing unit for determining a lesion area in the tissue section according to at least one reflected light image and at least one transmitted light image of the tissue section and by using a lesion recognition model; the lesion recognition model is obtained by deep learning training using at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples marked with actual lesion areas, and has learned the light absorption characteristics of diseased tissue.
As an alternative, the lesion model processing unit is specifically configured to input at least one reflected light image and at least one transmitted light image of the tissue section into a lesion recognition model, and obtain a predicted lesion image of the tissue section output by the lesion recognition model, where the predicted lesion image is an image of the tissue section and indicates a predicted lesion region in the tissue section.
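As an illustrative sketch of this unit's inference step, assuming the lesion recognition model is a per-pixel segmentation network and that the reflected and transmitted light images of the tissue section are stacked as input channels (an encoding the present application does not fix):

```python
# Illustrative inference sketch; the channel-stacking input encoding and the
# 0.5 decision threshold are assumptions.
import torch

def predict_lesion_image(model, reflected_imgs, transmitted_imgs):
    # Each argument is a list of HxW float tensors for one tissue section;
    # the model is assumed to return 1x1xHxW logits.
    x = torch.stack(reflected_imgs + transmitted_imgs).unsqueeze(0)  # 1xCxHxW
    with torch.no_grad():
        logits = model(x)
    return torch.sigmoid(logits)[0, 0] > 0.5  # boolean predicted lesion mask
```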
In yet another possible implementation manner, the first image obtaining unit is specifically configured to obtain at least one first reflected light image and at least one second reflected light image of the tissue cut, where the first reflected light image includes an image of the tissue cut with respect to reflected light of the first light source after being converted by a first imaging device, the second reflected light image includes an image of the tissue cut with respect to reflected light of the first light source after being converted by a second imaging device, the first imaging device is an imaging device whose photosensitive range includes a visible light spectrum, the second imaging device is an imaging device whose photosensitive range includes a short-wave infrared band spectrum, and the first imaging device and the second imaging device are located on the same side of the tissue cut;
the second image obtaining unit is specifically configured to obtain at least one first transmitted light image and at least one second transmitted light image of the tissue cut, where the first transmitted light image includes an image of the tissue cut transmitted by the second light source after being converted by the first imaging device, the second transmitted light image includes an image of the tissue cut transmitted by the second light source after being converted by the second imaging device, and the first imaging device and the second imaging device are both located on different sides of the tissue cut from the second light source.
In an alternative, the apparatus further comprises:
a first resolution improving unit, configured to, before the lesion identification model determines a lesion region in the tissue cut, convert, for each second reflected light image of the tissue cut, the second reflected light image into a synthesized second reflected light image according to the second reflected light image and the first reflected light image of the tissue cut by using the first image synthesis model;
the content of the synthesized second reflected light image is the same as that of the second reflected light image before synthesis, and the resolution of the synthesized second reflected light image is higher than that of the second reflected light image before synthesis;
the first image synthesis model is obtained by training according to the high-resolution reflected light images corresponding to the plurality of tissue cutting samples and by using the first reflected light images and the second reflected light images of the plurality of tissue cutting samples, the high-resolution reflected light images are reflected light images in the short infrared band range corresponding to the tissue cutting samples, and the resolution of the high-resolution reflected light images is higher than that of the second reflected light images of the tissue cutting samples.
In yet another alternative, the apparatus may further include:
a second resolution improving unit, configured to, before the lesion identification model determines a lesion region in the tissue cut block, convert, for each second transmitted light image of the tissue cut block, the second transmitted light image into a synthesized second transmitted light image according to the second transmitted light image and the first transmitted light image of the tissue cut block, and using a second image synthesis model;
the content of the synthesized second transmitted light image is the same as that of the second transmitted light image before synthesis, and the resolution of the synthesized second transmitted light image is higher than that of the second transmitted light image before synthesis;
the second image synthesis model is obtained by training according to the high-resolution transmitted light images corresponding to the plurality of tissue cutting samples and by using the first transmitted light images and the second transmitted light images of the plurality of tissue cutting samples, the high-resolution transmitted light images are transmitted light images in the short infrared band range corresponding to the tissue cutting samples, and the resolution of the high-resolution transmitted light images is higher than that of the second transmitted light images of the tissue cutting samples.
In yet another possible implementation manner, the apparatus further includes a recognition model training unit, configured to train the lesion recognition model in the following manner:
obtaining at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples, wherein each tissue section sample corresponds to a cut sample image marked with the actual lesion area;
for each tissue section sample, registering the cut sample image of the sample with each reflected light image and each transmitted light image of the sample, and, based on the registration result, marking the actual lesion area from the cut sample image in the reflected light image and the transmitted light image respectively;
for each tissue section sample, inputting the at least one reflected light image and at least one transmitted light image of the sample, each marked with the actual lesion area, into the network model to be trained to obtain the lesion prediction image of the sample output by the network model, where the lesion prediction image is marked with the actual lesion area and a predicted lesion area;
judging, based on the actual lesion area and the predicted lesion area in the lesion prediction image of each tissue section sample, whether the prediction accuracy of the network model meets the training requirement;
if the prediction accuracy of the network model meets the training requirement, determining the trained network model as the lesion recognition model;
if the prediction accuracy of the network model does not meet the training requirement, adjusting the internal parameters of the network model and then returning to the operation of inputting the at least one reflected light image and at least one transmitted light image marked with the actual lesion area into the network model to be trained, until the prediction accuracy of the network model meets the training requirement.
In yet another aspect, the present application further provides a computer device, which may be a personal computer in an image processing system, a server in a server cluster, or a node in a cloud platform, and so on. Fig. 8 is a schematic diagram illustrating an architecture of a computer device provided in the present application. In fig. 8, the computer device 800 may include: a processor 801 and a memory 802.
Optionally, the computer device may further include: a communication interface 803, an input unit 804, a display 805, and a communication bus 806.
The processor 801, the memory 802, the communication interface 803, the input unit 804 and the display 805 all communicate with each other via a communication bus 806.
In the embodiment of the present application, the processor 801 may be a central processing unit, an application specific integrated circuit, or the like.
The processor may invoke the program stored in the memory 802; specifically, the processor may perform the operations performed by the computer device in the image processing system in the above embodiments.
The memory 802 is used to store one or more programs, which may include program code containing computer operation instructions; in the embodiment of the present application, the memory stores at least a program for implementing the image processing method of any one of the above embodiments.
In one possible implementation, the memory 802 may include a program storage area and a data storage area, where the program storage area may store an operating system, the above-mentioned programs, application programs required for the image processing functions, and the like; the data storage area may store data created during use of the computer device.
The communication interface 803 may be an interface of a communication module.
The computer device may further include an input unit 804, which may include a touch sensing unit, a keyboard, and the like.
The display 805 includes a display panel, such as a touch display panel or the like.
Of course, the computer device structure shown in fig. 8 does not constitute a limitation on the computer device in the embodiments of the present application; in practical applications, the computer device may include more or fewer components than those shown in fig. 8, or some components may be combined.
In another aspect, the present application further provides a storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are loaded and executed by a processor, the image processing method in any one of the above embodiments is implemented.
The present application also proposes a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device executes the method provided in the various optional implementation manners in the aspect of the image processing method or the aspect of the image processing apparatus.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. Meanwhile, the features described in the embodiments of the present specification may be replaced or combined with each other, so that those skilled in the art can implement or use the present application. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (12)

1. An image processing method, comprising:
obtaining at least one reflected light image of a tissue cut, wherein the reflected light image comprises an image of the tissue cut after the reflected light of a first light source is converted by an image pickup device, and the first light source and the image pickup device are positioned on the same side of the tissue cut;
obtaining at least one transmitted light image of the tissue cut, wherein the transmitted light image comprises an image formed by the transmitted light of the tissue cut transmitted by a second light source after being converted by the image pickup device, and the second light source and the image pickup device are positioned on different sides of the tissue cut;
determining a lesion area in the tissue section according to the learned light absorption characteristics of the lesion tissue and the light absorption characteristics at different positions in the tissue section, which are respectively characterized by at least one reflected light image and at least one transmitted light image of the tissue section.
2. The method of claim 1, wherein determining a lesion area in the tissue section based on the learned light absorption characteristics of the diseased tissue and the light absorption characteristics at different locations in the tissue section characterized by each of the at least one reflected light image and the at least one transmitted light image of the tissue section comprises:
determining a lesion area in the tissue section by using a lesion identification model according to at least one reflected light image and at least one transmitted light image of the tissue section;
the lesion recognition model is obtained by deep learning training by using at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples marked with actual lesion areas, and the lesion recognition model learns the light absorption characteristics of lesion tissues.
3. The method of claim 2, wherein determining a lesion area in the tissue section from the at least one reflected light image and the at least one transmitted light image of the tissue section using a lesion recognition model comprises:
and inputting at least one reflected light image and at least one transmitted light image of the tissue section into a lesion identification model to obtain a lesion prediction image of the tissue section output by the lesion identification model, wherein the lesion prediction image is the image of the tissue section and is marked with a predicted lesion region in the tissue section.
4. The method of claim 1 or 2, wherein said obtaining at least one reflected light image of a tissue section comprises:
obtaining at least one first reflected light image and at least one second reflected light image of a tissue cut, wherein the first reflected light image comprises an image of the tissue cut after conversion of reflected light of a first light source by a first camera device, the second reflected light image comprises an image of the tissue cut after conversion of the reflected light of the first light source by a second camera device, the first camera device is a camera device with a photosensitive range including a visible light spectrum, the second camera device is a camera device with a photosensitive range including a short-wave infrared band spectrum, and the first camera device, the second camera device and the first light source are located on the same side of the tissue cut;
the obtaining at least one transmitted light image of the tissue section comprises:
and obtaining at least one first transmitted light image and at least one second transmitted light image of the tissue cut, wherein the first transmitted light image comprises an image of the tissue cut transmitted by a second light source after being converted by the first image pickup device, the second transmitted light image comprises an image of the tissue cut transmitted by a second light source after being converted by the second image pickup device, and the first image pickup device and the second image pickup device are both positioned on different sides of the tissue cut from the second light source.
5. The method of claim 4, further comprising, prior to said determining a focal region in said tissue section:
for each second reflected light image of the tissue cut block, converting the second reflected light image into a synthesized second reflected light image according to the second reflected light image and the first reflected light image of the tissue cut block by using a first image synthesis model;
wherein the synthesized second reflected light image has the same content as the second reflected light image before synthesis, and the resolution of the synthesized second reflected light image is higher than the resolution of the second reflected light image before synthesis;
the first image synthesis model is obtained by training according to the high-resolution reflected light images corresponding to the plurality of tissue cutting samples and by using the first reflected light images and the second reflected light images of the plurality of tissue cutting samples, the high-resolution reflected light images are reflected light images in the short infrared band range corresponding to the tissue cutting samples, and the resolution of the high-resolution reflected light images is higher than that of the second reflected light images of the tissue cutting samples.
6. The method of claim 4, further comprising, prior to said determining a focal region in said tissue section:
for each second transmitted light image of the tissue section, converting the second transmitted light image into a synthesized second transmitted light image according to the second transmitted light image and the first transmitted light image of the tissue section by using a second image synthesis model;
wherein the synthesized second transmitted light image has the same content as the second transmitted light image before synthesis, and the resolution of the synthesized second transmitted light image is higher than that of the second transmitted light image before synthesis;
the second image synthesis model is obtained by training according to the high-resolution transmitted light images corresponding to the plurality of tissue cutting samples and by using the first transmitted light images and the second transmitted light images of the plurality of tissue cutting samples, the high-resolution transmitted light images are transmitted light images in the short infrared band range corresponding to the tissue cutting samples, and the resolution of the high-resolution transmitted light images is higher than that of the second transmitted light images of the tissue cutting samples.
7. The method of claim 2 or 3, wherein the lesion recognition model is trained by:
obtaining at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples, wherein each tissue section sample corresponds to a cut sample image marked with an actual lesion area;
for each tissue section sample, registering the cut sample image of the tissue section sample with each reflected light image and each transmitted light image of the tissue section sample respectively, and marking the corresponding actual lesion area in the cut sample image in the reflected light image and the transmitted light image respectively in combination with the registration result;
for each tissue section sample, inputting at least one reflected light image and at least one transmitted light image which are marked with the actual lesion area and correspond to the tissue section sample into a network model to be trained to obtain a lesion prediction image of the tissue section sample output by the network model, wherein the actual lesion area and a predicted lesion area are marked in the lesion prediction image;
judging whether the prediction accuracy of the network model meets the training requirement based on the actual lesion area and the predicted lesion area in the lesion prediction image of each tissue section sample;
if the prediction accuracy of the network model meets the training requirement, determining the trained network model as the lesion recognition model;
and if the prediction accuracy of the network model does not meet the training requirement, returning to execute the operation of inputting the at least one reflected light image and the at least one transmitted light image which are marked with the actual lesion area and correspond to the tissue section sample into the network model to be trained after adjusting the internal parameters of the network model, until the prediction accuracy of the network model meets the training requirement.
8. An image processing apparatus characterized by comprising:
the device comprises a first image obtaining unit, a second image obtaining unit and a control unit, wherein the first image obtaining unit is used for obtaining at least one reflected light image of a tissue cut block, the reflected light image comprises an image of the tissue cut block after the reflected light of a first light source is converted by an image pickup device, and the first light source and the image pickup device are positioned on the same side of the tissue cut block;
a second image obtaining unit, configured to obtain at least one transmitted light image of the tissue cut, where the transmitted light image includes an image of the tissue cut after being converted by the imaging device through transmitted light transmitted by a second light source, and the second light source and the imaging device are located on different sides of the tissue cut;
and the lesion determination unit is used for determining a lesion area in the tissue section according to the learned light absorption characteristics of the diseased tissue and the light absorption characteristics at different positions in the tissue section, which are respectively characterized by the at least one reflected light image and the at least one transmitted light image of the tissue section.
9. The apparatus of claim 8, wherein the lesion determination unit comprises:
the lesion model processing unit is used for determining a lesion area in the tissue cut block according to at least one reflected light image and at least one transmitted light image of the tissue cut block and by using a lesion recognition model; the lesion recognition model is obtained by deep learning training using at least one reflected light image and at least one transmitted light image of each of a plurality of tissue section samples marked with actual lesion areas, and has learned the light absorption characteristics of diseased tissue.
10. The apparatus according to claim 8 or 9, wherein the first image obtaining unit is specifically configured to obtain at least one first reflected light image and at least one second reflected light image of the tissue cut, wherein the first reflected light image includes an image of the tissue cut through a first imaging device after conversion of reflected light of a first light source by the tissue cut, the second reflected light image includes an image of the tissue cut through a second imaging device after conversion of reflected light of the first light source by the tissue cut, the first imaging device is an imaging device whose sensing range includes a visible light spectrum, the second imaging device is an imaging device whose sensing range includes a short-wave infrared band spectrum, and the first imaging device and the second imaging device are located on the same side of the tissue cut;
the second image obtaining unit is specifically configured to obtain at least one first transmitted light image and at least one second transmitted light image of the tissue cut, where the first transmitted light image includes an image of the tissue cut transmitted by the second light source after being converted by the first imaging device, the second transmitted light image includes an image of the tissue cut transmitted by the second light source after being converted by the second imaging device, and the first imaging device and the second imaging device are both located on different sides of the tissue cut from the second light source.
11. A computer device comprising a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, which when executed is particularly configured to implement the image processing method of any of claims 1 to 7.
12. A storage medium storing a program for implementing the image processing method according to any one of claims 1 to 7 when executed.
CN202011147699.6A 2020-10-23 2020-10-23 Image processing method, device, equipment and storage medium Pending CN112270662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011147699.6A CN112270662A (en) 2020-10-23 2020-10-23 Image processing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112270662A true CN112270662A (en) 2021-01-26

Family

ID=74342621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011147699.6A Pending CN112270662A (en) 2020-10-23 2020-10-23 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112270662A (en)


Legal Events

PB01 Publication
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40037747; Country of ref document: HK
TA01 Transfer of patent application right: Effective date of registration: 20220419; Address after: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors; Applicant after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.; Applicant after: FOURTH HOSPITAL OF HEBEI MEDICAL University (TUMOR HOSPITAL OF HEBEI PROVINCE); Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors; Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
SE01 Entry into force of request for substantive examination