WO2020238626A1 - Image state determination method, apparatus, device, system and computer storage medium


Info

Publication number
WO2020238626A1
WO2020238626A1 · PCT/CN2020/090007 · CN2020090007W
Authority
WO
WIPO (PCT)
Prior art keywords
image
state
evaluated
pathological
definition
Prior art date
Application number
PCT/CN2020/090007
Other languages
English (en)
French (fr)
Inventor
韩宝昌
韩骁
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP20814634.0A (patent EP3979194A4)
Publication of WO2020238626A1
Priority to US17/373,416 (patent US11921278B2)

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0052Optical details of the image generation
    • G02B21/006Optical details of the image generation focusing arrangements; selection of the plane to be imaged
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/241Devices for focusing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method, device, equipment, system and computer storage medium for determining an image state based on pathological images.
  • Pathological examinations have been widely used in clinical work and scientific research.
  • The main way for medical staff to make a pathological diagnosis is to observe slices: after magnifying them 40 to 400 times, they examine the cell morphology and tissue structure to reach a diagnosis.
  • Smart microscopes and digital pathology scanners are the most commonly used tools by medical staff.
  • Smart microscopes usually have their own cameras, which can continuously collect images of the microscope field of view.
  • The collected images are used in a variety of microscope tasks, such as automatic image saving and image-based real-time artificial intelligence (AI) assisted diagnosis.
  • The camera of a smart microscope is usually a high-resolution, high-speed industrial camera: the frame rate of image acquisition is high (several to dozens of frames per second) and each image is large (a single image can exceed 4 million pixels in total), so a large amount of image data is generated in a short time. If the state of each image could be evaluated, the images collected by the smart microscope could be screened based on that state, improving the processing efficiency of microscope tasks; however, the related art provides no scheme for determining the state of an image.
  • The embodiments of this application provide a method, device, equipment, system and computer storage medium for determining an image state based on pathological images, which can evaluate the movement state and sharpness state of a collected image so that the determined image state can serve as the basis for image screening, adapting to the needs of different microscope tasks and improving task processing efficiency.
  • An embodiment of the present application provides a method for determining an image state based on a pathological image, including:
  • acquiring a pathological image set, wherein the pathological image set includes at least an image to be evaluated and an associated image, and the associated image and the image to be evaluated are continuous frame images;
  • determining a first state corresponding to the image to be evaluated according to the pathological image set, wherein the first state is used to indicate a movement change of the image to be evaluated; and
  • if the first state is a static state, determining a second state corresponding to the image to be evaluated according to the pathological image set, wherein the second state is used to indicate a change in the definition of the image to be evaluated.
  • An embodiment of the present application also provides an image state determination device, including:
  • An acquiring module configured to acquire a pathological image set through a microscope, wherein the pathological image set includes at least an image to be evaluated and an associated image, and the associated image and the image to be evaluated are continuous frame images;
  • a determining module configured to determine a first state corresponding to the image to be evaluated according to the set of pathological images acquired by the acquiring module, wherein the first state is used to indicate a movement change of the image to be evaluated;
  • the determining module is further configured to determine a second state corresponding to the image to be evaluated according to the pathological image set if the first state is a static state, wherein the second state is used to indicate a change in the sharpness of the image to be evaluated.
  • the embodiment of the application also provides an intelligent microscope system, which includes an image acquisition module, an image processing analysis module, a pathology analysis module, a storage module, and a transmission module;
  • the image acquisition module is configured to acquire a pathological image set, wherein the pathological image set includes at least an image to be evaluated and an associated image, and the associated image and the image to be evaluated are continuous frame images;
  • the image processing and analysis module is configured to determine a first state corresponding to the image to be evaluated according to the set of pathological images, wherein the first state is used to indicate a movement change of the image to be evaluated;
  • and to determine, if the first state is a static state, a second state corresponding to the image to be evaluated according to the pathological image set, wherein the second state is used to indicate a change in the definition of the image to be evaluated;
  • the storage module is configured to store the image to be evaluated if the first state is a transition from the moving state to the static state,
  • or if the second state is a transition from the blurred state to the clear state;
  • the pathological analysis module is configured to perform pathological analysis on the image to be evaluated if the first state is a transition from the moving state to the static state,
  • or if the second state is a transition from the blurred state to the clear state;
  • the transmission module is configured to transmit the image to be evaluated if the first state is a transition from the moving state to the static state, a transition from the static state to the moving state, or the moving state,
  • or if the second state is a transition from the blurred state to the clear state or a transition from the clear state to the blurred state.
  • the embodiment of the present application also provides a terminal device, including: a memory and a processor;
  • the memory is configured to store a computer program
  • the processor is configured to execute the method for determining an image state based on a pathological image provided in the embodiments of the present application when running the computer program stored in the memory.
  • An embodiment of the present application also provides a computer storage medium in which computer-executable instructions are stored, and the computer-executable instructions are used to execute the pathological image-based image state determination method provided in the embodiments of the present application.
  • a method for determining an image state based on pathological images is provided.
  • a pathological image set is acquired.
  • The pathological image set includes at least an image to be evaluated and a related image, where the related image is the adjacent previous frame image of the image to be evaluated.
  • A first state corresponding to the image to be evaluated is then determined according to the pathological image set, where the first state indicates the movement change of the image to be evaluated. If the first state is a static state, a second state corresponding to the image to be evaluated is determined according to the pathological image set, where the second state indicates the change in definition of the image to be evaluated.
  • In this way, the pathological images collected by the microscope camera can be screened according to the image state and the task type to help complete task objectives, reducing the difficulty of image processing and improving task processing efficiency.
  • FIG. 1 is a schematic diagram of an architecture of an image evaluation system in an embodiment of the application
  • FIG. 2 is a schematic diagram of a process of the image evaluation system in an embodiment of the application
  • FIG. 3 is a schematic flowchart of a method for determining an image state based on a pathological image in an embodiment of the application
  • FIG. 4 is a schematic diagram of a process of image movement evaluation in an embodiment of the application.
  • FIG. 5 is a schematic diagram of a coordinate of the image center of the source area image in an embodiment of the application.
  • FIG. 6 is a schematic diagram of a comparison between a source area image and a target area image in an embodiment of the application
  • FIG. 7 is a schematic flowchart of a method for image clarity evaluation in an embodiment of the application.
  • FIG. 8 is a schematic flowchart of a processing method based on pathological images in an embodiment of the application.
  • FIG. 9 is a schematic diagram of a flow chart of the task of automatically saving images in an embodiment of the application.
  • FIG. 10 is a schematic flowchart of a method for processing pathological images in an embodiment of the application.
  • FIG. 11 is a schematic flowchart of a real-time artificial intelligence assisted diagnosis task in an embodiment of the application.
  • FIG. 12 is a schematic flowchart of a processing method based on pathological images in an embodiment of the application.
  • FIG. 13 is a schematic flowchart of a remote sharing task of the microscope field of view in an embodiment of the application
  • FIG. 14 is a schematic diagram of the composition structure of an image state determination device in an embodiment of the application.
  • FIG. 15 is a schematic diagram of the composition structure of an image state determining device in an embodiment of the application.
  • FIG. 16 is a schematic diagram of the composition structure of an image state determining device in an embodiment of the application.
  • FIG. 17 is a schematic diagram of the composition structure of an image state determination device in an embodiment of the application.
  • FIG. 18 is a schematic diagram of the composition structure of a terminal device in an embodiment of the application.
  • The embodiments of the application provide a method, device and system for determining an image state based on pathological images, which can evaluate the movement state and sharpness state of collected images so that the images can be handled appropriately based on their different states, reducing the difficulty of image processing and improving task processing efficiency.
  • the image state determination method based on pathological images and the processing method based on pathological images provided by this application can be applied to the field of Artificial Intelligence (AI), and can be applied to the field of AI-based medicine and video surveillance.
  • For example, the microscope adjusts focus automatically: with the current field of view held fixed, the microscope's built-in camera continuously acquires images, and the focus knob is turned automatically according to changes in image clarity, realizing microscope auto-focus.
  • As another example, road traffic video can be monitored and static images automatically removed, reducing the workload of subsequent video analysis.
  • The most common medical images in the medical field include, but are not limited to, angiography images, cardiovascular imaging images, computerized tomography (CT) images, B-ultrasound images, and pathological images.
  • the pathological images are usually collected by smart microscopes.
  • the pathological images include the appearance images of biopsy tissues and cell structure images.
  • When reading a slide, the doctor only needs to give voice instructions, and the AI can automatically read the slide, collect images, and assist the doctor in diagnosis. After the doctor finishes reading, he or she gives a "generate report" instruction, and the smart microscope takes a screenshot of the microscope view, fills the diagnosis results into a report template, and automatically generates the report for the doctor to review and release, so that report generation, originally the most troublesome step, becomes fast and worry-free. Smart microscopes play an important role in mitotic cell detection, immunohistochemical quantitative analysis, cancer area monitoring, and auxiliary diagnosis procedures.
  • FIG. 1 is a schematic diagram of the structure of the image evaluation system in an embodiment of the application
  • FIG. 2 is a schematic flowchart of the image evaluation system in an embodiment of the application, which will be described with reference to FIG. 1 and FIG. 2.
  • step S1 multiple continuous images are collected by the camera
  • the image can be collected by the camera built into the terminal, or by a camera independent of the terminal.
  • step S2 it is determined whether the current image is the first image, if it is, step S3 is executed, otherwise, step S4 is executed;
  • the terminal device determines whether the current image is the first image, or the terminal device sends the collected image to the server and the server makes the determination. If it is the first image, step S3 is performed; if it is not the first image, step S4 is performed;
  • step S3 it is determined that the current image belongs to a moving state
  • step S4 the movement state of the current image is evaluated
  • step S5 if it is detected that the current image is moving, it is determined that the current image belongs to the moving state;
  • step S6 if it is detected that the current image has stopped moving, it is determined that the current image is in a transition from the moving state to the static state;
  • step S7 if it is detected that the current image has started to move, it is determined that the current image is in a transition from the static state to the moving state;
  • step S8 if it is detected that the current image is in a static state, then the sharpness state of the current image is evaluated;
  • step S9 if it is detected that the sharpness of the current image has not changed, it is determined that the current image is in a static state
  • step S10 if it is detected that the current image is becoming clearer, it is determined that the current image is in the focusing state, transitioning from the blurred state to the clear state;
  • step S11 if it is detected that the current image is becoming more blurred, it is determined that the current image is in the focusing state, transitioning from the clear state to the blurred state.
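  • The per-frame decision flow of steps S1 to S11 can be sketched as a single classification function. This is a minimal illustration, not the patent's implementation: the boolean inputs stand in for the similarity- and sharpness-based checks described later, and the state names are chosen only for readability.

```python
# Minimal sketch of the per-frame state decision in steps S1-S11. The inputs
# `moved`, `was_moving`, `got_clearer` and `got_blurrier` stand in for the
# similarity- and sharpness-based checks; the state names are illustrative.
def classify_frame(frame_index: int, moved: bool, was_moving: bool,
                   got_clearer: bool, got_blurrier: bool) -> str:
    if frame_index == 0:
        return "moving"                      # S2/S3: the first image is treated as moving
    if moved:
        # S5: still moving; S7: a static image has started to move
        return "moving" if was_moving else "static_to_moving"
    if was_moving:
        return "moving_to_static"            # S6: the image has just stopped moving
    # S8: the image is static, so evaluate its sharpness change
    if got_clearer:
        return "focusing_blur_to_clear"      # S10
    if got_blurrier:
        return "focusing_clear_to_blur"      # S11
    return "static"                          # S9: sharpness unchanged
```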
  • terminal devices include, but are not limited to, smart microscopes, tablet computers, notebook computers, handheld computers, mobile phones, voice interaction devices, and personal computers (PC), which are not limited here.
  • the intelligent microscope incorporates AI's vision, voice and natural language processing technology.
  • The doctor can easily input voice commands, and the AI can automatically recognize, detect, quantify and generate reports, displaying the detection results in the doctor's eyepiece in real time; timely reminders are given without interrupting the doctor's reading process, improving the doctor's diagnostic efficiency and accuracy.
  • FIG. 3 is a flowchart of the method for determining the image state based on the pathological image provided by the embodiment of the application. Please refer to FIG. 3, including:
  • the pathological image set includes at least an image to be evaluated and an associated image
  • the associated image and the image to be evaluated are continuous frame images.
  • A terminal device collects a pathological image set through a camera, and the image state determination device obtains the pathological image set.
  • The pathological image set includes multiple continuous images; that is, the associated image can have multiple frames, and the related images can be the adjacent previous frames collected before the image to be evaluated.
  • the image state determination device can be deployed on a terminal device, for example, on a smart microscope, or on a server, which is not limited here.
  • The image state determining device evaluates the movement state of the image to be evaluated, thereby obtaining the first state, which indicates the movement change of the image to be evaluated. It should be understood that at least three consecutive frames in the pathological image set are needed to assess the movement state; that is, the associated images are two consecutive frames that, together with the image to be evaluated, constitute three consecutive frames, for example the image to be evaluated, its previous frame, and the frame before that.
  • If the image state determining device determines that the first state is a static state, it continues to evaluate the sharpness state of the image to be evaluated, thereby obtaining the second state, which represents the sharpness change of the image to be evaluated. It should be understood that at least two consecutive images in the pathological image set are needed to assess the sharpness state; that is, the associated image is the previous frame of the image to be evaluated.
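  • As a rough sketch of such a two-frame sharpness comparison: the variance-of-Laplacian score below is an assumed sharpness measure chosen for illustration (the text does not prescribe one), and the tolerance `eps` is likewise an assumption.

```python
import numpy as np

# Assumed sharpness measure for illustration: variance of a 3x3 Laplacian
# response. A sharper image has stronger edges and hence a larger variance.
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def sharpness(gray: np.ndarray) -> float:
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):                      # correlate with the 3x3 kernel
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def second_state(prev_gray: np.ndarray, cur_gray: np.ndarray,
                 eps: float = 1e-3) -> str:
    """Classify the sharpness change between two consecutive grayscale frames."""
    prev_s, cur_s = sharpness(prev_gray), sharpness(cur_gray)
    if abs(cur_s - prev_s) <= eps * max(prev_s, 1.0):
        return "static"                      # sharpness unchanged (S9)
    return "blur_to_clear" if cur_s > prev_s else "clear_to_blur"  # S10 / S11
```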
  • Table 1 shows the definition and description of image states based on microscope operation.
  • The first state includes four types of image states, namely a static state, a moving state, a transition from the moving state to the static state, and a transition from the static state to the moving state;
  • the second state includes two types of focusing states: one in which the image switches from the clear state to the blurred state, and one in which it switches from the blurred state to the clear state.
  • the above six image states can truly reflect the doctor's operation on the microscope and the changes in the microscope field of view, so that the images can be evaluated in real time.
  • the pathological images collected by the microscope camera can be screened to assist in completing the task objectives, reducing the difficulty of image processing and improving the efficiency of task processing.
  • In some embodiments, the associated image includes two consecutive frames, namely a first associated image and a second associated image. Accordingly, the first state corresponding to the image to be evaluated can be determined in the following manner:
  • If the similarity between the image to be evaluated and the first associated image is greater than the similarity threshold, the similarity between the first associated image and the second associated image is acquired, where the second associated image belongs to the pathological image set and is the adjacent previous frame of the first associated image;
  • if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, it is determined that the first state is a transition from the moving state to the static state.
  • the first state corresponding to the image to be evaluated can be determined in the following manner:
  • If the similarity between the image to be evaluated and the first related image is less than or equal to the similarity threshold, the similarity between the first related image and the second related image is obtained, where the second related image is the adjacent previous image of the first related image;
  • if the similarity between the first associated image and the second associated image is greater than the similarity threshold, it is determined that the first state is a transition from the static state to the moving state;
  • if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, it is determined that the first state is the moving state.
  • FIG. 4 is a schematic diagram of a flowchart of image movement evaluation in an embodiment of this application. Please refer to FIG. 4, including:
  • step A1 multiple frames of continuous images are acquired by the microscope camera.
  • step A2 determine whether the current image (ie, the image to be evaluated) has moved relative to the previous image (the first associated image), if it has moved, go to step A6; if it does not move, go to step A3.
  • The judgment method is to obtain the similarity between the image to be evaluated and the first associated image; if this similarity is greater than the similarity threshold, it is determined that no movement has occurred between the image to be evaluated and the first associated image, and step A3 is executed.
  • step A3 it is determined whether the previous image (ie the first associated image) has moved relative to the previous image (the second associated image), if it has moved, step A5 is executed; if no movement has occurred, step A4 is executed.
  • The judgment method is to obtain the similarity between the first related image and the second related image. If this similarity is greater than the similarity threshold, it is determined that no movement has occurred between the first related image and the second related image, and step A4 is executed; if this similarity is less than or equal to the similarity threshold, it is determined that movement has occurred, and step A5 is executed.
  • step A4 it is determined that the first state of the current image (ie, the image to be evaluated) is the static state.
  • step A5 it is determined that the first state of the current image (i.e., the image to be evaluated) is a transition from the moving state to the static state.
  • the similarity threshold can be set to 0.9 or another value, which is not limited here.
  • step A6 determine whether the previous image (ie, the first related image) has moved relative to the previous image (the second related image), if it has moved, go to step A8; if it does not move, go to step A7 .
  • step A7 it is determined that the first state of the current image (i.e., the image to be evaluated) is a transition from the static state to the moving state.
  • step A8 it is determined that the first state of the current image (ie, the image to be evaluated) is the moving state.
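  • The branching in steps A1 to A8 reduces to four cases over two similarity comparisons. A minimal sketch follows; the state names are illustrative, and the default threshold of 0.9 is taken from the example value mentioned in the text.

```python
# Sketch of the decision flow in steps A1-A8: given the similarity of the
# current image to the previous frame (sim_cur_prev) and of the previous
# frame to the frame before it (sim_prev_prev2), classify the first state.
def first_state(sim_cur_prev: float, sim_prev_prev2: float,
                threshold: float = 0.9) -> str:
    cur_still = sim_cur_prev > threshold      # A2: current frame did not move
    prev_still = sim_prev_prev2 > threshold   # A3/A6: previous frame did not move
    if cur_still and prev_still:
        return "static"                       # A4
    if cur_still:
        return "moving_to_static"             # A5: just stopped moving
    if prev_still:
        return "static_to_moving"             # A7: just started moving
    return "moving"                           # A8
```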
  • the correlation between the two images can be evaluated based on the similarity, so as to provide a reasonable and reliable implementation method for the solution.
  • the similarity between the image to be evaluated and the first associated image can be obtained in the following manner:
  • A source area pathological image set is extracted from the image to be evaluated and a target area pathological image set is extracted from the first associated image, where the target area pathological image set includes M target area images, and the size of a target area image is smaller than the size of a source area image;
  • if both the first source area image and the first target area image belong to the background image, a second source area image is extracted from the source area pathological image set and a second target area image is extracted from the target area pathological image set, and it is detected whether the second source area image and the second target area image belong to the background image;
  • if the first source area image and the first target area image do not both belong to the background image, the similarity between the first source area image and the first target area image is calculated, and the calculated similarity is used as the similarity between the image to be evaluated and the first associated image.
  • FIG. 5 is a schematic diagram of the center coordinates of the source area images in an embodiment of the application. Please refer to FIG. 5.
  • M target area images are selected from the first associated image to form the target area pathological image set. It is assumed here that 9 target area images are selected, and the center coordinates of each area image are defined relative to the entire image to be evaluated.
  • In the i-th loop, the i-th source area image is extracted from the image to be evaluated according to the size of the area image and the i-th center coordinates.
  • The first source area image is extracted from the image to be evaluated, and the first target area image is extracted from the first associated image, and both are checked for being background images.
  • If neither area image is a background image, the template matching method is used to calculate the similarity between the two area images. If the calculated similarity is greater than the similarity threshold, the two successive frames are considered not to have moved; if the calculated similarity is less than or equal to the similarity threshold, they are considered to have moved. In either case, the traversal stops.
  • If every pair of area images belongs to the background, the two frames are considered to be background images between which no relative movement has occurred.
  • The calculation of the similarity between the first associated image and the second associated image is similar to the calculation of the similarity between the image to be evaluated and the first associated image, and is not repeated here.
  • In this way, the image is divided into several areas and the similarity calculation is performed on those areas rather than directly on the entire image. If all areas are background images, the entire image most likely contains no useful information. Moreover, because the size of an area is much smaller than the size of the entire image, the evaluation can be completed in a short time even though the time complexity of the template matching method is relatively high.
  • whether the second source area image and the second target area image belong to the background image can be detected in the following manner:
  • If the standard deviation of the pixel values of the second source area image is less than or equal to the standard deviation threshold, it is determined that the second source area image belongs to the background image;
  • if the standard deviation of the pixel values of the second target area image is less than or equal to the standard deviation threshold, it is determined that the second target area image belongs to the background image.
  • The method of judging a background image is as follows. If the source area image and the target area image are red-green-blue (RGB) images, they first need to be converted to grayscale images. Based on the grayscale image, the standard deviation of the pixel values of the target area image and the standard deviation of the pixel values of the source area image are calculated respectively. If a standard deviation is less than or equal to the given standard deviation threshold, the corresponding area image is a background image.
  • the standard deviation of pixel values is calculated as follows:
  • σ = sqrt( (1/(M×N)) × Σ_{i=1}^{M} Σ_{j=1}^{N} (P(i,j) − μ)² )
  • where σ represents the standard deviation of the pixel values, M×N represents the size of the area image, P(i,j) represents the pixel value of the i-th row and j-th column in the area image, and μ represents the mean pixel value of the area image.
  • the standard deviation of pixel values can better indicate the change of the image, and truly reflect the degree of dispersion between pixels in the image, thereby improving the accuracy of detection.
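The background check above can be sketched as follows. This is a minimal pure-numpy illustration; the threshold value 10.0 is an assumed example, not a value specified by this application.

```python
import numpy as np

def is_background(region, std_threshold=10.0):
    """Decide whether a grayscale area image is background.

    Implements the rule above: compute the standard deviation of the
    pixel values P(i, j) over the M x N area and compare it with a
    given threshold (std_threshold here is an assumed example value).
    """
    region = region.astype(np.float64)
    mu = region.mean()                             # mean pixel value
    sigma = np.sqrt(((region - mu) ** 2).mean())   # population std deviation
    return bool(sigma <= std_threshold)

# A flat (constant) region counts as background; a textured one does not.
flat = np.full((32, 32), 200, dtype=np.uint8)
textured = np.zeros((32, 32), dtype=np.uint8)
textured[::2, :] = 255
```

For an RGB area image, a grayscale conversion would be applied first, as described above.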
  • the similarity between the first source area image and the first target area image can be calculated in the following manner:
  • an image matrix is calculated, where the image matrix includes multiple elements
  • the similarity between the first source area image and the first target area image is determined according to the image matrix, where the similarity between the first source area image and the first target area image is the maximum value of the elements in the image matrix.
  • the same approach can be used for each source area image and each target area image.
  • the size of the target area image (w×h) is smaller than the size of the source area image (W×H), and the target area image needs to slide across the entire source area image. It therefore slides (W−w+1) times in the horizontal direction and (H−h+1) times in the vertical direction, so the result of template matching is an image matrix of size (W−w+1)×(H−h+1), denoted as R, which can be calculated as follows:
  • R(x,y) = Σ_{x′,y′} [ I′2(x′,y′) × I′1(x+x′, y+y′) ] / sqrt( Σ_{x′,y′} I′2(x′,y′)² × Σ_{x′,y′} I′1(x+x′, y+y′)² )
  • where R(x,y) represents the element value of matrix R at (x,y); I1 represents the source area image and I′1 the normalized source area image; I2 represents the target area image and I′2 the normalized target area image;
  • the value range of x is an integer greater than or equal to 0 and less than or equal to (W−w), and the value range of y is an integer greater than or equal to 0 and less than or equal to (H−h);
  • the value range of x′ is an integer greater than or equal to 0 and less than w, and the value range of y′ is an integer greater than or equal to 0 and less than h. That is, within the source area image only the w×h area starting at (x,y) is operated on, while the entire target area image is operated on.
  • the value range of the elements in the image matrix is 0 to 1, and the largest value is selected as the similarity of the two images. The greater the similarity, the more similar the two images are.
  • the template matching algorithm used in this application is the normalized correlation coefficient matching method (TM_CCOEFF_NORMED).
  • the square difference matching method (CV_TM_SQDIFF), the normalized square difference matching method (CV_TM_SQDIFF_NORMED), the correlation matching method (CV_TM_CCORR), the normalized correlation matching method (CV_TM_CCORR_NORMED) or the correlation coefficient matching method (CV_TM_CCOEFF) can also be used.
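As an illustration only, the normalized correlation coefficient criterion can be sketched in pure numpy (in practice a library routine such as OpenCV's matchTemplate would be used). The test image and the cut-out offsets below are arbitrary examples.

```python
import numpy as np

def match_template_ccoeff_normed(source, target):
    """Slide the w x h target over the W x H source and return the
    (W - w + 1) x (H - h + 1) matrix R of normalized correlation
    coefficients (the TM_CCOEFF_NORMED criterion), as a plain sketch."""
    source = source.astype(np.float64)
    target = target.astype(np.float64)
    H, W = source.shape            # rows x cols
    h, w = target.shape
    t = target - target.mean()     # zero-mean target (I'2)
    R = np.zeros((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = source[y:y + h, x:x + w]
            p = patch - patch.mean()            # zero-mean source window (I'1)
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            R[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return R

rng = np.random.default_rng(0)
src = rng.integers(0, 256, size=(20, 20)).astype(np.float64)
tgt = src[5:13, 7:15].copy()        # the target is cut out of the source
R = match_template_ccoeff_normed(src, tgt)
similarity = R.max()                # similarity = maximum element of R
```

When the target is an exact cut-out of the source, the maximum of R is 1 at the matching position, which is why the maximum element is taken as the similarity of the two images.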
  • the template matching algorithm can effectively distinguish the movement and jitter of the microscope field of view.
  • the shaking of the ground or desktop will cause the microscope field of view to shake, causing a slight deviation between two consecutive images, while the deviation caused by man-made movement is usually very large. Therefore, when using the template matching method, W×H and w×h need to be set reasonably. It can be approximately considered that an offset less than (W−w)/2 in the horizontal direction and less than (H−h)/2 in the vertical direction belongs to jitter, while an offset greater than or equal to (W−w)/2 in the horizontal direction or greater than or equal to (H−h)/2 in the vertical direction belongs to movement.
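The jitter-versus-movement rule above can be stated directly in code; the values of W, H, w, h and the offsets below are example numbers only.

```python
def classify_offset(dx, dy, W, w, H, h):
    """Classify the offset between two consecutive frames.

    Per the rule above: offsets below (W - w) / 2 horizontally and
    (H - h) / 2 vertically count as jitter; an offset at or beyond
    either bound counts as movement.
    """
    if abs(dx) < (W - w) / 2 and abs(dy) < (H - h) / 2:
        return "jitter"
    return "movement"
```

For example, with W = H = 100 and w = h = 80, the bound is 10 pixels in each direction.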
  • the second state corresponding to the image to be evaluated can be determined in the following manner:
  • the definition of the reference image is acquired
  • the second state is determined to be the focus state, where the focus state is a transition from the clear state to the blurred state, or a transition from the blurred state to the clear state.
  • FIG. 7 is a schematic flowchart of a method of image definition evaluation in an embodiment of this application. Please refer to FIG. 7, including:
  • step B1 multiple frames of continuous images are collected by the microscope camera.
  • step B2 it is determined whether the sharpness of the current image (ie the image to be evaluated) relative to the previous image (the first associated image) has changed, if there is a change, step B3 is executed; if there is no change, step B4 is executed.
  • the judgment method is to first obtain the definition of the image to be evaluated and the definition of the first associated image, and then determine whether they meet the first preset condition. If the first preset condition is met, it is determined that the definition of the image to be evaluated relative to the first associated image has changed, and step B3 is executed. Conversely, if the first preset condition is not met, it is determined that the definition has not changed, and step B4 is executed.
  • step B3 it is determined whether the sharpness of the current image (ie, the image to be evaluated) relative to the reference image has changed.
  • the judgment method is to first obtain the definition of the reference image, and then determine whether the definition of the reference image and the definition of the image to be evaluated meet the second preset condition; if the second preset condition is satisfied, it is determined that the second state is the focus state.
  • if the definition has decreased, step B5 is executed to determine that the focus state is a transition from the clear state to the blurred state;
  • if the definition has increased, step B6 is executed to determine that the focus state is a transition from the blurred state to the clear state. If the second preset condition is not met, step B7 is executed to determine that the second state is a static state.
  • step B4 it is determined that the current image (ie, the image to be evaluated) is in a static state.
  • step B5 it is determined that the focus state is a transition from the clear state to the blurred state.
  • step B6 it is determined that the focus state is changed from the blur state to the clear state.
  • step B7 it is determined that the second state is a stationary state.
  • step B8 can be performed.
  • step B8 the reference image is updated to the current image (that is, the image to be evaluated).
  • the problem of image clarity being sensitive to changes in the external environment can be solved through the reference image and dual thresholds, so that it can be more reliably inferred whether the device is focusing.
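Steps B2 to B8 above amount to a small decision procedure. The sketch below is a simplified reading of that flow, assuming sharpness values have already been computed and using the example thresholds 0.02 and 0.1 mentioned later in this application; the accumulation of the sharpness difference is omitted here.

```python
def evaluate_focus_state(curr_sharp, prev_sharp, ref_sharp,
                         low_thr=0.02, high_thr=0.1):
    """Return (second_state, new_ref_sharp) for the current image.

    second_state is 'static', 'clear_to_blurred' or 'blurred_to_clear';
    new_ref_sharp is the (possibly updated) reference sharpness.
    """
    # B2/B4: no change relative to the previous image -> static state.
    if abs(curr_sharp - prev_sharp) < low_thr:
        return "static", ref_sharp
    # B3/B7: no change relative to the reference image -> static state.
    if abs(curr_sharp - ref_sharp) < high_thr:
        return "static", ref_sharp
    # Focusing detected; B8: the reference follows the current image.
    if curr_sharp < ref_sharp:
        return "clear_to_blurred", curr_sharp   # B5
    return "blurred_to_clear", curr_sharp       # B6
```

The low threshold gates the frame-to-frame comparison and the high threshold gates the comparison against the reference, matching the dual-threshold scheme described below.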
  • the following operations may be performed:
  • the definition of the reference image and the definition of the image to be evaluated do not meet the second preset condition, the definition of the reference image is updated;
  • image definition is more sensitive to changes in the external environment, and device shake or camera self-adjustment (such as automatic exposure or automatic white balance, etc.) will bring about major changes in image definition.
  • the standard deviation of the Laplacian matrix of the image is used as the image definition.
  • the Laplacian matrix describes the contour information of the image.
  • this application uses the standard deviation of the Laplacian matrix as the image clarity; other indicators, such as the mean value of the Laplacian matrix or information entropy, can also be used.
  • the Laplacian matrix extracts the edge information of the image. The clearer the image, the sharper the edge of the image, the greater the fluctuation of the value of the element in the Laplacian matrix (the greater the value of the element at the boundary), the greater the standard deviation.
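As a sketch, this definition measure can be computed with a plain 3×3 Laplacian convolution in numpy (a library call such as an OpenCV Laplacian would normally be used instead); the test images below are toy examples.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def sharpness(gray):
    """Image definition as the standard deviation of the Laplacian
    response over the valid (border-free) interior of the image."""
    g = gray.astype(np.float64)
    H, W = g.shape
    out = np.zeros((H - 2, W - 2))
    for dy in range(3):                # 3x3 correlation; the kernel is
        for dx in range(3):            # symmetric, so same as convolution
            out += LAPLACIAN[dy, dx] * g[dy:dy + H - 2, dx:dx + W - 2]
    return float(out.std())

# A flat image has zero Laplacian response; a hard edge has a large one.
flat = np.full((16, 16), 128, dtype=np.uint8)
edged = np.zeros((16, 16), dtype=np.uint8)
edged[:, 8:] = 255
```

Sharper edges produce larger fluctuations in the Laplacian response, hence a larger standard deviation, which is exactly the behavior the text describes.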
  • the reference image is updated to the image to be evaluated.
  • when the sharpness difference between the image to be evaluated and the reference image is less than the given sharpness threshold (that is, the sharpness of the reference image and the sharpness of the image to be evaluated do not meet the second preset condition), the reference image and two consecutive images are used for evaluation.
  • possible situations include the doctor not adjusting the focus, the doctor's focus adjustment being too small, the microscope shaking, or the camera self-adjusting.
  • the accumulation takes the form sharpness + a or sharpness − b, where a and b are positive numbers.
  • the first preset condition may be that the difference between the definition of the image to be evaluated and the definition of the first associated image is greater than or equal to the definition threshold;
  • the second preset condition may be that the difference between the definition of the reference image and the definition of the image to be evaluated is greater than or equal to the definition threshold.
  • the reference image and two consecutive images are used for evaluation.
  • possible situations include the doctor not adjusting the focus, the focus adjustment being too small, the microscope shaking, or the camera self-adjusting; it is not necessary to update the reference image at this time, but to continue accumulating the definition difference relative to the reference image, which is conducive to obtaining more accurate detection results.
  • the following operations may be performed:
  • if the difference between the sharpness of the image to be evaluated and the sharpness of the first associated image is greater than or equal to the first sharpness threshold, it is determined that the sharpness of the image to be evaluated and the sharpness of the first associated image meet the first preset condition;
  • if the difference between the sharpness of the image to be evaluated and the sharpness of the first associated image is less than the first sharpness threshold, it is determined whether the difference between the sharpness of the reference image and the sharpness of the image to be evaluated is greater than or equal to the second sharpness threshold, where the second sharpness threshold is greater than the first sharpness threshold;
  • if the difference between the definition of the reference image and the definition of the image to be evaluated is greater than or equal to the second definition threshold, it is determined that the definition of the reference image and the definition of the image to be evaluated meet the second preset condition.
  • dual thresholds are introduced, that is, a first sharpness threshold and a second sharpness threshold. The first sharpness threshold is used when comparing the sharpness of the current image with that of the previous image: it is judged whether the difference between the sharpness of the image to be evaluated and the sharpness of the first associated image is greater than or equal to the first sharpness threshold. If so, it is determined that the sharpness of the image to be evaluated and the sharpness of the first associated image satisfy the first preset condition; otherwise, the first preset condition is not met.
  • when comparing the sharpness of the image to be evaluated with that of the reference image, the high threshold is used: it is judged whether the difference between the sharpness of the image to be evaluated and the sharpness of the reference image is greater than or equal to the second sharpness threshold. If so, it is determined that the sharpness of the reference image and the sharpness of the image to be evaluated meet the second preset condition; otherwise, the second preset condition is not met.
  • for example, the first sharpness threshold (the low threshold) can be set to 0.02, and the second sharpness threshold (the high threshold) can be set to 0.1; in practical applications they can also be set to other values, which is not limited here.
  • the low threshold is used when comparing the sharpness of the current image and the previous image
  • the high threshold is used when comparing the sharpness of the current image and the reference image
  • the low threshold can be used to infer that the doctor is not adjusting the focus of the microscope, and the high threshold to infer that the doctor is adjusting the focus of the microscope, thereby improving the reliability of the sharpness detection.
  • FIG. 8 is a schematic flowchart of a pathological image-based processing method provided by an embodiment of this application. Please refer to FIG. 8, including:
  • the terminal acquires a pathological image set, where the pathological image set includes an image to be evaluated, a first associated image, and a second associated image.
  • the first associated image is the previous image adjacent to the image to be evaluated, and the second associated image is the previous image adjacent to the first associated image.
  • the pathological image set is collected by the smart microscope through its camera.
  • the pathological image collection includes multiple continuous pathological images, that is, at least one image to be evaluated and multiple related images.
  • the associated images refer to the previous frames adjacent to the image to be evaluated.
  • the smart microscope can also send a collection of pathological images to the server, and the server can determine the image state corresponding to the image to be evaluated.
  • the smart microscope or the server evaluates the movement state of the image to be evaluated, thereby obtaining the first state, which is used to indicate the movement change of the image to be evaluated. It is understandable that at least three frames of pathological images in the pathological image set need to be used to assess the movement state, namely the image to be assessed, the previous pathological image of the image to be assessed (i.e., the first associated image), and the pathological image before that (i.e., the second associated image).
  • If the first state is the transition from the moving state to the static state, store the image to be evaluated.
  • the image to be evaluated will be stored.
  • the first state is a static state
  • the first state of the image to be evaluated is a static state
  • at least two frames of pathological images in the pathological image set need to be used to assess the sharpness state, that is, including the image to be assessed and the previous pathological image (ie, the first associated image) of the image to be assessed.
  • the image to be evaluated will be stored.
  • Figure 9 is a schematic flow chart of a method for automatically saving image tasks provided by an embodiment of the application.
  • a large number of pathological images are collected through the camera of the smart microscope, and then the image state of these pathological images is evaluated, that is, the moving state and the sharpness state are evaluated. Based on the evaluation results, pathological images in six image states can be obtained, including the static state, the moving state, the transition from the moving state to the static state, the transition from the static state to the moving state, the focusing state in which the clear state transitions to the blurred state, and the focusing state in which the blurred state transitions to the clear state.
  • the pathological images collected by the smart microscope can be automatically saved for subsequent pathological reports, communication, and backup. Based on the task of automatically saving images, the pathological image collection is screened. It is only necessary to save the images from the moving state to the static state and the images from the blurred state to the clear state, because the images in other states are redundant or low-quality.
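The screening rule for the automatic-save task can be sketched as a simple filter over per-frame states; the state labels and the example stream below are illustrative names, not identifiers used by this application.

```python
# Only these two states are worth saving, per the rule above.
STATES_TO_SAVE = {"moving_to_static", "blurred_to_clear"}

def frames_to_save(states):
    """Return the indices of frames whose state warrants saving;
    frames in all other states are redundant or low-quality."""
    return [i for i, state in enumerate(states) if state in STATES_TO_SAVE]

stream = ["moving", "moving_to_static", "static",
          "clear_to_blurred", "blurred_to_clear", "static"]
```

The same filter shape applies to the later AI-assisted diagnosis task, which keeps the same two states.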
  • with the above method, on the one hand, there is no need for medical staff to manually collect images, which improves work efficiency; on the other hand, the storage space occupied by images is reduced.
  • FIG. 10 is a schematic flowchart of the pathological image-based processing method provided in an embodiment of the present application. Please refer to FIG. 10, including:
  • 301 Acquire a pathological image set, where the pathological image set includes an image to be evaluated, a first associated image, and a second associated image.
  • the first associated image is the previous image adjacent to the image to be evaluated, and the second associated image is the previous image adjacent to the first associated image.
  • the pathological image collection is collected by the smart microscope through the camera, thereby obtaining the pathological image collection.
  • the pathological image collection includes multiple continuous pathological images, that is, at least one image to be evaluated and multiple related images.
  • the image refers to the previous images that are adjacent to the image to be evaluated, that is, the first related image is the previous image adjacent to the image to be evaluated, and the second related image is the previous image adjacent to the first related image.
  • the smart microscope can also send a collection of pathological images to the server, and the server can determine the image state corresponding to the image to be evaluated.
  • the smart microscope or the server evaluates the movement state of the image to be evaluated, thereby obtaining the first state, which is used to indicate the movement change of the image to be evaluated. It is understandable that at least three frames of pathological images in the pathological image set need to be used to assess the movement state, namely the image to be assessed, the previous pathological image of the image to be assessed (i.e., the first associated image), and the pathological image before that (i.e., the second associated image).
  • If the first state is the transition from the moving state to the static state, perform artificial intelligence diagnosis on the image to be evaluated.
  • AI-assisted diagnosis is performed on the image to be evaluated.
  • the diagnostic decision support system (clinical decision support system) is a support system used to assist doctors in making decisions during diagnosis.
  • the system analyzes patient data and gives doctors recommendations for diagnosis.
  • doctors combine the recommendations with their own expertise to make judgments, making the diagnosis faster and more accurate.
  • the application of AI in the field of diagnosis is mainly aimed at the fact that the growth rate of radiologists is not as fast as that of imaging data, the uneven distribution of medical personnel resources, and the high rate of misdiagnosis.
  • AI can be used to analyze case data, provide more reliable diagnosis recommendations for patients, and save time for physicians.
  • the first state is a static state
  • the first state of the image to be evaluated is a static state
  • at least two frames of pathological images in the pathological image set need to be used to assess the sharpness state, that is, including the image to be assessed and the previous pathological image (ie, the first associated image) of the image to be assessed.
  • AI-assisted diagnosis is performed on the image to be evaluated.
  • FIG. 11 is a schematic diagram of a flow chart of the real-time artificial intelligence-assisted diagnosis task in the embodiment of this application.
  • a large number of pathological images are collected through the camera of the smart microscope, and then the image state of these pathological images is evaluated, that is, the moving state and the sharpness state are evaluated.
  • pathological images in six image states can be obtained, including the static state, the moving state, the transition from the moving state to the static state, the transition from the static state to the moving state, the focusing state (transition from the clear state to the blurred state) and the focusing state (transition from the blurred state to the clear state).
  • AI-assisted diagnosis is only performed on the pathological images in the moving state transitioned to the static state and the pathological images in the focusing (fuzzy state transitioned to the clear state) state.
  • image-based real-time AI-assisted diagnosis means that when the doctor uses the pathology microscope, the image collected by the camera is sent to the AI-assisted diagnosis module in real time, and the result of the AI-assisted diagnosis is fed back to the doctor to improve the doctor's work efficiency.
  • the embodiment of this application can filter the images sent to the AI-assisted diagnosis module, selecting only the images transitioning from the moving state to the static state and the images transitioning from the blurred state to the clear state. This is because the images in these two states are the ones the doctor observes carefully and is genuinely interested in, which greatly reduces the throughput pressure on the AI-assisted diagnosis module.
  • FIG. 12 is a schematic diagram of the flow chart of the pathological image-based processing method provided by the embodiment of the application. Please refer to FIG. 12, including:
  • the pathological image set includes an image to be evaluated, a first associated image, and a second associated image
  • the first associated image is the previous image adjacent to the image to be evaluated
  • the second associated image is the previous image adjacent to the first associated image.
  • the pathological image collection is collected by the smart microscope through the camera, thereby obtaining the pathological image collection.
  • the pathological image collection includes multiple continuous pathological images, that is, at least one image to be evaluated and multiple related images.
  • the image refers to the previous images that are adjacent to the image to be evaluated, that is, the first related image is the previous image adjacent to the image to be evaluated, and the second related image is the previous image adjacent to the first related image.
  • the smart microscope can also send a collection of pathological images to the server, and the server can determine the image state corresponding to the image to be evaluated.
  • the smart microscope or the server evaluates the movement state of the image to be evaluated, thereby obtaining the first state, which is used to indicate the movement change of the image to be evaluated. It is understandable that at least three frames of pathological images in the pathological image set need to be used to assess the movement state, namely the image to be assessed, the previous pathological image of the image to be assessed (i.e., the first associated image), and the pathological image before that (i.e., the second associated image).
  • the first state is a transition from a moving state to a stationary state, or the first state is a transition from a stationary state to a moving state, or the first state is a moving state, transmit the image to be evaluated.
  • the image to be evaluated is transmitted.
  • the first state is a static state
  • the first state of the image to be evaluated is a static state
  • at least two frames of pathological images in the pathological image set need to be used to assess the sharpness state, that is, including the image to be assessed and the previous pathological image (ie, the first associated image) of the image to be assessed.
  • the image to be evaluated is transmitted.
  • the image to be evaluated is transmitted.
  • Figure 13 is a schematic diagram of a flowchart of the remote sharing task of the microscope field in the embodiment of this application.
  • a large number of pathological images are collected through the camera of the smart microscope, and then the image state of these pathological images is evaluated, that is, the moving state and the sharpness state are evaluated.
  • based on the evaluation results, pathological images in six image states can be obtained, including the static state, the moving state, the transition from the moving state to the static state, the transition from the static state to the moving state, the focusing state (transition from the clear state to the blurred state) and the focusing state (transition from the blurred state to the clear state).
  • any pathological image in a non-stationary state can be transmitted.
  • FIG. 14 is a schematic diagram of the composition structure of the image state determination device provided by the embodiment of the application.
  • the image state determination device 50 includes:
  • the acquiring module 501 is configured to acquire a pathological image set, where the pathological image set includes at least an image to be evaluated and an associated image, and the associated image and the image to be evaluated are continuous frame images;
  • the determining module 502 is configured to determine the first state corresponding to the image to be evaluated according to the set of pathological images acquired by the acquiring module 501, wherein the first state is used to indicate the movement change of the image to be evaluated happening;
  • the determining module 502 is further configured to determine a second state corresponding to the image to be evaluated according to the pathological image set if the first state is a static state, wherein the second state is used to indicate the definition change of the image to be evaluated.
  • the related image includes two consecutive frames of images, and the two consecutive frames of images are respectively a first related image and a second related image;
  • the determining module 502 is further configured to obtain the similarity between the image to be evaluated and a first associated image, where the first associated image is the previous image adjacent to the image to be evaluated;
  • if the similarity between the image to be evaluated and the first associated image is greater than the similarity threshold, the similarity between the first associated image and the second associated image is acquired, where the second associated image belongs to the pathological image set, and the second associated image is the previous image adjacent to the first associated image;
  • if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, it is determined that the first state is a transition from a moving state to a static state.
  • the correlation between the two images can be evaluated based on the similarity, so as to provide a reasonable and reliable implementation method for the solution.
  • the related image includes two consecutive frames of images, and the two consecutive frames of images are respectively a first related image and a second related image;
  • the determining module 502 is further configured to obtain the similarity between the image to be evaluated and a first associated image, where the first associated image is the previous image adjacent to the image to be evaluated;
  • if the similarity between the image to be evaluated and the first associated image is less than or equal to the similarity threshold, the similarity between the first associated image and the second associated image is acquired, where the second associated image belongs to the pathological image set, and the second associated image is the previous image adjacent to the first associated image;
  • if the similarity between the first associated image and the second associated image is greater than the similarity threshold, it is determined that the first state is a transition from a static state to a moving state;
  • if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, it is determined that the first state is a moving state.
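The first-state decision described above can be sketched directly from the two similarities; the threshold value 0.8 is an assumed example, not a value specified by this application.

```python
def movement_state(sim_curr_prev, sim_prev_prev2, sim_threshold=0.8):
    """First state of the image to be evaluated.

    sim_curr_prev: similarity between the image to be evaluated and the
    first associated image; sim_prev_prev2: similarity between the first
    and second associated images. sim_threshold (0.8) is an assumed
    example value.
    """
    if sim_curr_prev > sim_threshold:
        # Current frame matches the previous one: not moving now.
        if sim_prev_prev2 <= sim_threshold:
            return "moving_to_static"
        return "static"
    # Current frame differs from the previous one: moving now.
    if sim_prev_prev2 > sim_threshold:
        return "static_to_moving"
    return "moving"
```

This covers all four movement outcomes: static, moving, moving-to-static, and static-to-moving.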
  • the correlation between the two images can be evaluated based on the similarity, so as to provide a reasonable and reliable implementation method for the solution.
  • the determining module 502 is further configured to determine a source area pathological image set according to the image to be evaluated, wherein the source area pathological image set includes M source area images, and M is an integer greater than 1;
  • the target area pathological image set includes M target area images, and the size of the target area image is smaller than the size of the source area image;
  • if both the first source area image and the first target area image belong to the background image, a second source area image is extracted from the source area pathological image set and a second target area image is extracted from the target area pathological image set, and it is detected whether the second source area image and the second target area image belong to the background image;
  • if the first source area image and the first target area image do not both belong to the background image, the similarity between the first source area image and the first target area image is calculated.
  • the image is divided into several regions and the similarity calculation is performed on those regions, instead of directly on the entire image, so that the accuracy of the similarity can be ensured as much as possible. If all regions are background images, then the entire image has a high probability of not containing useful information. On the other hand, because the size of a region is much smaller than the size of the entire image, the evaluation can still be completed in a short time even though the time complexity of the template matching method is relatively high.
  • the determining module 502 is further configured to calculate the pixel value standard deviation of the second source area image; if the standard deviation of the pixel values of the second source area image is less than or equal to the standard deviation threshold, it is determined that the second source area image belongs to the background image;
  • if the standard deviation of the pixel values of the second target area image is less than or equal to the standard deviation threshold, it is determined that the second target area image belongs to the background image.
  • the standard deviation of the pixel value can be used to better represent the change of the image and truly reflect the degree of dispersion between the pixels in the image, thereby improving the accuracy of detection.
  • the determining module 502 is further configured to calculate an image matrix according to the first source area image and the first target area image, where the image matrix includes multiple elements;
  • the similarity between the first source area image and the first target area image is determined according to the image matrix, wherein the similarity between the first source area image and the first target area image is the image matrix The maximum value of the element in.
  • the application of the calculation method of the similarity between the regional images provided by the embodiment of the present application provides a specific operation method for the realization of the solution, thereby improving the feasibility and operability of the solution.
  • the determining module 502 is further configured to obtain the definition of the image to be evaluated and the definition of the first associated image, wherein the first associated image belongs to the pathological image set, and The first associated image is the previous image adjacent to the image to be evaluated;
  • the second state is determined to be the focusing state, where the focusing state is a transition from the clear state to the blurred state, or from the blurred state to the clear state.
  • the reference image and the double threshold solve the problem that image definition is sensitive to changes in the external environment, so that whether the device is focusing can be inferred more reliably.
  • the determining module 502 is further configured to obtain the definition of the image to be evaluated and the definition of the first associated image, and if the definition of the image to be evaluated and the definition of the first associated image do not satisfy the first preset condition, to update the reference image to the image to be evaluated;
  • the reference image is updated to the image to be evaluated.
  • the reference image and two consecutive images are used for evaluation.
  • possible situations include the doctor not adjusting the focus, the doctor making a focus adjustment that is too small, the microscope shaking, the camera self-adjusting, and so on.
  • the determining module 502 is further configured to, after acquiring the definition of the image to be evaluated and the definition of the first associated image, determine whether the difference between the definition of the image to be evaluated and the definition of the first associated image is greater than or equal to the first definition threshold;
  • if the difference between the definition of the image to be evaluated and the definition of the first associated image is greater than or equal to the first definition threshold, it is determined that the definition of the image to be evaluated and the definition of the first associated image satisfy the first preset condition.
  • if the difference between the definition of the image to be evaluated and the definition of the first associated image is less than the first definition threshold, determine whether the difference between the definition of the reference image and the definition of the image to be evaluated is greater than or equal to a second definition threshold, where the second definition threshold is greater than the first definition threshold;
  • if the difference between the definition of the reference image and the definition of the image to be evaluated is greater than or equal to the second definition threshold, it is determined that the definition of the reference image and the definition of the image to be evaluated satisfy the second preset condition.
  • the low threshold is used when comparing the sharpness of the current image and the previous image
  • the high threshold is used when comparing the sharpness of the current image and the reference image.
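The dual-threshold logic can be sketched as follows (the function name, state labels, and threshold values are illustrative assumptions; the source only requires the high threshold to exceed the low one):

```python
def focus_state(cur, prev, ref, low=2.0, high=8.0):
    """Dual-threshold focus check on sharpness scores: the low threshold
    compares the current image with the previous one, and the larger high
    threshold compares the current image with the reference image."""
    if abs(cur - prev) < low:
        return "static"            # first preset condition not met
    if abs(ref - cur) < high:
        return "static"            # second preset condition not met
    # the device is focusing; the direction depends on which is sharper
    return "blurred->clear" if cur > ref else "clear->blurred"
```

Small frame-to-frame fluctuations fall below the low threshold, and environmental noise relative to the reference falls below the high threshold, so only a genuine focusing action changes the reported state.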
  • FIG. 15 is a schematic diagram of the composition structure of the image state determining apparatus 50 provided by an embodiment of the application. Please refer to FIG. 15. On the basis of FIG. 14, the image state determining apparatus 50 further includes a storage module 503;
  • the storage module 503 is configured to, after the determining module 502 determines the second state corresponding to the image to be evaluated according to the pathological image set, store the image to be evaluated if the first state is a transition from a moving state to a static state;
  • if the second state is a transition from a blurred state to a clear state, the image to be evaluated is stored.
  • the pathological images collected by the smart microscope can be saved automatically for use in subsequent pathological reports, communication, and backup. For the automatic image-saving task, the pathological image set is filtered: only images transitioning from the moving state to the static state and images transitioning from the blurred state to the clear state need to be saved, because images in other states are redundant or of low quality. In this way, on one hand, medical staff no longer need to collect images manually, which improves work efficiency, and on the other hand, the storage space occupied by images is reduced.
  • FIG. 16 is a schematic diagram of the composition structure of the image state determining apparatus 50 provided by an embodiment of the present application. Please refer to FIG. 16. Based on FIG. 14, the image state determining apparatus 50 further includes a diagnosis module 504;
  • the diagnosis module 504 is configured to, after the determining module 502 determines the second state corresponding to the image to be evaluated according to the pathological image set, perform pathological analysis on the image to be evaluated if the first state is a transition from the moving state to the static state;
  • if the second state is a transition from a blurred state to a clear state, the pathological analysis is performed on the image to be evaluated.
  • image-based real-time AI-assisted diagnosis means that while the doctor uses the pathology microscope, the images collected by the camera are sent to the AI-assisted diagnosis module in real time, and the results of the AI-assisted diagnosis are fed back to the doctor, improving the doctor's work efficiency.
  • This application can filter the images sent to the AI-assisted diagnosis module, selecting only images transitioning from the moving state to the static state and images transitioning from the blurred state to the clear state, because the images in these two states are the ones the doctor needs to observe carefully and is genuinely interested in; this greatly reduces the throughput pressure on the AI-assisted diagnosis module.
  • FIG. 17 is a schematic diagram of the composition structure of the image state determining apparatus 50 provided by an embodiment of the application. Please refer to FIG. 17. Based on FIG. 14, the image state determining apparatus 50 further includes a transmission module 505;
  • the transmission module 505 is configured to, after the determining module 502 determines the second state corresponding to the image to be evaluated according to the pathological image set, transmit the image to be evaluated if the first state is a transition from a moving state to a static state, or a transition from a static state to a moving state, or a moving state;
  • if the second state is a transition from a blurred state to a clear state, or a transition from a clear state to a blurred state, the image to be evaluated is transmitted.
  • the doctor operating the microscope needs to remotely share the microscope field of view with other doctors.
  • pathological images can be screened before network transmission, and static pathological images can be excluded, because pathological images in this state are redundant, thus reducing the amount of data that needs to be transmitted over the network.
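The three task-specific filters described above (automatic saving, AI-assisted diagnosis, remote transmission) can be sketched together; the function names and state-label strings are illustrative assumptions:

```python
def should_save(first_state, second_state):
    # automatic saving keeps only newly settled or newly focused frames
    return first_state == "moving->static" or second_state == "blurred->clear"

def should_diagnose(first_state, second_state):
    # AI-assisted diagnosis receives the same two state transitions,
    # the frames the doctor actually wants to examine
    return first_state == "moving->static" or second_state == "blurred->clear"

def should_transmit(first_state, second_state):
    # remote sharing drops only fully static frames, which are redundant
    return (first_state in ("moving->static", "static->moving", "moving")
            or second_state in ("blurred->clear", "clear->blurred"))
```

The transmission filter is deliberately looser than the other two: a remote viewer needs to see movement and focusing in progress, while saving and diagnosis only need the settled result.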
  • FIG. 18 is a schematic structural diagram of the terminal device provided by an embodiment of this application. As shown in FIG. 18, for ease of description, only the parts related to the embodiments of this application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of this application.
  • the terminal device can be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sales (POS), a vehicle-mounted computer, etc.
  • the following description takes the terminal device being a mobile phone as an example:
  • FIG. 18 shows a block diagram of a part of the structure of a mobile phone related to a terminal device provided in an embodiment of the application.
  • the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, a power supply 990, and other components.
  • the structure of the mobile phone shown in FIG. 18 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the RF circuit 910 may be configured to receive and send signals during information transmission and reception or during a call; in particular, after downlink information from the base station is received, it is handed to the processor 980 for processing, and uplink data is sent to the base station.
  • the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 910 can also communicate with the network and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division) Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), etc.
  • the memory 920 may be configured to store software programs and modules.
  • the processor 980 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920.
  • the memory 920 may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system, application programs required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data or a phone book).
  • the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 920 is further configured to store a computer program, and the computer program is used to execute the method for determining the image state based on the pathological image provided in the embodiment of the present application.
  • the input unit 930 may be configured to receive input digital or character information, and generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 930 may include a touch panel 931 and other input devices 932.
  • the touch panel 931, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 931 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program.
  • the touch panel 931 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 980, and can receive and execute commands sent by the processor 980.
  • the touch panel 931 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 930 may also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
  • the display unit 940 may be configured to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 940 may include a display panel 941.
  • the display panel 941 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the touch panel 931 can cover the display panel 941; when the touch panel 931 detects a touch operation on or near it, it transmits the operation to the processor 980 to determine the type of the touch event, and the processor 980 then provides a corresponding visual output on the display panel 941 according to the type of the touch event.
  • the touch panel 931 and the display panel 941 are used as two independent components to implement the input and output functions of the mobile phone, but in some embodiments the touch panel 931 and the display panel 941 can be integrated to implement the input and output functions of the mobile phone.
  • the mobile phone may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor can include an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 941 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 941 and/or the backlight when the mobile phone is moved to the ear.
  • as a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and in functions related to vibration recognition (such as a pedometer or tapping); other sensors that can also be configured on the mobile phone, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, are not described here.
  • the audio circuit 960, the speaker 961, and the microphone 962 can provide an audio interface between the user and the mobile phone.
  • on one hand, the audio circuit 960 can transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for output; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; after being processed by the processor 980, the audio data is sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
  • WiFi is a short-distance wireless transmission technology.
  • through the WiFi module 970, the mobile phone can help users send and receive e-mails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • although FIG. 18 shows the WiFi module 970, it is understandable that it is not a necessary component of the mobile phone and can be omitted as required without changing the essence of the invention.
  • the processor 980 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the mobile phone as a whole.
  • the processor 980 may include one or more processing units; optionally, the processor 980 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and so on, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 980.
  • the mobile phone also includes a power supply 990 (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the processor 980 through a power management system, so that functions such as charging, discharging, and power management can be managed through the power management system.
  • the mobile phone may also include a camera, a Bluetooth module, etc., which will not be repeated here.
  • the processor 980 included in the terminal device is further configured to execute the computer program stored in the memory 920 to implement the above-mentioned method for determining the image state based on the pathological image provided in the embodiment of the present application.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of this application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
  • the pathological image set is acquired through a microscope, where the pathological image set includes at least an image to be evaluated and an associated image, the associated image and the image to be evaluated being consecutive frame images; a first state corresponding to the image to be evaluated is determined according to the pathological image set, where the first state is used to indicate the movement of the image to be evaluated; if the first state is a static state, a second state corresponding to the image to be evaluated is determined based on the pathological image set, where the second state is used to indicate the change in the definition of the image to be evaluated.
  • the pathological images collected by the microscope camera can be screened to assist in completing the task objectives, reducing the difficulty of image processing, and improving the efficiency of task processing.


Abstract

An image state determination method, apparatus, device, and computer storage medium based on pathological images. The method includes: acquiring a pathological image set, where the pathological image set includes at least an image to be evaluated and associated images, the associated images and the image to be evaluated being consecutive frame images (101); determining, according to the pathological image set, a first state corresponding to the image to be evaluated, the first state being used to indicate the movement of the image to be evaluated (102); and, if the first state is a static state, determining, according to the pathological image set, a second state corresponding to the image to be evaluated, the second state being used to indicate the change in the definition of the image to be evaluated (103).

Description

Image state determination method, apparatus, device, system, and computer storage medium

Cross-reference to related applications

This application is based on, and claims priority to, Chinese patent application No. 201910457380.4 filed on May 29, 2019, the entire contents of which are incorporated herein by reference.

Technical field

This application relates to the field of artificial intelligence, and in particular to a method, apparatus, device, system, and computer storage medium for determining an image state based on pathological images.

Background

Pathological examination has been widely applied in clinical work and scientific research. The main way medical staff make a pathological diagnosis is to observe a slide: after magnifying it 40 to 400 times, they examine cell morphology and tissue structure to reach a diagnosis. Smart microscopes and digital pathology scanners are the tools medical staff use most often.

A smart microscope is usually equipped with its own camera, which can continuously capture images of the microscope field of view. The captured images are used in a variety of microscope tasks, such as automatic image saving and real-time image-based artificial intelligence (AI) assisted diagnosis.

Because the camera of a smart microscope is usually a high-resolution, high-speed industrial camera, the frame rate is high (several to dozens of frames per second) and each image is large (a single image can exceed four million pixels in total), so a large amount of image data is produced in a short time. If the state of each image could be evaluated, the images captured by the smart microscope could be filtered on the basis of image state, improving the processing efficiency of microscope tasks; however, the related art offers no scheme for determining the image state.
Summary

The embodiments of this application provide a method, apparatus, device, system, and computer storage medium for determining an image state based on pathological images, which can evaluate the movement state and the definition state of captured images, so that the determined image state can serve as a basis for image filtering, adapt to the needs of different microscope tasks, and improve task processing efficiency.

In view of this, an embodiment of this application provides a method for determining an image state based on pathological images, including:

acquiring a pathological image set through a microscope, where the pathological image set includes at least an image to be evaluated and associated images, the associated images and the image to be evaluated being consecutive frame images;

determining, according to the pathological image set, a first state corresponding to the image to be evaluated, where the first state is used to indicate the movement of the image to be evaluated; and

if the first state is a static state, determining, according to the pathological image set, a second state corresponding to the image to be evaluated, where the second state is used to indicate the change in the definition of the image to be evaluated.
An embodiment of this application further provides an image state determining apparatus, including:

an acquisition module, configured to acquire a pathological image set through a microscope, where the pathological image set includes at least an image to be evaluated and associated images, the associated images and the image to be evaluated being consecutive frame images;

a determining module, configured to determine, according to the pathological image set acquired by the acquisition module, a first state corresponding to the image to be evaluated, where the first state is used to indicate the movement of the image to be evaluated;

the determining module being further configured to, if the first state is a static state, determine, according to the pathological image set, a second state corresponding to the image to be evaluated, where the second state is used to indicate the change in the definition of the image to be evaluated.

An embodiment of this application further provides a smart microscope system, including an image acquisition module, an image processing and analysis module, a pathological analysis module, a storage module, and a transmission module;

the image acquisition module being configured to acquire a pathological image set, where the pathological image set includes at least an image to be evaluated and associated images, the associated images and the image to be evaluated being consecutive frame images;

the image processing and analysis module being configured to determine, according to the pathological image set, a first state corresponding to the image to be evaluated, where the first state is used to indicate the movement of the image to be evaluated;

and, if the first state is a static state, to determine, according to the pathological image set, a second state corresponding to the image to be evaluated, where the second state is used to indicate the change in the definition of the image to be evaluated;

the storage module being configured to store the image to be evaluated if the first state is a transition from a moving state to a static state;

and to store the image to be evaluated if the second state is a transition from a blurred state to a clear state;

the pathological analysis module being configured to perform pathological analysis on the image to be evaluated if the first state is a transition from a moving state to a static state;

and to perform the pathological analysis on the image to be evaluated if the second state is a transition from a blurred state to a clear state;

the transmission module being configured to transmit the image to be evaluated if the first state is a transition from a moving state to a static state, or a transition from a static state to a moving state, or a moving state;

and to transmit the image to be evaluated if the second state is a transition from a blurred state to a clear state, or a transition from a clear state to a blurred state.
An embodiment of this application further provides a terminal device, including a memory and a processor;

the memory being configured to store a computer program;

the processor being configured to, when executing the computer program in the memory, perform the method for determining an image state based on pathological images provided in the embodiments of this application.

An embodiment of this application further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the method for determining an image state based on pathological images provided in the embodiments of this application.

Applying the method, apparatus, device, system, and computer storage medium for determining an image state based on pathological images provided in the embodiments of this application yields at least the following beneficial technical effects:

In the embodiments of this application, a method for determining an image state based on pathological images is provided. First, a pathological image set is acquired, the set including at least an image to be evaluated and an associated image, the associated image being the frame immediately preceding the image to be evaluated. A first state corresponding to the image to be evaluated is then determined according to the pathological image set, the first state being used to indicate the movement of the image to be evaluated. If the first state is a static state, a second state corresponding to the image to be evaluated is determined according to the pathological image set, the second state being used to indicate the change in the definition of the image to be evaluated. In this way, the movement state and the definition state of captured images can be evaluated, and the image state of each image determined. Because the state of a pathological image often reflects the user's operation of the microscope and changes in the field of view inside the microscope, the pathological images captured by the microscope camera can be filtered according to image state and task type to help accomplish the task objective, reducing the difficulty of image processing and improving task processing efficiency.
Brief description of the drawings

FIG. 1 is a schematic architecture diagram of the image evaluation system in an embodiment of this application;

FIG. 2 is a schematic flowchart of the image evaluation system in an embodiment of this application;

FIG. 3 is a schematic flowchart of the method for determining an image state based on pathological images in an embodiment of this application;

FIG. 4 is a schematic flowchart of image movement evaluation in an embodiment of this application;

FIG. 5 is a schematic diagram of the coordinates of the image centers of source region images in an embodiment of this application;

FIG. 6 is a schematic comparison diagram of a source region image and a target region image in an embodiment of this application;

FIG. 7 is a schematic flowchart of a method of image definition evaluation in an embodiment of this application;

FIG. 8 is a schematic flowchart of the pathological-image-based processing method in an embodiment of this application;

FIG. 9 is a schematic flowchart of the automatic image saving task in an embodiment of this application;

FIG. 10 is a schematic flowchart of the pathological-image-based processing method in an embodiment of this application;

FIG. 11 is a schematic flowchart of the real-time AI-assisted diagnosis task in an embodiment of this application;

FIG. 12 is a schematic flowchart of the pathological-image-based processing method in an embodiment of this application;

FIG. 13 is a schematic flowchart of the remote microscope field-of-view sharing task in an embodiment of this application;

FIG. 14 is a schematic diagram of the composition structure of the image state determining apparatus in an embodiment of this application;

FIG. 15 is a schematic diagram of the composition structure of the image state determining apparatus in an embodiment of this application;

FIG. 16 is a schematic diagram of the composition structure of the image state determining apparatus in an embodiment of this application;

FIG. 17 is a schematic diagram of the composition structure of the image state determining apparatus in an embodiment of this application;

FIG. 18 is a schematic diagram of the composition structure of the terminal device in an embodiment of this application.
Detailed description

The embodiments of this application provide a method, apparatus, and system for determining an image state based on pathological images, which can evaluate the movement state and the definition state of captured images, so that the images can be handled appropriately on the basis of their different states, reducing the difficulty of image processing and improving task processing efficiency.

The terms "first", "second", "third", "fourth", and so on (if any) in the specification, the claims, and the accompanying drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can, for example, be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "correspond to" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.

It should be understood that the method for determining an image state based on pathological images and the pathological-image-based processing method provided by this application can be applied in the field of artificial intelligence (AI), for example in AI-based medicine and in video surveillance. In AI-based medicine, for instance, the microscope can focus automatically: with the current field of view held fixed, the microscope's own camera captures images continuously, and the focus knob is turned automatically according to changes in the images' definition state, achieving automatic focusing. In the field of video surveillance, for another example, road traffic video can be monitored and images in the static state automatically removed, reducing the workload of subsequent video analysis.

With the rapid development of science and technology, AI is being applied more and more widely in the medical industry. The most common medical images in the medical field include, but are not limited to, angiography images, cardiovascular angiography images, computerized tomography (CT) images, B-mode ultrasound images, and pathological images. Pathological images are usually captured by a smart microscope and include appearance images of biopsy tissue, cell structure images, and the like.

The main way doctors make a pathological diagnosis is to observe a slide: after magnifying it 40 to 400 times, they examine cell morphology and tissue structure to reach a diagnosis; smart microscopes and digital pathology scanners are the tools doctors use most often. The smart microscope breaks through the limitations of the traditional microscope, turning from passive use to actively assisting the physician, for example using computer vision to help the doctor with tasks ranging from simple but tedious cell counting to the difficult and complex identification of cancer types and precise delineation of regions. At the same time, speech recognition allows the doctor to interact with the smart microscope smoothly, and natural language processing assists in generating the final pathological report. When reading a slide, the doctor only needs to give a voice command, and the AI can read the slide automatically, capture images automatically, and assist the doctor's diagnosis; when the doctor finishes reading and gives the "generate report" command, the smart microscope fills microscope screenshots and diagnosis results into the report template, generates the report automatically, and lets the doctor review the result and issue the report, turning what used to be the most laborious report generation step into something fast and effortless. The smart microscope plays an important role in mitotic cell detection, quantitative immunohistochemical analysis, cancer region monitoring, and assisted diagnosis workflows.

For ease of understanding, this application proposes a method for determining an image state based on pathological images and a pathological-image-based processing method, both of which can be applied to the image evaluation system shown in FIG. 1. Please refer to FIG. 1 and FIG. 2: FIG. 1 is a schematic architecture diagram of the image evaluation system in an embodiment of this application, and FIG. 2 is a schematic flowchart of the image evaluation system in an embodiment of this application; the description below combines FIG. 1 and FIG. 2.
In step S1, multiple consecutive images are captured by a camera;

In actual implementation, the images may be captured by a camera built into the terminal, or by a camera independent of the terminal.

In step S2, it is determined whether the current image is the first image; if so, step S3 is performed; otherwise, step S4 is performed;

In actual implementation, the terminal device determines whether the current image is the first image, or the terminal device sends the captured images to a server, which determines whether the current image is the first image; if it is the first image, step S3 is performed, and if it is not, step S4 is performed;

In step S3, it is determined that the current image is in the moving state;

In step S4, the movement state of the current image is evaluated;

In step S5, if it is detected that the current image is moving, it is determined that the current image is in the moving state;

In step S6, if it is detected that the current image has stopped moving, it is determined that the current image has transitioned from the moving state to the static state;

In step S7, if it is detected that the current image has started moving, it is determined that the current image has transitioned from the static state to the moving state;

In step S8, if it is detected that the current image is in the static state, the definition state of the current image is evaluated;

In step S9, if it is detected that the definition of the current image has not changed, it is determined that the current image is in the static state;

In step S10, if it is detected that the current image has become clearer, it is determined that the current image is in the focusing state, transitioning from the blurred state to the clear state;

In step S11, if it is detected that the current image has become more blurred, it is determined that the current image is in the focusing state, transitioning from the clear state to the blurred state.

It should be noted that the terminal device includes, but is not limited to, a smart microscope, a tablet computer, a notebook computer, a palmtop computer, a mobile phone, a voice interaction device, and a personal computer (PC), without limitation here. The smart microscope incorporates AI vision, speech, and natural language processing technology: the doctor simply gives a voice command, and the AI can automatically recognize, detect, quantify, and generate reports, displaying the detection results in real time in the eyepiece the doctor is looking through, giving timely reminders without interrupting the doctor's slide-reading flow, which can improve the efficiency and accuracy of the doctor's diagnosis.
With reference to the above introduction, the method for determining an image state based on pathological images in the embodiments of this application is described below. FIG. 3 is a flowchart of the method for determining an image state based on pathological images provided by an embodiment of this application; please refer to FIG. 3, the method including:

101. Acquire a pathological image set, where the pathological image set includes at least an image to be evaluated and associated images, the associated images and the image to be evaluated being consecutive frame images.

In actual applications, a terminal device (such as a smart microscope) captures the pathological image set through a camera, and the image state determining apparatus acquires the pathological image set, where the pathological image set includes multiple consecutive images; that is, there may be multiple frames of associated images, which may be the adjacent frames captured immediately before the image to be evaluated.

It can be understood that the image state determining apparatus may be deployed on the terminal device, for example on a smart microscope, or on a server, without limitation here.

102. Determine, according to the pathological image set, a first state corresponding to the image to be evaluated, where the first state is used to indicate the movement of the image to be evaluated.

In actual applications, the image state determining apparatus evaluates the movement state of the image to be evaluated to obtain the first state, which indicates the movement of the image to be evaluated. It can be understood that evaluating the movement state requires at least three consecutive frames from the pathological image set; that is, the associated images are two consecutive frames that, together with the image to be evaluated, form three consecutive frames, for example the image to be evaluated, the frame before it, and the frame before that.

103. If the first state is a static state, determine, according to the pathological image set, a second state corresponding to the image to be evaluated, where the second state is used to indicate the change in the definition of the image to be evaluated.

In actual applications, if the image state determining apparatus determines that the first state is the static state, it continues to evaluate the definition state of the image to be evaluated to obtain the second state, which indicates the change in the definition of the image to be evaluated. It can be understood that evaluating the definition state requires at least two consecutive frames from the pathological image set; that is, the associated image is the frame immediately preceding the image to be evaluated.

For ease of understanding, please refer to Table 1, which illustrates the definition and description of image states based on microscope operations.

Table 1
[Table 1 appears as an image in the original publication (PCTCN2020090007-appb-000001 and -000002); it defines the six image states enumerated in the following paragraph.]
The first state includes four types of image state: the static state, the moving state, the transition from the moving state to the static state, and the transition from the static state to the moving state. The second state includes two types of state: the focusing (clear state to blurred state) state and the focusing (blurred state to clear state) state. These six image states can truly reflect the operation the doctor is performing on the microscope and the changes in the microscope field of view, allowing the images to be evaluated in real time.

Applying the above embodiments of this application, the movement state and the definition state of captured images can be evaluated, thereby determining the image state of each image. Since the image state often reflects how the user's operations change the image field of view, it plays a very important role in many different tasks: according to image state and task type, the pathological images captured by the microscope camera can be filtered to help accomplish the task objective, reducing the difficulty of image processing and improving task processing efficiency.
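The six image states can be captured in a small enumeration for downstream filtering logic (the identifier names and string values are illustrative; the patent does not prescribe an encoding):

```python
from enum import Enum

class ImageState(Enum):
    # first-state (movement) types
    STATIC = "static"
    MOVING = "moving"
    MOVING_TO_STATIC = "moving -> static"
    STATIC_TO_MOVING = "static -> moving"
    # second-state (focusing) types
    FOCUS_CLEAR_TO_BLURRED = "focusing: clear -> blurred"
    FOCUS_BLURRED_TO_CLEAR = "focusing: blurred -> clear"
```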
In some embodiments, the associated images include two consecutive frames, namely a first associated image and a second associated image; correspondingly, the first state corresponding to the image to be evaluated may be determined as follows:

acquiring the similarity between the image to be evaluated and the first associated image, where the first associated image belongs to the pathological image set and is the image immediately preceding the image to be evaluated;

if the similarity between the image to be evaluated and the first associated image is greater than the similarity threshold, acquiring the similarity between the first associated image and the second associated image, where the second associated image belongs to the pathological image set and is the image immediately preceding the first associated image;

if the similarity between the first associated image and the second associated image is greater than the similarity threshold, determining that the first state is the static state;

if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, determining that the first state is the transition from the moving state to the static state.

In some embodiments, when the associated images include a consecutive first associated image and second associated image, the first state corresponding to the image to be evaluated may be determined from the pathological image set as follows:

acquiring the similarity between the image to be evaluated and the first associated image, the first associated image being the image immediately preceding the image to be evaluated;

if the similarity between the image to be evaluated and the first associated image is less than or equal to the similarity threshold, acquiring the similarity between the first associated image and the second associated image, where the second associated image is the image immediately preceding the first associated image;

if the similarity between the first associated image and the second associated image is greater than the similarity threshold, determining that the first state is the transition from the static state to the moving state;

if the similarity between the first associated image and the second associated image is less than or equal to the similarity threshold, determining that the first state is the moving state.
The following describes how the first state is determined on the basis of similarity calculation. FIG. 4 is a schematic flowchart of image movement evaluation in an embodiment of this application; please refer to FIG. 4, the flow including:

In step A1, multiple consecutive frames are captured by the microscope camera.

In step A2, it is determined whether the current image (the image to be evaluated) has moved relative to the previous image (the first associated image); if it has moved, step A6 is performed; if it has not moved, step A3 is performed.

Here, the determination is made by acquiring the similarity between the image to be evaluated and the first associated image: if the similarity is greater than the similarity threshold, it is determined that no movement has occurred between the image to be evaluated and the first associated image, and step A3 is performed.

In step A3, it is determined whether the previous image (the first associated image) has moved relative to the image before it (the second associated image); if it has moved, step A5 is performed; if it has not moved, step A4 is performed.

Here, the determination is made by acquiring the similarity between the first associated image and the second associated image: if the similarity is greater than the similarity threshold, it is determined that no movement has occurred between them, and step A4 is performed; if the similarity is less than or equal to the similarity threshold, it is determined that movement has occurred, and step A5 is performed.

In step A4, it is determined that the first state of the current image (the image to be evaluated) is the static state.

In step A5, it is determined that the first state of the current image (the image to be evaluated) is the transition from the moving state to the static state.

It can be understood that the similarity threshold may be set to 0.9, or to another value, without limitation here.

In step A6, it is determined whether the previous image (the first associated image) has moved relative to the image before it (the second associated image); if it has moved, step A8 is performed; if it has not moved, step A7 is performed.

In step A7, it is determined that the first state of the current image (the image to be evaluated) is the transition from the static state to the moving state.

In step A8, it is determined that the first state of the current image (the image to be evaluated) is the moving state.

In this way, the association between two images can be evaluated on the basis of similarity, providing a reasonable and reliable implementation for the solution.
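The decision logic of steps A2 to A8 can be sketched as follows (the state labels are illustrative; 0.9 is the example threshold value mentioned in the text):

```python
def movement_state(sim_cur_prev, sim_prev_prev2, threshold=0.9):
    """Steps A2-A8: a frame pair counts as 'moved' when its similarity is
    at or below the threshold. sim_cur_prev compares the current image
    with the previous one; sim_prev_prev2 compares the previous image
    with the one before it."""
    cur_moved = sim_cur_prev <= threshold
    prev_moved = sim_prev_prev2 <= threshold
    if not cur_moved:
        # steps A3-A5: current frame did not move
        return "moving->static" if prev_moved else "static"
    # steps A6-A8: current frame moved
    return "moving" if prev_moved else "static->moving"
```

The four combinations of the two boolean tests map one-to-one onto the four first-state types.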
In some embodiments, the similarity between the image to be evaluated and the first associated image may be acquired as follows:

determining a source region pathological image set according to the image to be evaluated, where the source region pathological image set includes M source region images, M being an integer greater than 1;

determining a target region pathological image set according to the first associated image, where the target region pathological image set includes M target region images, the size of a target region image being smaller than the size of a source region image;

extracting a first source region image from the source region pathological image set, and extracting a first target region image from the target region pathological image set;

if the first source region image and the first target region image both belong to the background image, extracting a second source region image from the source region pathological image set and a second target region image from the target region pathological image set, and detecting whether the second source region image and the second target region image belong to the background image;

if the first source region image and the first target region image do not both belong to the background image, calculating the similarity between the first source region image and the first target region image, and using the calculated similarity as the similarity between the image to be evaluated and the first associated image.

Here, the method of acquiring the similarity between images is described. First, M source region images are selected from the image to be evaluated, forming the source region pathological image set. FIG. 5 is a schematic diagram of the coordinates of the image centers of the source region images in an embodiment of this application; please refer to FIG. 5. Suppose nine source region images are selected for the image to be evaluated, with centers, relative to the whole image to be evaluated, at the image center E (0.50, 0.50), upper left A (0.25, 0.25), lower left G (0.25, 0.75), upper right C (0.75, 0.25), lower right I (0.75, 0.75), upper side B (0.50, 0.25), right side F (0.75, 0.50), lower side H (0.50, 0.75), and left side (0.25, 0.50), each source region image having size W*H.

M target region images are selected from the first associated image, forming the target region pathological image set. Suppose nine target region images are selected, with centers, relative to the whole image, at the image center E (0.50, 0.50), upper left A (0.25, 0.25), lower left G (0.25, 0.75), upper right C (0.75, 0.25), lower right I (0.75, 0.75), upper side B (0.50, 0.25), right side F (0.75, 0.50), lower side H (0.50, 0.75), and left side (0.25, 0.50), each target region image having size w*h, where W>w and H>h must hold; for example, W=H=96 and w=h=64 may be set. Please refer to FIG. 6, a schematic comparison diagram of a source region image and a target region image in an embodiment of this application: with E as the image center, the large rectangle corresponds to a source region image and the small rectangle to a target region image.

In actual implementation, suppose M is 9, so the centers of the nine region images need to be traversed. With i initially set to 0, the i-th iteration extracts, according to the region image size and the i-th center coordinate, the first source region image from the image to be evaluated and the first target region image from the first associated image for detection. The detection may use the first target region image as the sliding window and perform template matching on the first source region image. If the first source region image and the first target region image are both detected to be background images, i is set to i+1, that is, the next iteration begins, detecting whether the second source region image and the second target region image belong to the background image. Conversely, if the first source region image is not a background image, or the first target region image is not a background image, or neither is, the template matching method is used to calculate the similarity between the two region images. If the calculated similarity is greater than the similarity threshold, the two consecutive frames are considered not to have moved; if it is less than or equal to the similarity threshold, the two frames are considered to have moved; in either case the traversal terminates.

If all M source region images and target region images are background images, the two frames are both considered background images, with no relative movement.

It should be noted that, in actual applications, the similarity between the first associated image and the second associated image is calculated in a similar way to the similarity between the image to be evaluated and the first associated image, which is not repeated here.

Applying the above way of acquiring the similarity between images provided by the embodiments of this application, the image is divided into several regions and the similarity calculation is performed on the regions rather than directly on the whole image. On one hand, this ensures the accuracy of the similarity judgment as far as possible: if all the regions are background images, the whole image very probably contains no useful information. On the other hand, the size of a region is far smaller than the size of the whole image, so even though the time complexity of the template matching method is relatively high, the evaluation can still be completed in a short time.
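The region-wise traversal with early termination can be sketched as follows (the background test and the similarity function are passed in as parameters and are illustrative assumptions):

```python
def frame_similarity(src_regions, tgt_regions, is_background, similarity):
    """Traverse paired region images in order; pairs where both regions
    are background are skipped, and the first non-background pair decides
    the frame-level similarity (early termination). Returns None when all
    pairs are background, i.e. both frames are treated as background
    images with no relative movement."""
    for src, tgt in zip(src_regions, tgt_regions):
        if is_background(src) and is_background(tgt):
            continue
        return similarity(src, tgt)
    return None
```

In a full implementation the two parameters would be the standard-deviation background test and the template-matching score described in the text.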
In some embodiments, whether the second source region image and the second target region image belong to the background image may be detected as follows:

calculating the standard deviation of the pixel values of the second source region image;

if the standard deviation of the pixel values of the second source region image is less than or equal to the standard deviation threshold, determining that the second source region image belongs to the background image;

calculating the standard deviation of the pixel values of the second target region image;

if the standard deviation of the pixel values of the second target region image is less than or equal to the standard deviation threshold, determining that the second target region image belongs to the background image.

Here, the way of judging background images is described. If the source region image and the target region image are red-green-blue (RGB) images, they first need to be converted to grayscale images. Based on the grayscale images, the standard deviation of the pixel values of the target region image and the standard deviation of the pixel values of the source region image are calculated separately. If the standard deviation of the pixel values is less than or equal to the given standard deviation threshold, the region image is a background image. The standard deviation of the pixel values is calculated as follows:

\delta=\sqrt{\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(P(i,j)-\mu\right)^{2}}

where δ denotes the standard deviation of the pixel values, M×N denotes the size of the region image, P(i,j) denotes the pixel value in row i and column j of the region image, and μ denotes the mean.

Applying the above way of detecting background images provided by the embodiments of this application, the standard deviation of the pixel values can better represent changes in the image and truly reflect the degree of dispersion among the pixels in the image, thereby improving detection accuracy.
在一些实施例中,可通过如下方式计算第一源区域图像与第一目标区域图像的相似度:
根据第一源区域图像与第一目标区域图像,计算得到图像矩阵,其中,图像矩阵包括多个元素;
根据图像矩阵确定第一源区域图像与第一目标区域图像的相似度,其中,第一源区域图像与第一目标区域图像的相似度为图像矩阵中元素的最大值。
这里,对计算区域图像之间相似度的方式进行说明,以第一源区域图像与第一目标区域图像为例,在实际应用中,对于每个源区域图像与每个目标区域图像均可以采用相同的处理方式。
在实际实施时,对于源区域图像与目标区域图像而言,目标区域图像(w*h)的尺寸小于源区域图像(W*H)的尺寸,且目标区域图像需要滑动遍历整个源区域图像,于是,在水平方向需要滑动遍历(W–w+1)次,在垂直方向上需要滑动遍历(H–h+1)次,因此模板匹配得到的结果是一个尺寸为(W–w+1)*(H–h+1)的图像矩阵,记为R,可采用如下方式计算图像矩阵:
$$R(x,y)=\frac{\sum_{x',y'}\bigl(I'_2(x',y')\cdot I'_1(x+x',y+y')\bigr)}{\sqrt{\sum_{x',y'}I'_2(x',y')^{2}\cdot\sum_{x',y'}I'_1(x+x',y+y')^{2}}}$$
其中,
$$I'_2(x',y')=I_2(x',y')-\frac{1}{w\cdot h}\sum_{x'',y''}I_2(x'',y'')$$
$$I'_1(x+x',y+y')=I_1(x+x',y+y')-\frac{1}{w\cdot h}\sum_{x'',y''}I_1(x+x'',y+y'')$$
其中,R(x,y)表示矩阵R在(x,y)处的元素值,I_1表示源区域图像,I'_1表示归一化处理后的源区域图像,I_2表示目标区域图像,I'_2表示归一化处理后的目标区域图像,x的取值范围为大于或等于0,且小于或等于(W-w)的整数,y的取值范围为大于或等于0,且小于或等于(H-h)的整数,x'的取值范围为大于或等于0,且小于或等于w的整数,y'的取值范围为大于或等于0,且小于或等于h的整数,即只对源区域图像中起点为(x,y)、尺寸为w*h的区域进行操作,而对目标区域图像的整个图像进行操作。
图像矩阵中的元素的取值范围为0至1,选取其中最大的值作为两幅图像的相似度,相似度越大表示两幅图像越相似。可以理解的是,本申请采用的模板匹配算法为归一化相关系数匹配法(TM_CCOEFF_NORMED),在实际应用中,还可以采用平方差匹配法(CV_TM_SQDIFF)、归一化平方差匹配法(CV_TM_SQDIFF_NORMED)、相关匹配法(CV_TM_CCORR)、归一化相关匹配法(CV_TM_CCORR_NORMED)或者相关系数匹配法(CV_TM_CCOEFF)。
模板匹配算法可以有效地区分显微镜视野移动和抖动。地面或者桌面晃动会带来显微镜视野抖动,造成连续两幅图像有一个微小的偏移,而人为的移动带来的偏移通常很大。因此在使用模版匹配方法时,需要合理地设置W*H和w*h,可以近似地认为水平方向小于(W-w)/2且垂直方向小于(H-h)/2的偏移属于抖动,而水平方向大于等于(W-w)/2或垂直方向大于等于(H-h)/2的偏移属于移动。
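正文中区分抖动与移动的近似规则,可写成如下示意函数,其中W、H、w、h取正文示例值,函数返回的状态名仅为说明用途:

```python
def classify_offset(dx, dy, W=96, H=96, w=64, h=64):
    """按近似规则区分抖动与移动:
    水平偏移小于(W-w)/2且垂直偏移小于(H-h)/2的视为抖动, 否则视为移动。"""
    if abs(dx) < (W - w) / 2 and abs(dy) < (H - h) / 2:
        return "抖动"
    return "移动"
```

按示例参数,(W-w)/2与(H-h)/2均为16像素,小幅晃动不会被误判为视野移动。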
在一些实施例中,根据病理图像集合,可通过如下方式确定待评估图像所对应的第二状态:
获取待评估图像的清晰度与第一关联图像的清晰度,其中,第一关联图像属于病理图像集合,且第一关联图像为与待评估图像相邻的上一个图像;
若待评估图像的清晰度与第一关联图像的清晰度满足第一预设条件,则获取基准图像的清晰度;
若基准图像的清晰度与待评估图像的清晰度满足第二预设条件,则确定第二状态为调焦状态,其中,调焦状态为清晰状态转换至模糊状态,或模糊状态转换至清晰状态。
这里,对图像清晰度评估的方法进行说明,图7为本申请实施例中图像清晰度评估的一个方法流程图示意图,请参阅图7,包括:
在步骤B1中,通过显微镜相机采集多帧连续的图像。
在步骤B2中,判断当前图像(即待评估图像)相对上一幅图像(第一关联图像)的清晰度是否发生改变,如果发生改变,执行步骤B3;如果未发生改变,执行步骤B4。
这里,判断的方式为,首先获取待评估图像的清晰度与第一关联图像的清晰度,然后判断待评估图像的清晰度与第一关联图像的清晰度是否满足第一预设条件,如果满足第一预设条件,则确定待评估图像的清晰度与第一关联图像的清晰度发生改变,此时执行步骤B3。反之,如果不满足第一预设条件,则确定待评估图像的清晰度与第一关联图像的清晰度没有改变,此时执行步骤B4。
在步骤B3中,判断当前图像(即待评估图像)相对基准图像的清晰度是否发生改变。
这里,判断的方式为,首先获取基准图像的清晰度,然后判断基准图像的清晰度与待评估图像的清晰度是否满足第二预设条件,如果满足第二预设条件,则确定第二状态为调焦状态,此时,如果图像变模糊,则执行步骤B5,确定调焦状态为清晰状态转换至模糊状态。如果图像变清晰,则执行步骤B6,确定调焦状态为模糊状态转换至清晰状态。如果不满足第二预设条件,则执行步骤B7,即确定第二状态为静止状态。
在步骤B4中,确定当前图像(即待评估图像)为静止状态。
在步骤B5中,确定调焦状态为清晰状态转换至模糊状态。
在步骤B6中,确定调焦状态为模糊状态转换至清晰状态。
在步骤B7中,确定第二状态为静止状态。
基于步骤B4、步骤B5和步骤B6的情况,可以执行步骤B8。
在步骤B8中,将基准图像更新为当前图像(即待评估图像)。
应用本申请实施例提供的上述实时评估图像清晰度的方法,通过基准图像和双阈值能够解决图像清晰度对外界环境变化敏感的问题,从而能够更加可靠地推断出是否正在对设备进行调焦。
在一些实施例中,获取待评估图像的清晰度与第一关联图像的清晰度之后,还可执行如下操作:
若待评估图像的清晰度与第一关联图像的清晰度未满足第一预设条件,则将基准图像更新为待评估图像;
若基准图像的清晰度与待评估图像的清晰度未满足第二预设条件,则更新基准图像的清晰度;
相应的,确定第二状态为调焦状态之后,还可执行如下操作:
将基准图像更新为待评估图像。
在实际应用中,图像清晰度对外界环境变化比较敏感,设备的抖动或者相机自我调节(比如自动曝光或者自动白平衡等)都会带来图像清晰度的较大改变。本申请实施例可以通过以下手段来解决该问题。
在实际实施时,使用图像的拉普拉斯(Laplacian)矩阵的标准差作为图像清晰度。Laplacian矩阵刻画了图像的轮廓信息,当显微镜视野保持不变时,Laplacian矩阵的标准差越大,图像轮廓越清晰,图像清晰度就越大。除了采用Laplacian矩阵的标准差作为图像清晰度,还可以采用其他的指标,例如Laplacian矩阵的平均值或者信息熵等。
在图像处理中,可以使用如下3*3的模板对图像进行卷积操作,生成图像的Laplacian矩阵,该模板为:
$$\begin{pmatrix}0&1&0\\1&-4&1\\0&1&0\end{pmatrix}$$
Laplacian矩阵抽取了图像的边缘信息。图像越清晰,图像的边缘就越清晰,Laplacian矩阵中元素的取值波动就越大(边界处元素的取值越大),标准差就越大。
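上述以Laplacian矩阵标准差作为清晰度的做法,可用如下示意代码表达。这是基于NumPy的假设性实现,采用常见的四邻域3*3 Laplacian模板,未做边界填充,卷积结果尺寸比原图小一圈:

```python
import numpy as np

# 常见的四邻域3*3 Laplacian模板(示例)
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def laplacian_std(gray):
    """对灰度图做3*3卷积得到Laplacian矩阵, 返回其标准差作为图像清晰度。"""
    H, W = gray.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + H - 2, j:j + W - 2]
    return float(out.std())
```

图像越清晰、边缘越明显,卷积结果的取值波动越大,标准差也就越大;平坦图像的Laplacian响应为0。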
在获取到待评估图像的清晰度与第一关联图像的清晰度之后,如果待评估图像的清晰度与第一关联图像的清晰度未满足第一预设条件,则将基准(Benchmark)图像更新为待评估图像。使用Benchmark图像和连续两幅图像进行评估,当待评估图像和基准图像的清晰度差异小于给定的清晰度阈值(即基准图像的清晰度与待评估图像的清晰度未满足第二预设条件)时,可能的情况包括医生没有调焦、医生调焦幅度太小、显微镜抖动或者相机自我调节等。此时不需要更新基准图像,而是继续累计基准图像的清晰度,以便做出更准确的推断。累计的方式为清晰度+a或者清晰度-b,a和b为正数。
可以理解的是,第一预设条件可以是待评估图像的清晰度与第一关联图像的清晰度之差大于或等于清晰度阈值,第二预设条件可以是基准图像的清晰度与待评估图像的清晰度之差大于或等于清晰度阈值。
应用本申请实施例提供的上述双阈值的检测方式,使用基准图像和连续两幅图像进行评估,当前图像和基准图像的清晰度差异小于给定的阈值时,可能的情况包括医生没有调焦、医生调焦幅度太小、显微镜抖动或者相机自我调节等情况,这个时候不需要更新基准图像,而是继续累计基准图像的清晰度差异,从而有利于得到更准确的检测结果。
在一些实施例中,获取待评估图像的清晰度与第一关联图像的清晰度之后,还可执行如下操作:
判断待评估图像的清晰度与第一关联图像的清晰度之差是否大于或等于第一清晰度阈值;
若待评估图像的清晰度与第一关联图像的清晰度之差大于或等于第一清晰度阈值,则确定待评估图像的清晰度与第一关联图像的清晰度满足第一预设条件;
若待评估图像的清晰度与第一关联图像的清晰度之差小于第一清晰度阈值,则判断基准图像的清晰度与待评估图像的清晰度之差是否大于或等于第二清晰度阈值,其中,第二清晰度阈值大于第一清晰度阈值;
若基准图像的清晰度与待评估图像的清晰度之差大于或等于第二清晰度阈值,则确定基准图像的清晰度与待评估图像的清晰度满足第二预设条件。
在本申请实施例引入双阈值,即引入第一清晰度阈值和第二清晰度阈值,当对比当前图像和上一幅图像的清晰度时使用第一清晰度阈值,即判断待评估图像的清晰度与第一关联图像的清晰度之差是否大于或等于第一清晰度阈值,如果待评估图像的清晰度与第一关联图像的清晰度之差大于或等于第一清晰度阈值,则确定待评估图像的清晰度与第一关联图像的清晰度满足第一预设条件。反之,则不满足第一预设条件。
对比当前图像和基准图像的清晰度时使用高阈值,即判断基准图像的清晰度与待评估图像的清晰度之差是否大于或等于第二清晰度阈值,如果基准图像的清晰度与待评估图像的清晰度之差大于或等于第二清晰度阈值,则确定基准图像的清晰度与待评估图像的清晰度满足第二预设条件。反之,则不满足第二预设条件。
由于抖动会使图像清晰度发生较大改变,清晰度差异大于第一清晰度阈值时,无法推断是医生正在调焦还是显微镜在抖动;只有清晰度的差异大于第二清晰度阈值时,才可以推断医生正在调焦。因此,使用低阈值推断医生没有对显微镜调焦,而使用高阈值推断医生正在对显微镜调焦是更加可靠的。
需要说明的是,第一清晰度阈值可以设置为0.02,第一清晰度阈值为低阈值;第二清晰度阈值可以设置为0.1,第二清晰度阈值为高阈值。在实际应用中,还可以设置为其他参数,此处不做限定。
应用本申请实施例提供的上述双阈值的检测方式,当对比当前图像和上一幅图像的清晰度时使用低阈值,当对比当前图像和基准图像的清晰度时使用高阈值,使用低阈值可以推断医生没有对显微镜调焦,而使用高阈值推断医生正在对显微镜调焦,从而提升清晰度检测的可靠性。
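上述双阈值的判定流程可归纳为如下示意函数,其中阈值0.02与0.1为正文给出的示例值,返回的状态名仅为说明用的假设标识:

```python
def focus_state(cur, prev, bench, t1=0.02, t2=0.1):
    """cur/prev/bench分别为当前图像、上一幅图像与基准图像的清晰度。"""
    if abs(cur - prev) < t1:
        return "静止状态"  # 未满足第一预设条件
    if abs(cur - bench) < t2:
        return "静止状态"  # 未满足第二预设条件, 可能是抖动或相机自我调节
    if cur > bench:
        return "调焦状态: 模糊状态转换至清晰状态"
    return "调焦状态: 清晰状态转换至模糊状态"
```

相邻两帧清晰度变化超过低阈值、但相对基准图像的变化未达到高阈值的情况,被保守地判为静止,从而避免把抖动误判为调焦。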
结合上述介绍,下面将对本申请实施例提供的基于病理图像的处理方法进行介绍,在实际实施时,该方法可由终端(如智能显微镜)或服务器单独实现,或由终端及服务器协同实现,以终端单独实现为例,图8为本申请实施例提供的基于病理图像的处理方法流程示意图,请参阅图8,包括:
201、终端获取病理图像集合,其中,病理图像集合包括待评估图像、第一关联图像以及第二关联图像,第一关联图像为待评估图像相邻的上一个图像,第二关联图像为第一关联图像相邻的上一个图像。
在实际应用中,由智能显微镜通过相机采集病理图像集合,从而获取到病理图像集合,其中,病理图像集合包括多张连续的病理图像,即至少包括一张待评估图像以及多张关联图像,关联图像是指在待评估图像之前相邻的前几帧图像。
可以理解的是,智能显微镜也可以将病理图像集合发送给服务器, 由服务器判断待评估图像所对应的图像状态。
202、根据病理图像集合确定待评估图像所对应的第一状态,其中,第一状态用于表示待评估图像的移动变化情况。
在实际应用中,智能显微镜或者服务器对待评估图像进行移动状态的评估,从而得到第一状态,该第一状态用于表示待评估图像的移动变化情况。可以理解的是,评估移动状态需要采用病理图像集合中至少三帧病理图像,即包括待评估图像、待评估图像的上一帧病理图像(即第一关联图像)以及待评估图像的上上帧病理图像(即第二关联图像)。
203、若第一状态为移动状态转换至静止状态,则存储待评估图像。
在实际应用中,如果确定待评估图像的第一状态为移动状态转换至静止状态,则将存储该待评估图像。
204、若第一状态为静止状态,则根据病理图像集合确定待评估图像所对应的第二状态,第二状态用于表示待评估图像的清晰度变化情况。
在实际应用中,如果确定待评估图像的第一状态为静止状态,那么继续对待评估图像进行清晰状态的评估,从而得到第二状态,该第二状态表示待评估图像的清晰度变化情况。可以理解的是,评估清晰度状态需要采用病理图像集合中至少两帧病理图像,即包括待评估图像以及待评估图像的上一帧病理图像(即第一关联图像)。
205、若第二状态为模糊状态转换至清晰状态,则存储待评估图像。
在实际应用中,如果确定待评估图像的第二状态为模糊状态转换至清晰状态,则将存储该待评估图像。
图9为本申请实施例提供的自动保存图像任务的一个方法流程示意图,请参阅图9,首先通过智能显微镜的相机采集到大量的病理图像,然后对这些病理图像进行图像状态的评估,即评估移动状态以及清晰度状态,基于评估结果,可以得到6种图像状态下的病理图像,包括静止状态、移动状态、移动状态转换至静止状态、静止状态转换至移动状态、调焦(清晰状态转换至模糊状态)状态以及调焦(模糊状态转换至清晰状态)状态。在实际应用中,仅存储移动状态转换至静止状态下的病理图像,以及调焦(模糊状态转换至清晰状态)状态下的病理图像即可,至此,完成病理图像的自动保存。
应用本申请上述实施例中,可以自动保存智能显微镜采集到的病理图像,用于后续的病理报告、交流和备份等。基于自动保存图像的任务,对病理图像集合进行筛选,只需要保存移动状态转换至静止状态的图像,和模糊状态转换至清晰状态的图像,因为其它状态的图像是冗余的或者低质量的,通过上述方式,一方面不需要医务人员手动采图,提升了工作效率,另一方面减少了图像所占用的存储空间。
结合上述介绍,接下来对本申请实施例基于病理图像的处理方法进行介绍,本申请实施例提供的基于病理图像的处理方法的流程示意图,请参阅图10,包括:
301、获取病理图像集合,其中,病理图像集合包括待评估图像、第一关联图像以及第二关联图像,第一关联图像为待评估图像相邻的上一个图像,第二关联图像为第一关联图像相邻的上一个图像。
在实际应用中,由智能显微镜通过相机采集病理图像集合,从而获取到病理图像集合,其中,病理图像集合包括多张连续的病理图像,即至少包括一张待评估图像以及多张关联图像,关联图像是指在待评估图像之前相邻的前几帧图像,即第一关联图像为待评估图像相邻的上一个图像,第二关联图像为第一关联图像相邻的上一个图像。
可以理解的是,智能显微镜也可以将病理图像集合发送给服务器,由服务器判断待评估图像所对应的图像状态。
302、根据病理图像集合确定待评估图像所对应的第一状态,其中, 第一状态用于表示待评估图像的移动变化情况。
在实际应用中,智能显微镜或者服务器对待评估图像进行移动状态的评估,从而得到第一状态,该第一状态用于表示待评估图像的移动变化情况。可以理解的是,评估移动状态需要采用病理图像集合中至少三帧病理图像,即包括待评估图像、待评估图像的上一帧病理图像(即第一关联图像)以及待评估图像的上上帧病理图像(即第二关联图像)。
303、若第一状态为移动状态转换至静止状态,则对待评估图像进行人工智能诊断。
在实际应用中,如果确定待评估图像的第一状态为移动状态转换至静止状态,则对该待评估图像进行AI辅助诊断。
其中,临床决策支持系统(clinical decision support system)是用于辅助医生在诊断时进行决策的支持系统,该系统通过对病患的数据进行分析,为医生给出诊断建议,医生再结合自己的专业知识进行判断,从而使诊断更快且更精准。AI在诊断领域的应用,主要针对放射科医师数量的增长速率不及影像数据的增长速度、医疗人才资源分配不均、误诊率较高等问题。AI可用于对病例数据进行分析,为患者提供更可靠的诊断建议,并为医师节省时间。
304、若第一状态为静止状态,则根据病理图像集合确定待评估图像所对应的第二状态,第二状态用于表示待评估图像的清晰度变化情况。
在实际应用中,如果确定待评估图像的第一状态为静止状态,那么继续对待评估图像进行清晰状态的评估,从而得到第二状态,该第二状态表示待评估图像的清晰度变化情况。可以理解的是,评估清晰度状态需要采用病理图像集合中至少两帧病理图像,即包括待评估图像以及待评估图像的上一帧病理图像(即第一关联图像)。
305、若第二状态为模糊状态转换至清晰状态,则对待评估图像进行人工智能诊断。
在实际应用中,如果确定待评估图像的第二状态为模糊状态转换至清晰状态,则对该待评估图像进行AI辅助诊断。
为了便于介绍,请参阅图11,图11为本申请实施例中实时人工智能辅助诊断任务的一个流程图示意图,如图所示,首先通过智能显微镜的相机采集到大量的病理图像,然后对这些病理图像进行图像状态的评估,即评估移动状态以及清晰度状态,基于评估结果,可以得到6种图像状态下的病理图像,包括静止状态、移动状态、移动状态转换至静止状态、静止状态转换至移动状态、调焦(清晰状态转换至模糊状态)状态以及调焦(模糊状态转换至清晰状态)状态。在实际应用中,仅对移动状态转换至静止状态下的病理图像,以及调焦(模糊状态转换至清晰状态)状态下的病理图像进行AI辅助诊断。
这里,基于图像的实时AI辅助诊断是指在医生使用病理显微镜时,实时地将相机采集的图像送入AI辅助诊断模块,并将AI辅助诊断结果反馈给医生,提升医生的工作效率。本申请实施例可以对送入AI辅助诊断模块的图像进行筛选,只选择移动状态转换至静止状态的图像,和模糊状态转换至清晰状态的图像,这是由于这两种状态的图像是医生需要认真观察和真正感兴趣的,从而极大降低了AI辅助诊断模块的吞吐压力。
继续对本申请实施例提供的基于病理图像的处理方法进行介绍,图12为本申请实施例提供的基于病理图像的处理方法流程示意图,请参阅图12,包括:
401、获取病理图像集合,其中,病理图像集合包括待评估图像、第一关联图像以及第二关联图像,第一关联图像为待评估图像相邻的上一个图像,第二关联图像为第一关联图像相邻的上一个图像。
在实际应用中,由智能显微镜通过相机采集病理图像集合,从而获取到病理图像集合,其中,病理图像集合包括多张连续的病理图像,即至少包括一张待评估图像以及多张关联图像,关联图像是指在待评估图像之前相邻的前几帧图像,即第一关联图像为待评估图像相邻的上一个图像,第二关联图像为第一关联图像相邻的上一个图像。
可以理解的是,智能显微镜也可以将病理图像集合发送给服务器,由服务器判断待评估图像所对应的图像状态。
402、根据病理图像集合确定待评估图像所对应的第一状态,第一状态用于表示待评估图像的移动变化情况。
在实际应用中,智能显微镜或者服务器对待评估图像进行移动状态的评估,从而得到第一状态,该第一状态用于表示待评估图像的移动变化情况。可以理解的是,评估移动状态需要采用病理图像集合中至少三帧病理图像,即包括待评估图像、待评估图像的上一帧病理图像(即第一关联图像)以及待评估图像的上上帧病理图像(即第二关联图像)。
403、若第一状态为移动状态转换至静止状态,或第一状态为静止状态转换至移动状态,或第一状态为移动状态,则传输待评估图像。
在实际应用中,如果确定待评估图像的第一状态为非静止状态(比如移动状态转换至静止状态、静止状态转换至移动状态或移动状态),则传输该待评估图像。
404、若第一状态为静止状态,则根据病理图像集合确定待评估图像所对应的第二状态,第二状态用于表示待评估图像的清晰度变化情况。
在实际应用中,如果确定待评估图像的第一状态为静止状态,那么继续对待评估图像进行清晰状态的评估,从而得到第二状态,该第二状态表示待评估图像的清晰度变化情况。可以理解的是,评估清晰度状态需要采用病理图像集合中至少两帧病理图像,即包括待评估图像以及待评估图像的上一帧病理图像(即第一关联图像)。
405、若第二状态为模糊状态转换至清晰状态,或第二状态为清晰状态转换至模糊状态,则传输待评估图像。
在实际应用中,如果确定待评估图像的第二状态为模糊状态转换至清晰状态,或者清晰状态转换至模糊状态,则传输待评估图像。
为了便于介绍,请参阅图13,图13为本申请实施例中显微镜视野远程分享任务的一个流程图示意图,如图所示,首先通过智能显微镜的相机采集到大量的病理图像,然后对这些病理图像进行图像状态的评估,即评估移动状态以及清晰度状态,基于评估结果,可以得到6种图像状态下的病理图像,包括静止状态、移动状态、移动状态转换至静止状态、静止状态转换至移动状态、调焦(清晰状态转换至模糊状态)状态以及调焦(模糊状态转换至清晰状态)状态。在实际应用中,只要是非静止状态的病理图像,均可以进行传输。
在实际应用中,医院会诊或交流时,操作显微镜的医生需要把显微镜视野远程分享给其他医生查看,此时需要实时地把显微镜自带相机连续采集的病理图像通过网络传输的方式发送给对方,通过上述方式,可以在网络传输前对病理图像进行筛选,排除静止状态的病理图像,因为这种状态的病理图像是冗余的,从而降低了需要网络传输的数据量。
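上述三类任务(自动保存、实时AI辅助诊断、显微镜视野远程分享)对六种图像状态的筛选规则,可汇总为如下示意映射。任务名与状态名均为说明用的假设标识,集合内容依据正文描述归纳:

```python
# 各任务需要处理的图像状态集合(静止状态在三类任务中均被过滤)
TASK_STATES = {
    "自动保存": {"移动状态转换至静止状态", "模糊状态转换至清晰状态"},
    "AI辅助诊断": {"移动状态转换至静止状态", "模糊状态转换至清晰状态"},
    "远程分享": {"移动状态转换至静止状态", "静止状态转换至移动状态", "移动状态",
                 "模糊状态转换至清晰状态", "清晰状态转换至模糊状态"},
}

def should_process(task, state):
    """判断某一图像状态是否需要被对应任务处理。"""
    return state in TASK_STATES[task]
```

这样,同一套图像状态评估结果可以被不同任务复用,只需在下游按任务类型查表筛选即可。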
下面对本申请实施例提供的图像状态确定装置进行详细描述,请参阅图14,图14为本申请实施例提供的图像状态确定装置的组成结构示意图,图像状态确定装置50包括:
获取模块501,配置为获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;
确定模块502,配置为根据所述获取模块501获取的所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;
所述确定模块502,还配置为若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况。
通过上述方式,能够对采集到的图像进行移动状态评估以及清晰度状态评估,由此确定不同图像的图像状态,而图像状态往往反映了用户操作对图像视野的变化,在很多不同的任务中发挥非常重要的作用,从而可以基于不同的图像状态对图像进行合理的操作,降低了图像处理的难度,提升了任务处理效率。
在一些实施例中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
所述确定模块502,还配置为获取所述待评估图像与第一关联图像的相似度,其中,所述第一关联图像为所述待评估图像相邻的上一个图像;
若所述待评估图像与第一关联图像的相似度大于相似度阈值,则获取所述第一关联图像与第二关联图像的相似度,其中,所述第二关联图像属于所述病理图像集合,且所述第二关联图像为所述第一关联图像相邻的上一个图像;
若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为所述静止状态;
若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态转换至静止状态。
通过上述方式,能够基于相似度来评价两个图像之间的关联性,从而为方案提供合理且可靠的实现方式。
在一些实施例中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
所述确定模块502,还配置为获取所述待评估图像与第一关联图像的相似度,其中,所述第一关联图像为所述待评估图像相邻的上一个图像;
若所述待评估图像与第一关联图像的相似度小于或等于相似度阈值,则获取所述第一关联图像与第二关联图像的相似度,其中,所述第二关联图像属于所述病理图像集合,且所述第二关联图像为所述第一关联图像相邻的上一个图像;
若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为静止状态转换至移动状态;
若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态。
通过上述方式,能够基于相似度来评价两个图像之间的关联性,从而为方案提供合理且可靠的实现方式。
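上述依据两组相似度判定第一状态的分支逻辑,可合并为如下示意函数,其中相似度阈值为假设的示例参数:

```python
def first_state(sim_cur_prev, sim_prev_prev2, thresh=0.8):
    """sim_cur_prev为待评估图像与第一关联图像的相似度,
    sim_prev_prev2为第一关联图像与第二关联图像的相似度。"""
    if sim_cur_prev > thresh:
        if sim_prev_prev2 > thresh:
            return "静止状态"
        return "移动状态转换至静止状态"
    if sim_prev_prev2 > thresh:
        return "静止状态转换至移动状态"
    return "移动状态"
```

两组相似度各与阈值比较一次,即可把连续三帧图像映射到四种移动变化状态之一。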
在一些实施例中,所述确定模块502,还配置为根据所述待评估图像确定源区域病理图像集合,其中,所述源区域病理图像集合包括M个源区域图像,所述M为大于1的整数;
根据所述第一关联图像确定目标区域病理图像集合,其中,所述目标区域病理图像集合包括M个目标区域图像,所述目标区域图像的尺寸小于所述源区域图像的尺寸;
从所述源区域病理图像集合中抽取第一源区域图像,并从所述目标区域病理图像集合中抽取第一目标区域图像;
若第一源区域图像与第一目标区域图像均属于背景图像,则从所述源区域病理图像集合中抽取第二源区域图像,并从所述目标区域病理图像集合中抽取第二目标区域图像,并检测第二源区域图像以及第二目标区域图像是否属于所述背景图像;
若所述第一源区域图像与所述第一目标区域图像不都属于所述背景图像,则计算所述第一源区域图像与所述第一目标区域图像的相似度。
应用本申请实施例提供的上述获取图像之间相似度的方式,将图像分为若干个区域,对区域进行相似度计算,而不直接对整个图像进行相似度计算,这样一方面可以尽量保证相似度判断的准确度,如果所有的区域都是背景图像,那么整个图像很大概率不包括有用信息,另一方面,区域的尺寸远远小于整个图像的尺寸,即使模版匹配方法的时间复杂度较高,但也可以在较短的时间内完成评估。
在一些实施例中,所述确定模块502,还配置为计算所述第二源区域图像的像素值标准差;
若所述第二源区域图像的像素值标准差小于或等于标准差阈值,则确定所述第二源区域图像属于所述背景图像;
计算所述第二目标区域图像的像素值标准差;
若所述第二目标区域图像的像素值标准差小于或等于所述标准差阈值,则确定所述第二目标区域图像属于所述背景图像。
本申请实施例中,利用像素值标准差可以更好地表示图像的变化情况,真实地反映图像内各个像素之间的离散程度,从而提升检测的准确性。
在一些实施例中,所述确定模块502,还配置为根据所述第一源区域图像与所述第一目标区域图像,计算得到图像矩阵,其中,所述图像矩阵包括多个元素;
根据所述图像矩阵确定所述第一源区域图像与所述第一目标区域图像的相似度,其中,所述第一源区域图像与所述第一目标区域图像的相似度为所述图像矩阵中元素的最大值。
应用本申请实施例提供的区域图像之间相似度的计算方式,为方案的实现提供了具体的操作方式,从而提升方案的可行性和可操作性。
在一些实施例中,所述确定模块502,还配置为获取所述待评估图像的清晰度与第一关联图像的清晰度,其中,所述第一关联图像属于所述病理图像集合,且所述第一关联图像为所述待评估图像相邻的上一个图像;
若所述待评估图像的清晰度与所述第一关联图像的清晰度满足第一预设条件,则获取基准图像的清晰度;
若所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件,则确定所述第二状态为调焦状态,其中,所述调焦状态为清晰状态转换至模糊状态,或模糊状态转换至清晰状态。
本申请实施例中,通过基准图像和双阈值能够解决图像清晰度对外界环境变化敏感的问题,从而能够更加可靠地推断出是否正在对设备进行调焦。
在一些实施例中,所述确定模块502,还配置为获取所述待评估图像的清晰度与第一关联图像的清晰度之后,若所述待评估图像的清晰度与所述第一关联图像的清晰度未满足所述第一预设条件,则将所述基准图像更新为所述待评估图像;
若所述基准图像的清晰度与所述待评估图像的清晰度未满足所述第二预设条件,则更新所述基准图像的清晰度;
确定所述第二状态为调焦状态之后,将所述基准图像更新为所述待评估图像。
本申请实施例中,使用基准图像和连续两幅图像进行评估,当前图像和基准图像的清晰度差异小于给的阈值时,可能的情况包括医生没有调焦、医生调焦幅度太小、显微镜抖动或者相机自我调节等情况,这个时候不需要更新基准图像,而是继续累计基准图像的清晰度差异,从而有利于得到更准确的检测结果。
在一些实施例中,所述确定模块502,还配置为获取所述待评估图像的清晰度与第一关联图像的清晰度之后,判断所述待评估图像的清晰度与所述第一关联图像的清晰度之差是否大于或等于第一清晰度阈值;
若所述待评估图像的清晰度与所述第一关联图像的清晰度之差大于或等于所述第一清晰度阈值,则确定所述待评估图像的清晰度与所述第一关联图像的清晰度满足所述第一预设条件;
若所述待评估图像的清晰度与所述第一关联图像的清晰度之差小于所述第一清晰度阈值,则判断所述基准图像的清晰度与所述待评估图像的清晰度之差是否大于或等于第二清晰度阈值,其中,所述第二清晰度阈值大于所述第一清晰度阈值;
若所述基准图像的清晰度与所述待评估图像的清晰度之差大于或等于所述第二清晰度阈值,则确定所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件。
本申请实施例中,当对比当前图像和上一幅图像的清晰度时使用低阈值,当对比当前图像和基准图像的清晰度时使用高阈值,使用低阈值可以推断医生没有对显微镜调焦,而使用高阈值推断医生正在对显微镜调焦,从而提升清晰度检测的可靠性。
在一些实施例中,图15为本申请实施例提供的图像状态确定装置50的组成结构示意图,请参阅图15,在图14的基础上,所述图像状态确定装置50还包括存储模块503;
所述存储模块503,配置为在所述确定模块502根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,若所述第一状态为移动状态转换至静止状态,则存储所述待评估图像;
若所述第二状态为模糊状态转换至清晰状态,则存储所述待评估图像。
本申请实施例中,可以自动保存智能显微镜采集到的病理图像,用于后续的病理报告、交流和备份等。基于自动保存图像的任务,对病理图像集合进行筛选,只需要保存移动状态转换至静止状态的图像,和模糊状态转换至清晰状态的图像,因为其它状态的图像是冗余的或者低质量的,通过上述方式,一方面不需要医务人员手动采图,提升了工作效率,另一方面减少了图像所占用的存储空间。
在一些实施例中,图16为本申请实施例提供的图像状态确定装置50的组成结构示意图,请参阅图16,在图14的基础上,所述图像状态确定装置50还包括诊断模块504;
所述诊断模块504,配置为在所述确定模块502根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,若所述第一状态为移动状态转换至静止状态,则对所述待评估图像进行病理分析;
若所述第二状态为模糊状态转换至清晰状态,则对所述待评估图像进行所述病理分析。
本申请实施例中,基于图像的实时AI辅助诊断是指在医生使用病理显微镜时,实时地将相机采集的图像送入AI辅助诊断模块,并将AI辅助诊断结果反馈给医生,提升医生的工作效率。本申请可以对送入AI辅助诊断模块的图像进行筛选,只选择移动状态转换至静止状态的图像,和模糊状态转换至清晰状态的图像,这是由于这两种状态的图像是医生需要认真观察和真正感兴趣的,从而极大降低了AI辅助诊断模块的吞吐压力。
在一些实施例中,图17为本申请实施例提供的图像状态确定装置50的组成结构示意图,请参阅图17,在图14的基础上,所述图像状态确定装置50还包括传输模块505;
所述传输模块505,配置为在所述确定模块502根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,若所述第一状态为移动状态转换至静止状态,或所述第一状态为静止状态转换至移动状态,或所述第一状态为移动状态,则传输所述待评估图像;
若所述第二状态为模糊状态转换至清晰状态,或所述第二状态为清晰状态转换至模糊状态,则传输所述待评估图像。
本申请实施例中,医院会诊或交流时,操作显微镜的医生需要把显微镜视野远程分享给其他医生查看,此时需要实时地把显微镜自带相机连续采集的病理图像通过网络传输的方式发送给对方,通过上述方式,可以在网络传输前对病理图像进行筛选,排除静止状态的病理图像,因为这种状态的病理图像是冗余的,从而降低了需要网络传输的数据量。
在实际实施时,本申请实施例提供的图像状态确定装置可由终端设备实施,图18为本申请实施例提供的终端设备的组成结构示意图,如图18所示,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。该终端设备可以为包括手机、平板电脑、个人数字助理(Personal Digital Assistant,PDA)、销售终端设备(Point of Sales,POS)、车载电脑等任意终端设备,以终端设备为手机为例:
图18示出的是与本申请实施例提供的终端设备相关的手机的部分结构的框图。参考图18,手机包括:射频(Radio Frequency,RF)电路910、存储器920、输入单元930、显示单元940、传感器950、音频电路960、无线保真(Wireless Fidelity,WiFi)模块970、处理器980、以及电源990等部件。本领域技术人员可以理解,图18中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图18对手机的各个构成部件进行具体的介绍:
RF电路910可配置为收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器980处理;另外,将涉及上行的数据发送给基站。通常,RF电路910包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路910还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器920可配置为存储软件程序以及模块,处理器980通过运行存储在存储器920的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器920可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器920可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
在一些实施例中,所述存储器920,还配置为存储计算机程序,该计算机程序用于执行本申请实施例提供的基于病理图像的图像状态确定方法。
输入单元930可配置为接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。在实际实施时,输入单元930可包括触控面板931以及其他输入设备932。触控面板931,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、 触笔等任何适合的物体或附件在触控面板931上或在触控面板931附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板931可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器980,并能接收处理器980发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板931。除了触控面板931,输入单元930还可以包括其他输入设备932。在实际实施时,其他输入设备932可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元940可配置为显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元940可包括显示面板941,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板941。进一步的,触控面板931可覆盖显示面板941,当触控面板931检测到在其上或附近的触摸操作后,传送给处理器980以确定触摸事件的类型,随后处理器980根据触摸事件的类型在显示面板941上提供相应的视觉输出。虽然在图18中,触控面板931与显示面板941是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板931与显示面板941集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器950,比如光传感器、运动传感器以及其他传感器。在实际实施时,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板941的亮度,接近传感器可在手机移动到耳边时,关闭显示面板941和/或背光。 作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路960、扬声器961,传声器962可提供用户与手机之间的音频接口。音频电路960可将接收到的音频数据转换后的电信号,传输到扬声器961,由扬声器961转换为声音信号输出;另一方面,传声器962将收集的声音信号转换为电信号,由音频电路960接收后转换为音频数据,再将音频数据输出处理器980处理后,经RF电路910以发送给比如另一手机,或者将音频数据输出至存储器920以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块970可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图18示出了WiFi模块970,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器980是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器920内的软件程序和/或模块,以及调用存储在存储器920内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器980可包括一个或多个处理单元;可选的,处理器980可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器980中。
手机还包括给各个部件供电的电源990(比如电池),可选的,电源 可以通过电源管理系统与处理器980逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本申请实施例中,该终端设备所包括的处理器980,还配置为执行存储器920中存储的计算机程序,以实现本申请实施例提供的上述基于病理图像的图像状态确定方法。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。
工业实用性
本申请实施例中通过显微镜获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;根据所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况。如此,能够对采集到的图像进行移动状态评估以及清晰度状态评估,由此确定不同图像的图像状态,而病理图像的图像状态往往反映了用户对显微镜的操作及显微镜内图像视野的变化,进而可根据图像状态和任务类型,对显微镜相机采集的病理图像进行筛选,协助完成任务目标,降低了图像处理的难度,提升了任务处理效率。

Claims (28)

  1. 一种基于病理图像的图像状态确定方法,所述方法包括:
    通过显微镜获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;
    根据所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;
    若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况。
  2. 根据权利要求1所述的方法,其中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第一状态,包括:
    获取所述待评估图像与所述第一关联图像的相似度,所述第一关联图像为所述待评估图像相邻的上一个图像;
    若所述待评估图像与第一关联图像的相似度大于相似度阈值,则获取所述第一关联图像与所述第二关联图像的相似度,所述第二关联图像为所述第一关联图像相邻的上一个图像;
    若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为所述静止状态;
    若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态转换至静止状态。
  3. 根据权利要求1所述的方法,其中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第一状态,包括:
    获取所述待评估图像与所述第一关联图像的相似度,所述第一关联图像为所述待评估图像相邻的上一个图像;
    若所述待评估图像与第一关联图像的相似度小于或等于相似度阈值,则获取所述第一关联图像与第二关联图像的相似度,所述第二关联图像为所述第一关联图像相邻的上一个图像;
    若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为静止状态转换至移动状态;
    若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态。
  4. 根据权利要求2或3所述的方法,其中,所述获取所述待评估图像与所述第一关联图像的相似度,包括:
    根据所述待评估图像确定源区域病理图像集合,其中,所述源区域病理图像集合包括M个源区域图像,所述M为大于1的整数;
    根据所述第一关联图像确定目标区域病理图像集合,其中,所述目标区域病理图像集合包括M个目标区域图像,所述目标区域图像的尺寸小于所述源区域图像的尺寸;
    从所述源区域病理图像集合中抽取第一源区域图像,并从所述目标区域病理图像集合中抽取第一目标区域图像;
    若所述第一源区域图像与所述第一目标区域图像均属于背景图像,则从所述源区域病理图像集合中抽取第二源区域图像,并从所述目标区域病理图像集合中抽取第二目标区域图像,并检测所述第二源区域图像以及所述第二目标区域图像是否属于所述背景图像;
    若所述第一源区域图像与所述第一目标区域图像不都属于所述背景图像,则计算所述第一源区域图像与所述第一目标区域图像的相似度,并将计算得到的相似度作为所述待评估图像与所述第一关联图像的相似度。
  5. 根据权利要求4所述的方法,其中,所述检测所述第二源区域图像以及所述第二目标区域图像是否属于所述背景图像,包括:
    计算所述第二源区域图像的像素值标准差;
    若所述第二源区域图像的像素值标准差小于或等于标准差阈值,则确定所述第二源区域图像属于所述背景图像;
    计算所述第二目标区域图像的像素值标准差;
    若所述第二目标区域图像的像素值标准差小于或等于所述标准差阈值,则确定所述第二目标区域图像属于所述背景图像。
  6. 根据权利要求4所述的方法,其中,所述计算所述第一源区域图像与所述第一目标区域图像的相似度,包括:
    根据所述第一源区域图像与所述第一目标区域图像,计算得到图像矩阵,其中,所述图像矩阵包括多个元素;
    根据所述图像矩阵确定所述第一源区域图像与所述第一目标区域图像的相似度,其中,所述第一源区域图像与所述第一目标区域图像的相似度为所述图像矩阵中元素的最大值。
  7. 根据权利要求1所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第二状态,包括:
    获取所述待评估图像的清晰度与第一关联图像的清晰度,其中,所述第一关联图像属于所述病理图像集合,且所述第一关联图像为与所述待评估图像相邻的上一个图像;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度满足第一预设条件,则获取基准图像的清晰度;
    若所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件,则确定所述第二状态为调焦状态,其中,所述调焦状态为清晰状态转换至模糊状态,或模糊状态转换至清晰状态。
  8. 根据权利要求7所述的方法,其中,所述获取所述待评估图像的清晰度与第一关联图像的清晰度之后,所述方法还包括:
    若所述待评估图像的清晰度与所述第一关联图像的清晰度未满足所述第一预设条件,则将所述基准图像更新为所述待评估图像;
    若所述基准图像的清晰度与所述待评估图像的清晰度未满足所述第二预设条件,则更新所述基准图像的清晰度;
    所述确定所述第二状态为调焦状态之后,所述方法还包括:
    将所述基准图像更新为所述待评估图像。
  9. 根据权利要求7所述的方法,其中,所述获取所述待评估图像的清晰度与第一关联图像的清晰度之后,所述方法还包括:
    判断所述待评估图像的清晰度与所述第一关联图像的清晰度之差是否大于或等于第一清晰度阈值;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度之差大于或等于所述第一清晰度阈值,则确定所述待评估图像的清晰度与所述第一关联图像的清晰度满足所述第一预设条件;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度之差小于所述第一清晰度阈值,则判断所述基准图像的清晰度与所述待评估图像的清晰度之差是否大于或等于第二清晰度阈值,其中,所述第二清晰度阈值大于所述第一清晰度阈值;
    若所述基准图像的清晰度与所述待评估图像的清晰度之差大于或等于所述第二清晰度阈值,则确定所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件。
  10. 根据权利要求1所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,则存储所述待评估图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,则存储所述待评估图像。
  11. 根据权利要求1所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,则对所述待评估图像进行病理分析;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,则对所述待评估图像进行所述病理分析。
  12. 根据权利要求1所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,或所述第一状态为静止状态转换至移动状态,或所述第一状态为移动状态,则传输所述待评估图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,或所述第二状态为清晰状态转换至模糊状态,则传输所述待评估图像。
  13. 一种基于病理图像的图像状态确定方法,所述方法由终端设备执行,所述终端设备包括有一个或多个处理器以及存储器,以及一个或一个以上的程序,其中,所述一个或一个以上的程序存储于存储器中,所述程序可以包括一个或一个以上的每一个对应于一组指令的单元,所述一个或多个处理器被配置为执行指令;所述方法包括:
    通过显微镜获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;
    根据所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;
    若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况。
  14. 根据权利要求13所述的方法,其中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第一状态,包括:
    获取所述待评估图像与所述第一关联图像的相似度,所述第一关联图像为所述待评估图像相邻的上一个图像;
    若所述待评估图像与第一关联图像的相似度大于相似度阈值,则获取所述第一关联图像与所述第二关联图像的相似度,所述第二关联图像为所述第一关联图像相邻的上一个图像;
    若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为所述静止状态;
    若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态转换至静止状态。
  15. 根据权利要求13所述的方法,其中,所述关联图像包括连续的两帧图像,所述连续的两帧图像分别为第一关联图像及第二关联图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第一状态,包括:
    获取所述待评估图像与所述第一关联图像的相似度,所述第一关联图像为所述待评估图像相邻的上一个图像;
    若所述待评估图像与第一关联图像的相似度小于或等于相似度阈值,则获取所述第一关联图像与第二关联图像的相似度,所述第二关联图像为所述第一关联图像相邻的上一个图像;
    若所述第一关联图像与第二关联图像的相似度大于所述相似度阈值,则确定所述第一状态为静止状态转换至移动状态;
    若所述第一关联图像与第二关联图像的相似度小于或等于所述相似度阈值,则确定所述第一状态为移动状态。
  16. 根据权利要求14或15所述的方法,其中,所述获取所述待评估图像与所述第一关联图像的相似度,包括:
    根据所述待评估图像确定源区域病理图像集合,其中,所述源区域病理图像集合包括M个源区域图像,所述M为大于1的整数;
    根据所述第一关联图像确定目标区域病理图像集合,其中,所述目标区域病理图像集合包括M个目标区域图像,所述目标区域图像的尺寸小于所述源区域图像的尺寸;
    从所述源区域病理图像集合中抽取第一源区域图像,并从所述目标区域病理图像集合中抽取第一目标区域图像;
    若所述第一源区域图像与所述第一目标区域图像均属于背景图像,则从所述源区域病理图像集合中抽取第二源区域图像,并从所述目标区域病理图像集合中抽取第二目标区域图像,并检测所述第二源区域图像以及所述第二目标区域图像是否属于所述背景图像;
    若所述第一源区域图像与所述第一目标区域图像不都属于所述背景图像,则计算所述第一源区域图像与所述第一目标区域图像的相似度,并将计算得到的相似度作为所述待评估图像与所述第一关联图像的相似度。
  17. 根据权利要求16所述的方法,其中,所述检测所述第二源区域图像以及所述第二目标区域图像是否属于所述背景图像,包括:
    计算所述第二源区域图像的像素值标准差;
    若所述第二源区域图像的像素值标准差小于或等于标准差阈值,则确定所述第二源区域图像属于所述背景图像;
    计算所述第二目标区域图像的像素值标准差;
    若所述第二目标区域图像的像素值标准差小于或等于所述标准差阈值,则确定所述第二目标区域图像属于所述背景图像。
  18. 根据权利要求16所述的方法,其中,所述计算所述第一源区域图像与所述第一目标区域图像的相似度,包括:
    根据所述第一源区域图像与所述第一目标区域图像,计算得到图像矩阵,其中,所述图像矩阵包括多个元素;
    根据所述图像矩阵确定所述第一源区域图像与所述第一目标区域图像的相似度,其中,所述第一源区域图像与所述第一目标区域图像的相似度为所述图像矩阵中元素的最大值。
  19. 根据权利要求13所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第二状态,包括:
    获取所述待评估图像的清晰度与第一关联图像的清晰度,其中,所述第一关联图像属于所述病理图像集合,且所述第一关联图像为与所述待评估图像相邻的上一个图像;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度满足第一预设条件,则获取基准图像的清晰度;
    若所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件,则确定所述第二状态为调焦状态,其中,所述调焦状态为清晰状态转换至模糊状态,或模糊状态转换至清晰状态。
  20. 根据权利要求19所述的方法,其中,所述获取所述待评估图像的清晰度与第一关联图像的清晰度之后,所述方法还包括:
    若所述待评估图像的清晰度与所述第一关联图像的清晰度未满足所述第一预设条件,则将所述基准图像更新为所述待评估图像;
    若所述基准图像的清晰度与所述待评估图像的清晰度未满足所述第二预设条件,则更新所述基准图像的清晰度;
    所述确定所述第二状态为调焦状态之后,所述方法还包括:
    将所述基准图像更新为所述待评估图像。
  21. 根据权利要求19所述的方法,其中,所述获取所述待评估图像的清晰度与第一关联图像的清晰度之后,所述方法还包括:
    判断所述待评估图像的清晰度与所述第一关联图像的清晰度之差是否大于或等于第一清晰度阈值;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度之差大于或等于所述第一清晰度阈值,则确定所述待评估图像的清晰度与所述第一关联图像的清晰度满足所述第一预设条件;
    若所述待评估图像的清晰度与所述第一关联图像的清晰度之差小于所述第一清晰度阈值,则判断所述基准图像的清晰度与所述待评估图像的清晰度之差是否大于或等于第二清晰度阈值,其中,所述第二清晰度阈值大于所述第一清晰度阈值;
    若所述基准图像的清晰度与所述待评估图像的清晰度之差大于或等于所述第二清晰度阈值,则确定所述基准图像的清晰度与所述待评估图像的清晰度满足第二预设条件。
  22. 根据权利要求13所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,则存储所述待评估图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,则存储所述待评估图像。
  23. 根据权利要求13所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,则对所述待评估图像进行病理分析;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,则对所述待评估图像进行所述病理分析。
  24. 根据权利要求13所述的方法,其中,所述根据所述病理图像集合确定所述待评估图像所对应的第一状态之后,所述方法还包括:
    若所述第一状态为移动状态转换至静止状态,或所述第一状态为静止状态转换至移动状态,或所述第一状态为移动状态,则传输所述待评估图像;
    所述根据所述病理图像集合确定所述待评估图像所对应的第二状态之后,所述方法还包括:
    若所述第二状态为模糊状态转换至清晰状态,或所述第二状态为清晰状态转换至模糊状态,则传输所述待评估图像。
  25. 一种智能显微镜系统,所述智能显微镜系统包括图像采集模块,图像处理分析模块、病理分析模块、存储模块以及传输模块;
    其中,所述图像采集模块,配置为获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;
    所述图像处理分析模块,配置为根据所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;
    若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况;
    所述存储模块,配置为若所述第一状态为移动状态转换至静止状态,则存储所述待评估图像;
    若所述第二状态为模糊状态转换至清晰状态,则存储所述待评估图像;
    所述病理分析模块,配置为若所述第一状态为移动状态转换至静止状态,则对所述待评估图像进行病理分析;
    若所述第二状态为模糊状态转换至清晰状态,则对所述待评估图像进行所述病理分析;
    所述传输模块,配置为若所述第一状态为移动状态转换至静止状态,或所述第一状态为静止状态转换至移动状态,或所述第一状态为移动状态,则传输所述待评估图像;
    若所述第二状态为模糊状态转换至清晰状态,或所述第二状态为清晰状态转换至模糊状态,则传输所述待评估图像。
  26. 一种图像状态确定装置,包括:
    获取模块,配置为获取病理图像集合,其中,所述病理图像集合至少包括待评估图像以及关联图像,所述关联图像与所述待评估图像为连续的帧图像;
    确定模块,配置为根据所述获取模块获取的所述病理图像集合确定所述待评估图像所对应的第一状态,其中,所述第一状态用于表示所述待评估图像的移动变化情况;
    所述确定模块,还配置为若所述第一状态为静止状态,则根据所述病理图像集合确定所述待评估图像所对应的第二状态,其中,所述第二状态用于表示所述待评估图像的清晰度变化情况。
  27. 一种终端设备,包括:存储器以及处理器;
    其中,所述存储器,配置为存储计算机程序;
    所述处理器,配置为执行所述存储器中的计算机程序时,执行权利要求1至12任一项所述的基于病理图像的图像状态确定方法。
  28. 一种计算机存储介质,所述计算机存储介质存储有计算机程序,所述计算机程序用于执行权利要求1至12任一项所述的基于病理图像的图像状态确定方法。
PCT/CN2020/090007 2019-05-29 2020-05-13 图像状态确定方法、装置、设备、系统及计算机存储介质 WO2020238626A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20814634.0A EP3979194A4 (en) 2019-05-29 2020-05-13 IMAGE STATE DETERMINATION METHOD AND DEVICE, APPARATUS, SYSTEM AND COMPUTER RECORDING MEDIUM
US17/373,416 US11921278B2 (en) 2019-05-29 2021-07-12 Image status determining method an apparatus, device, system, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910457380.4 2019-05-29
CN201910457380.4A CN110175995B (zh) 2019-05-29 2019-05-29 一种基于病理图像的图像状态确定方法、装置以及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/373,416 Continuation US11921278B2 (en) 2019-05-29 2021-07-12 Image status determining method an apparatus, device, system, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020238626A1 true WO2020238626A1 (zh) 2020-12-03

Family

ID=67696079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090007 WO2020238626A1 (zh) 2019-05-29 2020-05-13 图像状态确定方法、装置、设备、系统及计算机存储介质

Country Status (4)

Country Link
US (1) US11921278B2 (zh)
EP (1) EP3979194A4 (zh)
CN (2) CN110443794B (zh)
WO (1) WO2020238626A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071657A (zh) * 2023-03-07 2023-05-05 青岛旭华建设集团有限公司 一种建筑施工视频监控大数据的智能预警系统
WO2023133223A1 (en) * 2022-01-05 2023-07-13 Merative Us L.P. Medical image study difficulty estimation

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443794B (zh) 2019-05-29 2020-12-18 腾讯科技(深圳)有限公司 一种基于病理图像的图像状态确定方法、装置以及系统
CN111126460B (zh) * 2019-12-10 2024-04-02 福建省高速公路集团有限公司 基于人工智能的路面病害自动巡检方法、介质、设备及装置
CN111143601A (zh) * 2019-12-31 2020-05-12 深圳市芭田生态工程股份有限公司 一种图像处理方法
CN111951261A (zh) * 2020-08-24 2020-11-17 郑州中普医疗器械有限公司 离体生物样本检查过程的控制方法、计算机设备和控制系统
CN112241953B (zh) * 2020-10-22 2023-07-21 江苏美克医学技术有限公司 基于多聚焦图像融合和hdr算法的样本图像融合方法及装置
CN112957062B (zh) * 2021-05-18 2021-07-16 雅安市人民医院 基于5g传输的车载ct成像系统及成像方法
CN116681702B (zh) * 2023-08-03 2023-10-17 山东华光新材料技术有限公司 一种光纤预制棒的一次拉伸评估方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7885443B2 (en) * 2005-11-14 2011-02-08 Hologic, Inc. Facilitating temporal comparison of medical images
CN102314680A (zh) * 2010-06-29 2012-01-11 索尼公司 图像处理装置、图像处理系统、图像处理方法和程序
CN104144345A (zh) * 2013-09-18 2014-11-12 腾讯科技(深圳)有限公司 在移动终端进行实时图像识别的方法及该移动终端
CN105118088A (zh) * 2015-08-06 2015-12-02 曲阜裕隆生物科技有限公司 一种基于病理切片扫描装置的3d成像及融合的方法
CN110175995A (zh) * 2019-05-29 2019-08-27 腾讯科技(深圳)有限公司 一种基于病理图像的图像状态确定方法、装置以及系统

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672369B2 (en) * 2002-02-13 2010-03-02 Reify Corporation Method and apparatus for acquisition, compression, and characterization of spatiotemporal signals
CA2595248A1 (en) * 2005-01-18 2006-07-27 Trestle Corporation System and method for creating variable quality images of a slide
JP5047764B2 (ja) * 2007-12-06 2012-10-10 オリンパス株式会社 顕微鏡用撮影装置
EP2239612A1 (de) * 2009-04-06 2010-10-13 Carl Zeiss Surgical GmbH Verfahren und Vorrichtung zum Extrahieren von Standbildern aus Bildaufnahmen eines Operationsmikroskops
CN101702053B (zh) * 2009-11-13 2012-01-25 长春迪瑞实业有限公司 一种尿沉渣检验设备中显微镜系统的自动聚焦方法
CN101706609B (zh) * 2009-11-23 2012-05-02 常州超媒体与感知技术研究所有限公司 基于图像处理的显微镜快速自动聚焦方法
CN102221550B (zh) * 2011-05-30 2013-05-29 成都西图科技有限公司 一种岩心岩屑图像采集装置及图像采集、处理方法
JP6445000B2 (ja) * 2013-10-18 2018-12-26 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Mdxのための画像ベースのroi追跡
CN104680483B (zh) * 2013-11-25 2016-03-02 浙江大华技术股份有限公司 图像的噪声估计方法、视频图像去噪方法及装置
EP3234917B1 (en) * 2014-12-17 2019-08-14 Koninklijke Philips N.V. Method and system for calculating a displacement of an object of interest
WO2016172656A1 (en) * 2015-04-24 2016-10-27 Canfield Scientific, Incorporated Dermatological feature tracking over multiple images
CN106251358B (zh) * 2016-08-08 2019-05-07 珠海赛纳打印科技股份有限公司 一种图像处理方法及装置
CN107767365A (zh) * 2017-09-21 2018-03-06 华中科技大学鄂州工业技术研究院 一种内窥镜图像处理方法及系统
CN108933897B (zh) * 2018-07-27 2020-10-16 南昌黑鲨科技有限公司 基于图像序列的运动检测方法及装置
CN109674494B (zh) * 2019-01-29 2021-09-14 深圳瀚维智能医疗科技有限公司 超声扫查实时控制方法、装置、存储介质及计算机设备
CN109782414B (zh) * 2019-03-01 2021-05-18 广州医软智能科技有限公司 一种基于无参考结构清晰度的自动调焦方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7885443B2 (en) * 2005-11-14 2011-02-08 Hologic, Inc. Facilitating temporal comparison of medical images
CN102314680A (zh) * 2010-06-29 2012-01-11 索尼公司 图像处理装置、图像处理系统、图像处理方法和程序
CN104144345A (zh) * 2013-09-18 2014-11-12 腾讯科技(深圳)有限公司 在移动终端进行实时图像识别的方法及该移动终端
CN105118088A (zh) * 2015-08-06 2015-12-02 曲阜裕隆生物科技有限公司 一种基于病理切片扫描装置的3d成像及融合的方法
CN110175995A (zh) * 2019-05-29 2019-08-27 腾讯科技(深圳)有限公司 一种基于病理图像的图像状态确定方法、装置以及系统
CN110443794A (zh) * 2019-05-29 2019-11-12 腾讯科技(深圳)有限公司 一种基于病理图像的图像状态确定方法、装置以及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3979194A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023133223A1 (en) * 2022-01-05 2023-07-13 Merative Us L.P. Medical image study difficulty estimation
CN116071657A (zh) * 2023-03-07 2023-05-05 青岛旭华建设集团有限公司 一种建筑施工视频监控大数据的智能预警系统

Also Published As

Publication number Publication date
EP3979194A1 (en) 2022-04-06
US20210341725A1 (en) 2021-11-04
CN110443794B (zh) 2020-12-18
US11921278B2 (en) 2024-03-05
CN110443794A (zh) 2019-11-12
CN110175995A (zh) 2019-08-27
CN110175995B (zh) 2021-04-30
EP3979194A4 (en) 2022-08-17

Similar Documents

Publication Publication Date Title
WO2020238626A1 (zh) 图像状态确定方法、装置、设备、系统及计算机存储介质
JP7090836B2 (ja) 結腸ポリープ画像の処理方法並びに、その装置、コンピュータプログラム及びシステム
CN110348543B (zh) 眼底图像识别方法、装置、计算机设备及存储介质
WO2021135601A1 (zh) 辅助拍照方法、装置、终端设备及存储介质
AU2014271202B2 (en) A system and method for remote medical diagnosis
WO2018113512A1 (zh) 图像处理方法以及相关装置
WO2021098609A1 (zh) 图像检测方法、装置及电子设备
US11612314B2 (en) Electronic device and method for determining degree of conjunctival hyperemia by using same
CN114171167B (zh) 图像显示方法、装置、终端及存储介质
US11747900B2 (en) Eye-tracking image viewer for digital pathology
WO2020015149A1 (zh) 一种皱纹检测方法及电子设备
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
CN111598133B (zh) 基于人工智能的图像显示方法、装置、系统、设备及介质
WO2021017713A1 (zh) 拍摄方法及移动终端
WO2022095640A1 (zh) 对图像中的树状组织进行重建的方法、设备及存储介质
CN113724188A (zh) 一种病灶图像的处理方法以及相关装置
CN111540443A (zh) 医学影像显示方法和通信终端
CN115984228A (zh) 胃镜图像处理方法、装置、电子设备及存储介质
EP4113970A1 (en) Image processing method and device
CN114565622B (zh) 房间隔缺损长度的确定方法及装置、电子设备和存储介质
CN114627071A (zh) 视杯视盘分割方法、装置、计算机及存储介质
CN114334114A (zh) 内镜检查术前提醒方法、装置及存储介质
CN113576488A (zh) 基于心率的肺影像组学的确定方法及装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20814634

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020814634

Country of ref document: EP

Effective date: 20220103