WO2022095640A1 - Method, device and storage medium for reconstructing tree-like tissue in an image - Google Patents

Method, device and storage medium for reconstructing tree-like tissue in an image

Info

Publication number
WO2022095640A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
model
feature
reconstruction
Application number
PCT/CN2021/121600
Other languages
English (en)
French (fr)
Inventor
卢东焕
马锴
郑冶枫
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to EP21888337.9A (published as EP4181061A4)
Publication of WO2022095640A1
Priority to US17/964,705 (published as US20230032683A1)

Classifications

    • G06T 11/001 Texturing; colouring; generation of texture or colour
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/11 Region-based segmentation
    • G06F 18/25 Fusion techniques
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/0475 Generative networks
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 3/094 Adversarial learning
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/764 Recognition or understanding using classification, e.g. of video objects
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82 Recognition or understanding using neural networks
    • G06T 2207/10012 Stereo images
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; image merging
    • G06T 2207/30016 Brain
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • The embodiments of the present application relate to the field of computer technology, and in particular to a method, a device, and a storage medium for reconstructing tree-like tissue in an image.
  • Dendritic (tree-like) tissue refers to tissue in an organism that has a tree-like structure, for example, neurons or blood vessels in the human body.
  • Reconstructing the dendritic tissue in an image refers to marking the dendritic tissue in an image that contains it, so as to obtain a reconstruction result of the dendritic tissue.
  • Reconstructing the tree-like tissue in an image can provide key data for the realization of artificial intelligence.
  • When the dendritic tissue in an image is reconstructed manually by an annotator, reconstruction efficiency is low, and the reliability of the obtained reconstruction result is poor.
  • The embodiments of the present application provide a method, a device, and a storage medium for reconstructing dendritic tissue in an image, which can improve the efficiency of reconstructing dendritic tissue in an image.
  • In one aspect, an embodiment of the present application provides a method for reconstructing dendritic tissue in an image, the method comprising:
  • obtaining a target image corresponding to a target tree-like tissue, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image, the reconstruction reference data being determined based on a local reconstruction result of the target tree-like tissue in the target image;
  • calling a target segmentation model to obtain, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image, where the target segmentation result indicates the target category of each pixel in the target image, and the target category of any pixel indicates whether that pixel belongs to the target tree-like tissue;
  • In another aspect, an apparatus for reconstructing dendritic tissue in an image comprises:
  • a first obtaining unit, configured to obtain a target image corresponding to a target tree-like tissue, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image, the reconstruction reference data being determined based on a local reconstruction result of the target tree-like tissue in the target image;
  • a second obtaining unit, configured to call a target segmentation model and obtain, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image, where the target segmentation result indicates the target category of each pixel in the target image, and the target category of any pixel indicates whether that pixel belongs to the target tree-like tissue;
  • a reconstruction unit, configured to reconstruct the target tree-like tissue in the target image based on the target segmentation result, to obtain a complete reconstruction result of the target tree-like tissue in the target image.
  • In another aspect, a computer device includes a processor and a memory, the memory storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to implement any of the above methods for reconstructing dendritic tissue in an image.
  • A computer-readable storage medium is also provided, the computer-readable storage medium storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement any of the above methods for reconstructing dendritic tissue in an image.
  • A computer program product or computer program is also provided, comprising computer instructions stored in a computer-readable storage medium.
  • The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs any of the above methods for reconstructing the tree-like tissue in an image.
  • The target segmentation result corresponding to the target image is obtained automatically based on the original image data and the reconstruction reference data corresponding to the target image, and the complete reconstruction result of the target tree-like tissue in the target image is then obtained automatically based on the target segmentation result.
  • Automatic reconstruction of the dendritic tissue can thus be realized without relying on manual labor, which improves the efficiency of reconstructing the dendritic tissue in the image, and the obtained reconstruction results have higher reliability.
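The overall flow described above can be sketched minimally as follows. This is an illustration only, not the patent's actual model: the function and parameter names (`segment`, `model`, `threshold`) are assumptions, and a stand-in callable replaces the trained target segmentation model. The raw image data and the reconstruction reference data are stacked as two input channels, the model predicts a per-pixel probability, and thresholding yields the binary target category of each pixel.

```python
import numpy as np

def segment(raw, reference, model, threshold=0.5):
    """Hypothetical sketch: combine raw image data and reconstruction
    reference data as two channels, run a segmentation model, and
    threshold the per-pixel probabilities into a binary category map."""
    x = np.stack([raw, reference], axis=0)   # (2, D, H, W) two-channel input
    prob = model(x)                          # stand-in for the trained model
    return (prob >= threshold).astype(np.uint8)

# Usage with a dummy "model" that just averages the two channels:
raw = np.random.rand(8, 8, 8)
ref = np.zeros((8, 8, 8))
mask = segment(raw, ref, model=lambda x: x.mean(axis=0))
```

A real implementation would replace the lambda with a trained 3D segmentation network; the two-channel stacking is one plausible way to feed both inputs jointly.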
  • FIG. 1 is a schematic diagram of an implementation environment of a method for reconstructing a dendritic tissue in an image provided by an embodiment of the present application;
  • FIG. 2 is a flowchart of a method for reconstructing a dendritic tissue in an image provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an image including neurons provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a process for obtaining a target segmentation result corresponding to a target image provided by an embodiment of the present application
  • FIG. 5 is a flowchart of a process for obtaining target reconstruction confidence information provided by an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of a target classification model provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a process for reconstructing a tree-like tissue in an image provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a target image including different marking results provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a SWC file for storing a complete reconstruction result of a neuron provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a three-dimensional point cloud data provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a device for reconstructing a dendritic tissue in an image provided by an embodiment of the present application
  • FIG. 12 is a schematic diagram of another apparatus for reconstructing a dendritic tissue in an image provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an implementation environment of the method for reconstructing dendritic tissue in an image provided by the embodiments of the present application.
  • the implementation environment includes: a terminal 11 and a server 12 .
  • the method for reconstructing a tree-like organization in an image provided by the embodiment of the present application is executed by the terminal 11 or by the server 12, which is not limited in the embodiment of the present application.
  • The terminal 11 can display the complete reconstruction result of the target tree-like tissue in the target image.
  • the terminal 11 can also send the complete reconstruction result of the target tree-like organization in the target image to the server 12 for storage.
  • The server 12 can send the complete reconstruction result of the dendritic tissue in the target image to the terminal 11 for display.
  • the terminal 11 is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • The server 12 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
  • the terminal 11 and the server 12 are directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • The terminal 11 and the server 12 are only examples. Other existing or future terminals or servers that are applicable to this application should also be included in the protection scope of this application and are incorporated herein by reference.
  • An embodiment of the present application provides a method for reconstructing tree-like tissue in an image; the following takes application of the method to the server 12 as an example.
  • the method provided by this embodiment of the present application includes the following steps 201 to 203 .
  • In step 201, a target image corresponding to the target tree-like tissue, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image are obtained, where the reconstruction reference data is determined based on the local reconstruction result of the target tree-like tissue in the target image.
  • the target dendritic tissue refers to any dendritic tissue to be reconstructed.
  • a dendritic tissue refers to a tissue having a tree-like structure in an organism, and the type of the dendritic tissue to be reconstructed is not limited in the embodiments of the present application.
  • For example, the dendritic tissue to be reconstructed refers to a neuron in the human body, or to blood vessels and the like in the human body.
  • neurons are the basic units that constitute the structure and function of the nervous system.
  • the reconstruction of the dendritic tissue refers to the reconstruction of neurons. Reconstruction of neurons is one of the keys to establishing big data in brain science and understanding human intelligence and emotion.
  • The target image corresponding to the target tree-like tissue refers to an image that contains the complete or partial target tree-like tissue, where the contained complete or partial target tree-like tissue has not yet been completely reconstructed.
  • The complete or partial target tree-like tissue contained in the target image can then be completely reconstructed, so as to obtain the complete reconstruction result of the target tree-like tissue in the target image.
  • the target image is obtained from an initial image containing a complete target tree structure, and the initial image may be a two-dimensional image or a three-dimensional image, which is not limited in this embodiment of the present application. In one embodiment, the target image may be obtained from an initial image containing part of the target tree.
  • the initial image containing the target neuron is a three-dimensional brain image, that is, the target image is obtained from a three-dimensional brain image containing the target neuron.
  • the initial image containing the target blood vessel is a three-dimensional image of the blood vessel, that is, the target image is obtained from the three-dimensional image of the blood vessel containing the target blood vessel.
  • the process of acquiring the target image corresponding to the target tree organization includes the following steps 2011 to 2013 .
  • Step 2011: obtain the initial reconstruction result of the target tree-like tissue in the initial image.
  • the initial reconstruction result of the target dendritic tissue in the initial image refers to the result obtained after preliminary reconstruction of the target dendritic tissue in the initial image.
  • This embodiment of the present application does not limit the representation of the initial reconstruction result of the target tree-like organization in the initial image.
  • For example, the initial reconstruction result of the target tree-like tissue in the initial image refers to an image marked with the initial reconstruction nodes and the connection relationships between the initial reconstruction nodes; or, the initial reconstruction result of the target tree-like tissue in the initial image refers to a file including relevant data of the initial reconstruction nodes.
  • the relevant data of any initial reconstruction node includes, but is not limited to, the position data of the any initial reconstruction node in the initial image, the association data between the any initial reconstruction node and other initial reconstruction nodes, and the like.
  • the association data between any initial reconstruction node and other initial reconstruction nodes is used to indicate the connection relationship between any initial reconstruction node and other initial reconstruction nodes.
  • The initial reconstruction node refers to a reconstruction node marked after preliminary reconstruction of the target tree-like tissue in the initial image. It should be noted that each initial reconstruction node corresponds to a pixel in the initial image that belongs to the target tree-like tissue.
  • the initial reconstruction result of the target tree-like structure in the initial image is a result obtained after preliminary reconstruction of the target tree-like structure starts from a starting pixel point in the initial image.
  • This embodiment of the present application does not limit the position of the starting pixel point.
  • For example, if the initial image is a three-dimensional image in a three-dimensional coordinate system and a corner of the initial image is located at the origin of the coordinate system, then the starting pixel refers to the pixel whose three-dimensional coordinates in the initial image are (0, 0, 0).
  • the manners of obtaining the initial reconstruction result of the target dendritic tissue in the initial image include but are not limited to the following three.
  • Method 1: directly extract the stored initial reconstruction result of the target tree-like tissue in the initial image.
  • Method 1 applies when the initial reconstruction result of the target tree-like tissue in the initial image has been acquired and stored in advance.
  • Method 2: obtain the initial reconstruction result of the target tree-like tissue in the initial image through manual annotation.
  • Method 2 applies when the initial reconstruction result of the target tree-like tissue in the initial image has not been acquired and stored in advance.
  • If the initial reconstruction result of the target dendritic tissue in the initial image has not been acquired and stored in advance, the initial reconstruction nodes of the target dendritic tissue and the connection relationships between the initial reconstruction nodes need to be marked manually in the initial image, to obtain the initial reconstruction result of the target dendritic tissue in the initial image.
  • The process of manually marking the initial reconstruction nodes and their connection relationships is: successively determining, in the initial image, k pixels belonging to the target dendritic tissue (k is an integer greater than 1), marking the k pixels as initial reconstruction nodes of the target tree-like tissue, and establishing the connection relationships between the marked initial reconstruction nodes according to the overall trend of the target tree-like tissue.
  • whether each pixel belongs to the target tree structure is sequentially determined from the starting pixel point in the initial image.
  • Method 3: obtain the initial reconstruction result of the target tree-like tissue in the initial image from a third party.
  • Method 3 applies when a third party, that is, a service provider that provides initial reconstruction results, stores the initial reconstruction result of the target tree-like tissue in the initial image.
  • Step 2012: based on the initial reconstruction result of the target tree-like tissue in the initial image, determine the pixels in the initial image that belong to the target tree-like tissue, and determine, among those pixels, the pixel that satisfies a condition.
  • Based on the initial reconstruction result, the initial reconstruction nodes of the target tree-like tissue that have already been reconstructed in the initial image can be determined, and each initial reconstruction node corresponds to one pixel in the initial image that belongs to the target tree-like tissue. The pixels corresponding to these initial reconstruction nodes are taken as the pixels belonging to the target tree-like tissue, and the pixel that satisfies the condition is then determined from among them.
  • The condition to be satisfied is set according to experience or flexibly adjusted according to the application scenario, which is not limited in this embodiment of the present application.
  • For example, the pixel that satisfies the condition refers to the pixel, among the pixels belonging to the target tree-like tissue, that is farthest from the starting pixel of the initial image.
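The "farthest pixel" example of the condition can be sketched as below. This is an illustration, not the patent's code: the helper name `farthest_reconstructed_pixel` and the choice of the origin as the starting pixel are assumptions.

```python
import math

def farthest_reconstructed_pixel(reconstructed_pixels, start=(0, 0, 0)):
    """Among the (x, y, z) pixels already reconstructed as part of the
    target tree-like tissue, return the one with the maximum Euclidean
    distance from the starting pixel `start`."""
    return max(reconstructed_pixels, key=lambda p: math.dist(p, start))

# Example: (30, 40, 0) is farther from the origin than the other pixels.
center = farthest_reconstructed_pixel([(1, 2, 3), (30, 40, 0), (5, 5, 5)])
```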
  • Step 2013: taking the pixel that satisfies the condition as the center point, crop an image of the target size from the initial image as the target image.
  • The target size is set according to experience, or flexibly adjusted according to available computing resources, which is not limited in this embodiment of the present application.
  • For example, the target size is 32×32×32 (in pixels).
  • The target image is obtained by cropping an image of the target size from the initial image, with the pixel that satisfies the condition as the center point.
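Step 2013 can be sketched as a centered crop from the 3D volume. This is a hypothetical sketch (the function name, the zero-padding of out-of-bounds regions, and the even-size handling are assumptions the patent does not specify):

```python
import numpy as np

def crop_target_image(volume, center, size=32):
    """Crop a size x size x size cube from a 3D volume, centered on
    `center`; regions falling outside the volume are zero-padded."""
    half = size // 2
    patch = np.zeros((size, size, size), dtype=volume.dtype)
    src, dst = [], []
    for c, dim in zip(center, volume.shape):
        lo, hi = c - half, c + half
        src.append(slice(max(lo, 0), min(hi, dim)))          # region inside the volume
        dst.append(slice(max(lo, 0) - lo, size - (hi - min(hi, dim))))
    patch[tuple(dst)] = volume[tuple(src)]
    return patch
```

For a center well inside the volume this is a plain 32×32×32 crop; near a boundary the missing voxels are filled with zeros so the output shape is always the target size.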
  • the above steps 2011 to 2013 are only an exemplary description of the manner of acquiring the target image corresponding to the target tree organization, and the embodiments of the present application are not limited thereto.
  • For example, the target image may be acquired by determining any pixel belonging to the target tree-like tissue in the initial image and cropping an image of the target size centered on that pixel as the target image.
  • Alternatively, the target image may be acquired by determining a specified pixel belonging to the target tree-like tissue in the initial image and cropping an image of the target size centered on that specified pixel as the target image, the specified pixel being preset.
  • the target image is a partial image in the initial image.
  • The 3D image corresponding to a neuron occupies a large amount of space, and the neuron itself is very sparse in the image (as shown in FIG. 3), so the 3D image contains a large amount of redundant information, which easily leads to low reconstruction efficiency and poor reconstruction accuracy when reconstructing from the full image. Therefore, in the embodiments of the present application, reconstruction of the dendritic tissue is performed on local images, which improves reconstruction efficiency and reconstruction accuracy.
  • After the target image is acquired, the original image data corresponding to the target image and the reconstruction reference data corresponding to the target image are further acquired.
  • The following respectively introduces how the original image data and the reconstruction reference data corresponding to the target image are acquired.
  • the original image data corresponding to the target image is used to characterize the original image features of the target image.
  • The original image data corresponding to the target image is acquired as follows: acquire the grayscale feature of each pixel in the target image, and determine the original image data corresponding to the target image based on the grayscale features of the pixels.
  • The grayscale feature of any pixel is determined based on the grayscale value of that pixel in the target image. For example, the grayscale value of the pixel in the target image is used directly as its grayscale feature; or, the grayscale value of the pixel in the target image is normalized, and the normalized value is used as its grayscale feature.
  • the raw image data corresponding to the target image is three-dimensional data
  • the three-dimensional raw image data includes grayscale features of each pixel in the target image.
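Building the original image data can be sketched as below. This is an assumption-laden illustration: the patent does not fix a normalization scheme, so min-max scaling to [0, 1] is used here as one plausible choice, and the function name is hypothetical.

```python
import numpy as np

def original_image_data(target_image):
    """Return the 3D array of grayscale features for a target image:
    each pixel's grayscale value min-max normalized to [0, 1]."""
    v = target_image.astype(np.float32)
    lo, hi = v.min(), v.max()
    if hi == lo:                      # flat patch: avoid division by zero
        return np.zeros_like(v)
    return (v - lo) / (hi - lo)
```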
  • the reconstruction reference data corresponding to the target image is used to provide data reference for the reconstruction process of the target tree structure in the target image.
  • the reconstruction reference data is determined based on the local reconstruction results of the target tree organization in the target image. That is to say, before obtaining the reconstruction reference data corresponding to the target image, it is necessary to obtain the local reconstruction result of the target tree-like organization in the target image.
  • the local reconstruction result of the target tree structure in the target image can provide a data reference for the reconstruction process of obtaining the complete reconstruction result of the target tree structure in the target image.
  • The local reconstruction result of the target tree-like tissue in the target image is acquired as follows: in the initial reconstruction result of the target tree-like tissue in the initial image, determine the initial reconstruction result corresponding to the target image; then, based on the initial reconstruction result corresponding to the target image, determine the local reconstruction result of the target tree-like tissue in the target image.
  • In some embodiments, the local reconstruction result of the target tree-like tissue in the target image is determined by taking the initial reconstruction result corresponding to the target image as the local reconstruction result.
  • In other embodiments, the local reconstruction result of the target tree-like tissue in the target image is determined by acquiring the incremental reconstruction result of the target tree-like tissue in the target image and taking the aggregate of the initial reconstruction result corresponding to the target image and the incremental reconstruction result as the local reconstruction result.
  • in some cases, the incremental reconstruction result of the target tree-like organization in the target image is further obtained, and the aggregate of the initial reconstruction result corresponding to the target image and the incremental reconstruction result is used as the local reconstruction result that provides data reference for the automatic reconstruction process.
  • the incremental reconstruction result of the target tree-like organization in the target image is obtained by manually performing additional marking on the basis of the initial reconstruction result corresponding to the target image.
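The aggregation of the initial and incremental reconstruction results described above can be sketched as a set union over labeled voxels. This is a minimal illustration only; the voxel-set representation and the function name are assumptions of this sketch, not taken from the patent.

```python
def aggregate_reconstruction(initial_voxels, incremental_voxels):
    """Local reconstruction result as the aggregate of the initial
    reconstruction result and the manually marked incremental result.
    Voxels are (z, y, x) tuples; reading the "aggregated result" as a
    set union is an assumption of this sketch."""
    return set(initial_voxels) | set(incremental_voxels)

# Two voxels from the initial result plus one manually marked voxel:
local = aggregate_reconstruction({(0, 0, 0), (0, 0, 1)}, {(0, 0, 2)})
```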
  • the reconstruction reference data corresponding to the target image includes a reconstruction reference feature for each pixel in the target image; the reconstruction reference feature of any pixel indicates whether that pixel is a reconstruction reference pixel, determined based on the local reconstruction result, that belongs to the target tree-like organization.
  • the reconstruction reference data corresponding to the target image is obtained as follows: based on the local reconstruction result, determine which pixels in the target image are reconstruction reference pixels belonging to the target tree-like organization; binarize the reconstruction reference pixels and the remaining pixels to obtain the reconstruction reference feature of each pixel in the target image; use the data comprising the reconstruction reference features of all pixels as the reconstruction reference data corresponding to the target image.
  • based on the local reconstruction result, it can be accurately determined that the reconstruction reference pixels belong to the target tree-like organization, but it cannot be determined whether the remaining pixels belong to it; that is, the other pixels may or may not actually belong to the target tree-like organization.
  • in the reconstruction reference data, the other pixels are provisionally assumed not to belong to the target tree-like organization.
  • the categories of the other pixels may change in subsequent processing.
  • the reconstruction reference pixels and other pixels determined based on the local reconstruction result are binarized, and the reconstruction reference feature of each pixel in the target image is obtained as follows:
  • the determined reconstruction reference pixels are assigned a first value, and all other pixels are assigned a second value;
  • that is, the reconstruction reference feature of each reconstruction reference pixel in the target image is the first value, and the reconstruction reference feature of every other pixel in the target image is the second value.
  • the first numerical value and the second numerical value are set according to experience, or flexibly adjusted according to an application scenario, which is not limited in this embodiment of the present application. Illustratively, the first value is 1 and the second value is 0.
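The binarization described above can be sketched as follows. This is an illustrative Python/NumPy sketch only; the function name and the voxel-coordinate representation are assumptions, not details from the patent.

```python
import numpy as np

def build_reference_features(shape, reference_voxels, first_value=1, second_value=0):
    """Binarize a local reconstruction result: voxels known to belong to
    the target tree-like organization receive the first value, and all
    other voxels receive the second value (here 1 and 0, matching the
    illustrative values given above)."""
    ref = np.full(shape, second_value, dtype=np.uint8)
    for z, y, x in reference_voxels:
        ref[z, y, x] = first_value
    return ref

# A 4x4x4 target image with two reconstruction reference voxels:
ref = build_reference_features((4, 4, 4), [(0, 1, 2), (3, 3, 3)])
```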
  • the reconstruction reference data corresponding to the target image can intuitively indicate which pixels in the target image are reconstruction reference pixels, determined based on the local reconstruction result, that belong to the target dendritic tissue, thereby providing a data reference for the subsequent reconstruction process of the target dendritic tissue.
  • the reconstructed reference data of the target image is three-dimensional data.
  • the three-dimensional reconstruction reference data includes reconstruction reference features of each pixel point.
  • taking neurons as an example of dendritic tissue, as shown in Figure 3, different neurons may lie very close together. If reconstruction is performed directly on the initial image data, multiple nearby neurons may be reconstructed as a single neuron, resulting in poor reconstruction accuracy.
  • by acquiring reconstruction reference data for a single neuron, the embodiment of the present application provides a strong data reference for accurately reconstructing that neuron in the target image, so that reconstruction can be performed accurately for the neuron in the target image.
  • in step 202, the target segmentation model is invoked, and based on the original image data and the reconstructed reference data, a target segmentation result corresponding to the target image is obtained; the target segmentation result is used to indicate the target category of each pixel in the target image.
  • the target category of any pixel point is used to indicate that any pixel point belongs to the target tree-like organization or that any pixel point does not belong to the target tree-like organization.
  • the original image data and the reconstructed reference data are input into the target segmentation model for segmentation processing, and the target segmentation result corresponding to the target image is obtained.
  • the target segmentation result is used to indicate the target category of each pixel in the target image, and the target category of any pixel is used to indicate whether any pixel belongs to the target tree organization.
  • the target category of any pixel point here refers to the actual category of any pixel point obtained by calling the target segmentation model on the basis of the original image data and reconstructed reference data corresponding to the target image.
  • the target category of a reconstruction reference pixel indicates that it belongs to the target tree structure; for any other pixel (that is, any pixel other than the reconstruction reference pixels indicated by the reconstruction reference data, which are determined from the local reconstruction result to belong to the target tree-like organization), the target category may indicate either that the pixel belongs to the target tree structure or that it does not.
  • the target segmentation result corresponding to the target image can indicate whether each pixel in the target image actually belongs to the target tree structure, the target segmentation result can provide direct data support for the automatic reconstruction of the target tree structure.
  • the inputs to the target segmentation model are raw image data and reconstructed reference data.
  • the original image data and the reconstructed reference data can be regarded as two-channel image data. That is to say, the input of the target segmentation model is two-channel image data, one of the two channels is the original image feature channel, and the other channel of the two channels is the reconstructed reference feature channel.
  • the data corresponding to the original image feature channel is the original image data, and the data corresponding to the reconstructed reference feature channel is the reconstructed reference data.
  • the input of the target segmentation model is two-channel three-dimensional image data, where all three dimensions are spatial coordinates; one of the two channels is the three-dimensional original image feature channel, and the other is the three-dimensional reconstruction reference feature channel.
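The two-channel input described above can be sketched by stacking the two volumes along a new channel axis. The shapes below are illustrative placeholders (NumPy is used only for demonstration; the patent does not prescribe an array library or layout):

```python
import numpy as np

# Hypothetical 32x32x32 volume; the values here are placeholders.
raw_image_data = np.random.rand(32, 32, 32).astype(np.float32)  # original image feature channel
reference_data = np.zeros((32, 32, 32), dtype=np.float32)       # reconstruction reference feature channel

# Stack along a new leading channel axis to form the two-channel
# 3D input (channel 0: original image features, channel 1:
# reconstruction reference features).
model_input = np.stack([raw_image_data, reference_data], axis=0)
print(model_input.shape)  # (2, 32, 32, 32)
```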
  • the embodiment of the present application does not limit the model structure of the target segmentation model, as long as the segmentation of each pixel in the target image can be achieved based on the original image data and the reconstructed reference data.
  • the model structure of the target segmentation model is a 3D-UNet (three-dimensional U-shaped network) structure.
  • the process of invoking the target segmentation model and obtaining the target segmentation result corresponding to the target image based on the original image data and the reconstructed reference data includes the following steps 2021 to 2023 .
  • Step 2021 Invoke the target segmentation model, and sequentially perform a first reference number of downsampling operations based on the fusion data of the original image data and the reconstructed reference data, to obtain the first target feature corresponding to the target image.
  • the fusion data of the original image data and the reconstructed reference data refers to data obtained by fusing the original image data and the reconstructed reference data, and the embodiment of the present application does not limit the method of fusing the original image data and the reconstructed reference data.
  • the target segmentation model includes a data fusion layer, and the process of fusing the original image data and the reconstructed reference data is performed in the data fusion layer in the target segmentation model.
  • a first reference number of downsampling processes are sequentially performed based on the fusion data to obtain a first target feature corresponding to the target image.
  • the first target feature corresponding to the target image refers to a deep-level feature obtained by down-sampling the fusion data of the original image data and the reconstructed reference data.
  • the first reference quantity is set according to experience, or flexibly adjusted according to an application scenario, which is not limited in this embodiment of the present application.
  • the first reference number of times is three times, or the first reference number of times is four times.
  • the process of sequentially performing the first reference number of downsampling processes based on the fusion data is realized by reasonably setting the model structure of the target segmentation model.
  • any downsampling process includes a convolution process and a pooling process. Taking the first reference number as three times as an example, based on the fusion data of the original image data and the reconstructed reference data, the down-sampling process of the first reference number is sequentially performed, and the process of obtaining the first target feature corresponding to the target image includes the following steps a to step c.
  • Step a Perform a first convolution process on the fusion data of the original image data and the reconstructed reference data to obtain a first convolution feature corresponding to the target image; perform a first pooling process on the first convolution feature to obtain a first pooling feature corresponding to the target image.
  • each convolutional layer is composed of a convolution function, a BN (Batch Normalization, batch normalization) function, and a ReLU (Rectified Linear Unit, linear rectification unit) function.
  • the cascaded convolutional layers are 3D convolutional layers.
  • the size of the convolution kernel of the convolution layer is not limited in this embodiment of the present application. For example, the size of the convolution kernel of the convolution layer is 3 ⁇ 3 ⁇ 3.
  • a first pooling process is performed on the first convolution feature to reduce the size of the first convolution feature.
  • the first pooling process is as follows: feature extraction is performed on the first convolutional feature through a maximum pooling layer.
  • the kernel size of the max pooling layer is 2 ⁇ 2 ⁇ 2.
  • Step b Perform a second convolution process on the first pooling feature to obtain a second convolution feature corresponding to the target image; perform a second pooling process on the second convolution feature to obtain a second pooling feature corresponding to the target image .
  • Step c Perform convolution processing on the second pooling feature to obtain a third convolution feature corresponding to the target image; perform pooling processing on the third convolution feature to obtain a first target feature corresponding to the target image.
  • step b and step c refer to the implementation manner of step a, which will not be repeated here.
  • the processing parameters of the first convolution processing, the second convolution processing, and the third convolution processing may be the same or different, which are not limited in this embodiment of the present application.
  • the processing parameters of the first convolution process, the second convolution process, and the third convolution process are different, so that the feature dimensions corresponding to the features extracted after different convolution processes are different.
  • the processing parameters of the first pooling process, the second pooling process and the third pooling process are the same, and all are used to reduce the size of the features by the same proportion.
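Assuming size-preserving convolutions and 2×2×2 pooling that halves every spatial dimension (consistent with the kernel sizes given above), the spatial size along the encoder path can be traced as follows. This is an illustrative sketch, not part of the patent:

```python
def encoder_sizes(size, stages=3):
    """Spatial size after each downsampling stage, assuming the
    convolutions preserve spatial size and each 2x2x2 pooling layer
    halves every spatial dimension."""
    sizes = []
    for _ in range(stages):
        size //= 2
        sizes.append(size)
    return sizes

# A 32^3 input shrinks to 16^3, 8^3, then 4^3 over the three stages:
print(encoder_sizes(32))  # [16, 8, 4]
```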
  • Step 2022 Based on the target convolution feature corresponding to the first target feature, perform upsampling processing for the first reference number in sequence to obtain a second target feature corresponding to the target image.
  • the target convolution feature corresponding to the first target feature refers to a feature obtained by performing convolution processing on the first target feature.
  • the process of performing convolution processing on the first target feature is determined by the model structure of the target segmentation model, which is not limited in this embodiment of the present application.
  • the process of performing convolution processing on the first target feature is: performing feature extraction on the first target feature through two cascaded convolution layers.
  • based on the target convolution feature, a first reference number of upsampling operations are performed in sequence to obtain the second target feature corresponding to the target image.
  • the number of times of the upsampling process performed based on the target convolution feature here is the same as the number of times of the downsampling process performed sequentially based on the fusion data of the original image data and the reconstruction parameter data in step 2021 .
  • the number of times of the upsampling process performed based on the target convolution feature and the number of times of the downsampling process performed sequentially based on the fusion data of the original image data and the reconstruction parameter data in step 2021 may also be different.
  • any upsampling process includes a deconvolution process and a convolution process.
  • taking the first reference number as three, the process of sequentially performing the first reference number of upsampling operations based on the target convolution feature corresponding to the first target feature, to obtain the second target feature corresponding to the target image, includes the following steps A to C.
  • Step A Perform a first deconvolution process on the target convolution feature corresponding to the first target feature to obtain a first upsampling feature corresponding to the target image; perform a fourth convolution process on the concatenated feature of the first upsampling feature and the third convolution feature to obtain the fourth convolution feature corresponding to the target image.
  • the size of the target convolution feature can be enlarged and the feature dimension can be reduced.
  • the embodiments of the present application do not limit the implementation manner of the first deconvolution processing.
  • the first deconvolution processing is implemented as follows: deconvolution is performed on the target convolution feature through a deconvolution layer to obtain the first upsampling feature.
  • the first upsampling feature and the third convolution feature obtained in step c of step 2021 have the same size and feature dimension, so they can be concatenated to obtain the concatenated feature of the first upsampling feature and the third convolution feature.
  • the concatenated feature of the first upsampling feature and the third convolutional feature is a feature obtained by concatenating the first upsampling feature and the third convolutional feature in the feature dimension.
  • the fourth convolution processing method is: performing feature extraction on the concatenated features of the first up-sampling feature and the third convolution feature through two concatenated convolution layers.
  • Step B performing a second deconvolution process on the fourth convolution feature to obtain a second upsampling feature corresponding to the target image; performing a fifth convolution process on the splicing feature of the second upsampling feature and the second convolution feature, The fifth convolution feature corresponding to the target image is obtained.
  • Step C performing a third deconvolution process on the fifth convolution feature to obtain a third upsampling feature corresponding to the target image; performing a sixth convolution process on the splicing feature of the third upsampling feature and the first convolution feature, The second target feature corresponding to the target image is obtained.
  • the processing parameters of the first deconvolution process, the second deconvolution process and the third deconvolution process are different, so that the feature dimensions of the features obtained after different deconvolution processes are different.
  • the processing parameters of the fourth convolution process, the fifth convolution process and the sixth convolution process are different, so that the feature dimensions of the features obtained after different convolution processes are different.
  • Step 2023 Perform target convolution processing on the second target feature to obtain a target segmentation result corresponding to the target image.
  • target convolution processing is performed on the second target feature to obtain a target segmentation result corresponding to the target image.
  • the process of performing target convolution processing on the second target feature is determined by the model structure of the target segmentation model, which is not limited in this embodiment of the present application.
  • the target convolution processing performed on the second target feature differs from the convolution processing performed on other features: performing the target convolution processing on the second target feature yields the target segmentation result indicating the target category of each pixel.
  • the process of invoking the target segmentation model and obtaining the target segmentation result corresponding to the target image based on the original image data and the reconstructed reference data is shown in FIG. 4 .
  • the first convolution process is performed on the fusion data 401 of the original image data and the reconstructed reference data to obtain the first convolution feature 402 corresponding to the target image;
  • the first pooling process is performed on the first convolution feature 402 to obtain the first pooling feature 403 corresponding to the target image;
  • the second convolution process is performed on the first pooling feature 403 to obtain the second convolution feature 404 corresponding to the target image;
  • the second pooling process is performed on the second convolution feature 404 to obtain the second pooling feature 405 corresponding to the target image;
  • the third convolution process is performed on the second pooling feature 405 to obtain the third convolution feature 406 corresponding to the target image;
  • a third pooling process is performed on the third convolution feature 406 to obtain the first target feature 407 corresponding to the target image;
  • a target convolution feature 408 corresponding to the first target feature is obtained by performing convolution processing on the first target feature 407 .
  • the first deconvolution process is performed on the target convolution feature 408 to obtain the first upsampling feature 409 corresponding to the target image;
  • the fourth convolution process is performed on the splicing feature of the first upsampling feature 409 and the third convolution feature 406,
  • the fourth convolution feature 410 corresponding to the target image is obtained.
  • the third deconvolution process is performed on the fifth convolution feature 412 to obtain the third upsampling feature 413 corresponding to the target image
  • the sixth convolution process is performed on the splicing feature of the third upsampling feature 413 and the first convolution feature 402 , to obtain the second target feature 414 corresponding to the target image.
  • target convolution processing is performed on the second target feature 414 to obtain a target segmentation result 415 corresponding to the target image.
  • the numbers marked on each feature in FIG. 4 represent the feature dimension of each feature.
  • the number 48 marked on the first convolution feature 402 indicates that the feature dimension of the first convolution feature 402 is 48; the number 96 marked on the second convolution feature 404 indicates that the feature dimension of the second convolution feature 404 is 96.
  • the number 2 marked on the target segmentation result 415 indicates that the dimension of the target segmentation result is 2; that is, for each pixel in the target image, the target segmentation result includes a probability value that the pixel belongs to the target tree organization and a probability value that it does not.
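The two probability values per pixel can be obtained from the 2-channel output via a softmax over the channel axis. The patent does not name the final activation, so treating it as a softmax is an assumption of this sketch:

```python
import numpy as np

def per_pixel_probabilities(logits):
    """Channel-wise softmax over a (2, D, H, W) output: for each pixel,
    the probability of belonging vs. not belonging to the target tree
    organization; the two per-pixel values sum to 1."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

logits = np.zeros((2, 4, 4, 4), dtype=np.float32)  # placeholder network output
probs = per_pixel_probabilities(logits)
```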
  • the method further includes: invoking the target classification model, and obtaining target reconstruction confidence information based on the original image data and the target segmentation result.
  • the target reconstruction confidence information is used to indicate the reliability of the complete reconstruction result obtained based on the target segmentation result.
  • the target reconstruction confidence information includes a probability value that the target segmentation result is a correct segmentation result and a probability value that it is an incorrect segmentation result; the sum of these two probability values is 1.
  • if the probability value that the target segmentation result is correct is not less than the probability value that it is incorrect, the complete reconstruction result based on the target segmentation result is considered reliable; otherwise, it is considered unreliable, meaning the complete reconstruction result of the target tree-like organization in the target image, determined based on the target segmentation result, may contain errors and needs to be corrected manually.
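The reliability rule described above can be sketched as follows (the function name is illustrative, not from the patent):

```python
def needs_manual_correction(p_correct, p_incorrect):
    """Reliability rule: the complete reconstruction result is considered
    reliable when the probability that the segmentation is correct is
    not less than the probability that it is incorrect; otherwise manual
    correction is needed."""
    return p_correct < p_incorrect

assert not needs_manual_correction(0.7, 0.3)  # considered reliable
assert needs_manual_correction(0.2, 0.8)      # may contain errors
```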
  • the model structure of the target classification model is a CNN (Convolutional Neural Network, convolutional neural network) structure.
  • the model structure of the target classification model is a 3D-CNN (three-dimensional convolutional neural network) structure.
  • the model structure of the target classification model is a 3D-CNN structure
  • the target classification model is a 3D-VGG11 (3D Visual Geometry Group 11, three-dimensional visual geometry group 11) model. It should be noted that the model structure of the target classification model is not limited to this.
  • the target classification model includes at least one convolutional sub-model, at least one fully-connected sub-model, and one confidence prediction sub-model connected in sequence.
  • the target classification model is invoked, and based on the original image data and the target segmentation result, the process of obtaining target reconstruction confidence information includes the following steps 501 to 505 .
  • Step 501 Input the original image data and the target segmentation result into the first convolution sub-model in the target classification model for processing, and obtain the classification feature output by the first convolution sub-model.
  • the first convolutional sub-model includes at least one convolutional layer and one pooling layer connected in sequence
  • the original image data and the target segmentation result are input into the first convolutional sub-model of the target classification model and processed as follows: the fusion data of the original image data and the target segmentation result is processed by at least one convolutional layer and one pooling layer connected in sequence.
  • the embodiments of the present application do not limit the number of convolutional layers included in the first convolutional sub-model, the size of the convolution kernel of each convolutional layer, the type of the pooling layer, and the kernel size of the pooling layer.
  • the number of convolutional layers included in the first convolutional sub-model is 1, the convolution kernel size of each convolutional layer is 3×3×3, the pooling layer is a max pooling layer, and the kernel size of the pooling layer is 2×2×2.
  • Step 502 Starting from the second convolution sub-model, input the classification feature output by the previous convolution sub-model into the next convolution sub-model for processing, and obtain the classification feature output by the next convolution sub-model.
  • after the classification feature output by the first convolution sub-model is obtained, it is input into the second convolution sub-model for processing to obtain the classification feature output by the second convolution sub-model, and so on, until the classification feature output by the last convolution sub-model is obtained.
  • the embodiment of the present application does not limit the number of convolution sub-models included in the target classification model.
  • the number of convolutional layers included in different convolution sub-models may be the same or different, which is not limited in this embodiment of the present application.
  • the embodiments of the present application do not limit the setting methods of the processing parameters of the convolution layer and the pooling layer, and different processing parameters can obtain features of different dimensions.
  • Step 503 Input the classification feature output by the last convolution sub-model into the first fully-connected sub-model for processing, and obtain the fully-connected feature output by the first fully-connected sub-model.
  • after the classification feature output by the last convolution sub-model is obtained, it is used as the input of the first fully-connected sub-model, which processes it to obtain the fully-connected feature output by the first fully-connected sub-model.
  • the first fully-connected sub-model includes a fully-connected layer, and the classification features output by the last convolutional sub-model are processed through the fully-connected layer.
  • This embodiment of the present application does not limit the setting manner of the processing parameters of the fully connected layer included in the first fully connected sub-model, and may be set according to experience.
  • Step 504 Starting from the second fully-connected sub-model, input the fully-connected feature output by the previous fully-connected sub-model into the next fully-connected sub-model for processing, and obtain the fully-connected feature output by the next fully-connected sub-model.
  • the embodiment of the present application does not limit the number of fully connected sub-models included in the target classification model.
  • the processing parameters of the fully-connected layers in different fully-connected sub-models may be the same or different, which is not limited in this embodiment of the present application.
  • Step 505 Input the fully connected feature output by the last fully connected sub-model into the confidence prediction sub-model for processing, and obtain target reconstruction confidence information output by the confidence prediction sub-model.
  • the fully-connected feature output by the last fully-connected sub-model is used as the input of the confidence prediction sub-model, which processes it to obtain the target reconstruction confidence information output by the confidence prediction sub-model.
  • This embodiment of the present application does not limit the structure of the confidence prediction sub-model, as long as the reconstruction confidence information can be output.
  • the confidence prediction sub-model includes a fully connected layer, and the target reconstruction confidence information is output through the processing of this fully connected layer.
  • the activation function used in the object classification model is a ReLU function.
  • the target classification model includes five convolutional sub-models, two fully-connected sub-models and one confidence prediction sub-model connected in sequence.
  • the number of convolutional layers included in each of the five convolutional submodels is 1, 1, 2, 2, and 2, respectively, and each convolutional submodel includes a pooling layer.
  • all convolutional layers have a kernel size of 3×3×3, and all pooling layers have a kernel size of 2×2×2; the pooling layers may be max pooling layers, average pooling layers, etc., which is not limited in this embodiment of the present application.
  • the first convolution sub-model 601 includes one convolutional layer and one pooling layer;
  • the second convolution sub-model 602 includes one convolutional layer and one pooling layer;
  • the third convolution sub-model 603 includes two convolutional layers and one pooling layer;
  • the fourth convolution sub-model 604 includes two convolutional layers and one pooling layer;
  • the fifth convolution sub-model 605 includes two convolutional layers and one pooling layer.
  • the input of the target classification model is 3D image data with two channels (that is, the original image data and the target segmentation result). Assuming that the size of the target image is 32×32×32, the size of the two-channel 3D image data input to the target classification model is 32×32×32×2, and the output of the target classification model is the target reconstruction confidence information used to indicate the reliability of the complete reconstruction result obtained based on the target segmentation result.
  • after the two-channel 3D image data passes through the convolutional layer in the first convolution sub-model 601, 64-dimensional features are extracted for each pixel, and the size in each direction is reduced to 1/2 of the original size by the pooling layer; that is, after the processing of the first convolution sub-model, the size of the output classification feature is 16×16×16×64. Thereafter, the feature dimensions of the classification features output by each subsequent convolution sub-model are 128, 256, 512, and 512, respectively. Finally, after processing by two fully connected sub-models with output feature dimensions of 4096 and 4096 and a confidence prediction sub-model with an output feature dimension of 2, the target reconstruction confidence information is obtained.
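The size progression described above can be checked with a short sketch. This is not the patent's code; it simply traces the shapes implied by the description (3×3×3 convolutions that preserve spatial size, one 2×2×2 pooling per sub-model, then the 4096/4096/2 fully connected stages):

```python
# Hypothetical shape trace of the five-stage classifier described above.
# Convolutions are assumed to use padding that preserves spatial size;
# each 2x2x2 pooling layer halves every spatial dimension.

def trace_classifier_shapes(size=32, in_channels=2):
    channels = [64, 128, 256, 512, 512]  # output dims of the five conv sub-models
    shapes = [(size, size, size, in_channels)]
    for c in channels:
        size //= 2                       # one pooling layer per sub-model
        shapes.append((size, size, size, c))
    # two fully connected sub-models, then the confidence prediction sub-model
    shapes += [(4096,), (4096,), (2,)]
    return shapes

shapes = trace_classifier_shapes()
```

Running the trace reproduces the 16×16×16×64 feature size after the first sub-model and a 1×1×1×512 feature before the fully connected stages.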
  • In a possible implementation manner, after the target segmentation result is obtained, it is not necessary to call the target classification model to obtain the target reconstruction confidence information. Before calling the target segmentation model to obtain the target segmentation result, the target segmentation model needs to be trained first. In this case, the process of obtaining the target segmentation model by training includes the following steps 1-1 and 1-2.
  • Step 1-1 Acquire at least one sample image, original sample image data corresponding to the at least one sample image, reconstructed reference sample data corresponding to the at least one sample image, and standard segmentation results corresponding to the at least one sample image.
  • a sample image refers to an image used to train a segmentation model.
  • a sample image corresponds to a sample tree structure. Different sample images can correspond to the same sample tree structure or to different sample tree structures, which is not limited in this embodiment of the present application.
  • the real complete reconstruction result of the sample tree structure corresponding to any sample image is known, and this real complete reconstruction result is taken as the standard complete reconstruction result of the sample tree structure corresponding to that sample image in that sample image.
  • the embodiment of the present application does not limit the acquisition method of the sample image, as long as it is ensured that the standard complete reconstruction result of the sample tree structure corresponding to the sample image in the sample image is known.
  • the sample image is a partial image in the whole image, and the size of the sample image is 32×32×32. After the sample image is determined, the original sample image data corresponding to the sample image can be directly obtained.
  • the reconstructed reference sample data corresponding to any sample image is used to provide data reference for invoking the segmentation model to obtain the predicted segmentation result of the any sample image.
  • the reconstruction reference sample data corresponding to any sample image is determined based on the local reconstruction result of the sample tree structure corresponding to the any sample image in the standard complete reconstruction result in the any sample image.
  • the local reconstruction result of the sample tree structure corresponding to any sample image in the standard complete reconstruction result in any sample image is retained, and based on the retained local reconstruction result, the reconstruction reference sample data corresponding to any sample image is determined.
  • the standard complete reconstruction result can indicate all the pixels belonging to the sample tree structure corresponding to any sample image
  • the retained local reconstruction results can indicate some pixels belonging to the sample tree structure corresponding to any sample image.
  • Part of the pixels belonging to the sample tree structure corresponding to any sample image can provide data reference for invoking the segmentation model to realize the reconstruction process in the sample tree structure.
  • the embodiment of the present application does not limit the relationship between the retained local reconstruction result and the standard complete reconstruction result.
  • In an exemplary embodiment, the pixel points indicated by the retained local reconstruction result are the first reference number of pixel points among all the pixel points that the standard complete reconstruction result indicates as belonging to the sample tree structure corresponding to the sample image.
  • The first reference number of pixel points refers to the reference number of pixel points that are closest to the starting pixel among all the pixel points belonging to the sample tree structure corresponding to the sample image.
  • the reference number is set according to experience, or flexibly adjusted according to the number of pixel points all belonging to the sample tree structure corresponding to any sample image, which is not limited in this embodiment of the present application.
  • the reference number is half of the number of pixel points that all belong to the sample tree-like organization corresponding to any sample image.
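As a rough illustration of the retained local reconstruction result, the sketch below keeps the half of an ordered pixel path nearest the starting pixel; the function name and list representation are illustrative assumptions, not from the patent:

```python
# Illustrative helper: given the ordered pixel path of a sample tree structure
# (starting pixel first), retain the first half as the local reconstruction
# result; the segmentation model must then predict the remainder.

def retain_local_reconstruction(path_pixels, keep_fraction=0.5):
    reference_count = int(len(path_pixels) * keep_fraction)
    return path_pixels[:reference_count]

kept = retain_local_reconstruction([(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)])
```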
  • The method of determining the reconstruction reference sample data corresponding to any sample image refers to the method of obtaining the reconstruction reference data corresponding to the target image in step 201, and will not be repeated here.
  • the standard segmentation result corresponding to any sample image is used to indicate the standard category corresponding to each pixel in that sample image, and the standard category corresponding to any pixel is used to indicate whether that pixel actually belongs to the sample tree structure corresponding to the sample image.
  • the standard segmentation result corresponding to any sample image can be directly determined based on the standard complete reconstruction result of the sample tree structure corresponding to any sample image in any sample image.
  • Step 1-2 Supervise the training of the initial segmentation model based on the original sample image data corresponding to the at least one sample image, the reconstructed reference sample data corresponding to the at least one sample image, and the standard segmentation result corresponding to the at least one sample image, respectively, to obtain the target segmentation model.
  • the initial segmentation model refers to the segmentation model that needs to be trained
  • the target segmentation model refers to the trained segmentation model.
  • In a possible implementation manner, the initial segmentation model is supervised and trained based on the original sample image data corresponding to the at least one sample image, the reconstructed reference sample data corresponding to the at least one sample image, and the standard segmentation result corresponding to the at least one sample image, and the process of obtaining the target segmentation model is as follows:
  • 1. Invoke the initial segmentation model, and obtain the predicted segmentation result corresponding to a target sample image based on the original sample image data and the reconstructed reference sample data corresponding to that target sample image in the at least one sample image; the target sample image is a sample image in the at least one sample image used to update the parameters of the segmentation model once.
  • 2. Determine the target loss function based on the predicted segmentation result corresponding to the target sample image and the standard segmentation result corresponding to the target sample image, and update the parameters of the initial segmentation model by back-propagation based on the target loss function, to obtain the segmentation model after updating the parameters.
  • 3. In response to the parameter update termination condition not being met, perform steps 1 and 2 based on the segmentation model after updating the parameters, until the parameter update termination condition is met, and the target segmentation model is obtained.
  • the target sample images used when performing steps 1 and 2 based on the segmentation model after updating the parameters can be the same as, or different from, the target sample images used when performing steps 1 and 2 based on the initial segmentation model; this embodiment of the present application does not limit this.
  • the number of target sample images used in each execution of step 1 and step 2 may be one or more, which is not limited in this embodiment of the present application. In an exemplary embodiment, the same number of target sample images are utilized each time steps 1 and 2 are performed.
  • satisfying the parameter update termination condition includes, but is not limited to, any of the following: the target loss function converges; the target loss function is smaller than a reference loss threshold; the number of parameter updates reaches a threshold number of times.
  • one complete pass of training over all sample images is called an epoch.
  • In an exemplary embodiment, satisfying the parameter update termination condition also includes: the number of epochs reaches a specified threshold, for example, a specified threshold of 50.
  • In an exemplary embodiment, the process of determining the target loss function based on the predicted segmentation result corresponding to the target sample image and the standard segmentation result corresponding to the target sample image is implemented based on formula 1.
  • L 1 represents the target loss function
  • z represents the reconstructed reference sample data corresponding to the target sample image
  • I represents the original sample image data corresponding to the target sample image
  • G(z, I) represents the predicted segmentation result corresponding to the target sample image
  • y represents the standard segmentation result corresponding to the target sample image.
  • the manner of determining the target loss function may also be implemented based on other manners, which are not limited in this embodiment of the present application.
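Since formula 1 itself is not reproduced in the text, the sketch below uses voxel-wise binary cross-entropy as one plausible choice of target loss function between the predicted segmentation result G(z, I) and the standard segmentation result y; this is an assumption, not the patent's stated formula:

```python
import numpy as np

# Hedged sketch of a supervised target loss L1: voxel-wise binary
# cross-entropy between predicted probabilities (pred, i.e. G(z, I))
# and the standard segmentation result (target, i.e. y).
def target_loss(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))
```

A confident, correct prediction gives a small loss; an uncertain one gives a larger loss, which is the behavior the supervised training step relies on.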
  • In another possible implementation, when the target classification model is also invoked to obtain the target reconstruction confidence information, the trained target classification model needs to be obtained in addition to the trained target segmentation model.
  • the target segmentation model and the target classification model may be obtained by unified training through adversarial training, or may be obtained by separate training, which is not limited in this embodiment of the present application.
  • the process of obtaining the target segmentation model and the target classification model by training includes the following steps 2-1 and 2-2.
  • Step 2-1 Obtain at least one sample image, original sample image data corresponding to at least one sample image, reconstructed reference sample data corresponding to at least one sample image, and standard segmentation results corresponding to at least one sample image.
  • step 2-1 For the implementation manner of this step 2-1, refer to the foregoing step 1-1, and details are not repeated here.
  • Step 2-2 Perform adversarial training on the initial segmentation model and the initial classification model based on the original sample image data corresponding to the at least one sample image, the reconstructed reference sample data corresponding to the at least one sample image, and the standard segmentation result corresponding to the at least one sample image, to obtain the target segmentation model and the target classification model.
  • the initial segmentation model refers to the segmentation model that needs to be trained
  • the initial classification model refers to the classification model that needs to be trained
  • the target segmentation model refers to the trained segmentation model
  • the target classification model refers to the trained classification model.
  • the implementation process of step 2-2 includes the following steps 2-2a to 2-2g.
  • the first sample image refers to a sample image, in the at least one sample image, used for updating the parameters of the classification model once in one round of adversarial training.
  • the number of the first sample images may be one or more, which is not limited in this embodiment of the present application.
  • For the implementation of step 2-2a, refer to the process of invoking the target segmentation model to obtain the target segmentation result corresponding to the target image in step 202; details are not repeated here.
  • The first reconstruction confidence information refers to information predicted by the initial classification model based on the original sample image data corresponding to the first sample image and the predicted segmentation result corresponding to the first sample image, and is used to indicate the reliability of the reconstruction result obtained according to the predicted segmentation result corresponding to the first sample image.
  • The second reconstruction confidence information refers to information predicted by the initial classification model based on the original sample image data corresponding to the first sample image and the standard segmentation result corresponding to the first sample image, and is used to indicate the reliability of the reconstruction result obtained according to the standard segmentation result corresponding to the first sample image.
  • the first reconstruction confidence information includes a probability value that the predicted segmentation result corresponding to the first sample image is a correct segmentation result and a probability value that the predicted segmentation result corresponding to the first sample image is an incorrect segmentation result;
  • the second reconstruction confidence information includes a probability value that the standard segmentation result corresponding to the first sample image is a correct segmentation result and a probability value that the standard segmentation result corresponding to the first sample image is an incorrect segmentation result.
  • For the implementation of step 2-2b, refer to the process of invoking the target classification model to obtain the target reconstruction confidence information in step 202; details are not repeated here.
  • the process of determining the first loss function is implemented based on formula 2.
  • L 2 represents the first loss function
  • z represents the reconstructed reference sample data corresponding to the first sample image
  • I represents the original sample image data corresponding to the first sample image
  • G(z, I) represents the predicted segmentation result corresponding to the first sample image
  • D(G(z, I)) represents the probability value that the predicted segmentation result corresponding to the first sample image included in the first reconstruction confidence information is the correct segmentation result
  • y represents the standard segmentation result corresponding to the first sample image
  • D(y, I) represents the probability value that the standard segmentation result corresponding to the first sample image included in the second reconstruction confidence information is the correct segmentation result.
  • During the process of updating the parameters of the initial classification model based on the first loss function, the update goal is to maximize the first loss function; that is, the goal is for the classification model to predict the probability value that the standard segmentation result corresponding to the first sample image is a correct segmentation result to be as close to 1 as possible, and to predict the probability value that the predicted segmentation result corresponding to the first sample image is a correct segmentation result to be as close to 0 as possible.
  • After the parameters of the initial classification model are updated once based on the first loss function, it is determined whether the updating process of the parameters of the initial classification model satisfies the first termination condition. When the first termination condition is satisfied, the first classification model is obtained, and the subsequent step 2-2d is then executed. When the first termination condition is not satisfied, the parameters of the initial classification model continue to be updated based on steps 2-2a to 2-2c above, until the updating process of the parameters of the initial classification model satisfies the first termination condition, after which the subsequent step 2-2d is performed.
  • that the updating process of the parameters of the initial classification model satisfies the first termination condition means that the number of updates of the parameters of the initial classification model reaches the first threshold.
  • the first threshold is set according to experience or flexibly adjusted according to an application scenario, which is not limited in this embodiment of the present application.
  • a complete adversarial training includes not only the training of the classification model, but also the training of the segmentation model.
  • the training of the segmentation model in a complete adversarial training is implemented based on the subsequent steps 2-2d to 2-2f.
  • the second sample image refers to a sample image in at least one sample image used for updating the parameters of the segmentation model in one adversarial training.
  • the second sample image may be the same as the first sample image, or may be different from the first sample image, which is not limited in this embodiment of the present application. Also, the number of the second sample images may be one or more.
  • For the implementation of step 2-2d, refer to the process of invoking the target segmentation model to obtain the target segmentation result corresponding to the target image in step 202; details are not repeated here.
  • the third reconstruction confidence information refers to the information predicted by the first classification model based on the original sample image data corresponding to the second sample image and the predicted segmentation result corresponding to the second sample image, and is used to indicate that the predicted segmentation result corresponding to the second sample image is obtained. Information about the reliability of the reconstruction results.
  • the third reconstruction confidence information includes a probability value that the predicted segmentation result corresponding to the second sample image is a correct segmentation result and a probability value that the predicted segmentation result corresponding to the second sample image is an incorrect segmentation result.
  • For the implementation of step 2-2e, refer to the process of invoking the target classification model to obtain the target reconstruction confidence information in step 202; details are not repeated here.
  • the process of determining the second loss function is implemented based on formula 3.
  • L3 represents the second loss function;
  • z represents the reconstruction reference sample data corresponding to the second sample image;
  • I represents the original sample image data corresponding to the second sample image;
  • G(z, I) represents the predicted segmentation result corresponding to the second sample image;
  • y represents the standard segmentation result corresponding to the second sample image;
  • D(G(z, I)) represents the probability value that the predicted segmentation result corresponding to the second sample image included in the third reconstruction confidence information is the correct segmentation result .
  • During the process of updating the parameters of the initial segmentation model based on the second loss function, the update target is to minimize the second loss function; that is, the goal is for the classification model to predict the probability value that the predicted segmentation result corresponding to the second sample image is a correct segmentation result to be as close to 1 as possible, so that the predicted segmentation result predicted by the segmentation model is as close to the standard segmentation result as possible.
  • the fact that the updating process of the parameters of the initial segmentation model satisfies the second termination condition means that the number of updates of the parameters of the initial segmentation model reaches the second threshold.
  • the second threshold is set according to experience or flexibly adjusted according to an application scenario, which is not limited in this embodiment of the present application.
  • the second threshold is the same as the first threshold or different from the first threshold, which is not limited in this embodiment of the present application.
  • At this point, the training of both the classification model and the segmentation model in one complete round of adversarial training is completed, and the first classification model and the first segmentation model are obtained.
  • After the first classification model and the first segmentation model are obtained, that is, after one complete round of adversarial training is completed, it is determined whether the adversarial training process satisfies the target termination condition.
  • If the adversarial training process satisfies the target termination condition, the first segmentation model is directly used as the target segmentation model, and the first classification model is used as the target classification model.
  • If the adversarial training process does not satisfy the target termination condition, adversarial training of the first classification model and the first segmentation model continues based on steps 2-2a to 2-2f until the adversarial training process satisfies the target termination condition; the segmentation model obtained when the target termination condition is satisfied is used as the target segmentation model, and the classification model obtained at that point is used as the target classification model.
  • satisfying the target termination condition in the adversarial training process includes, but is not limited to, any of the following: the number of adversarial training rounds reaches a third threshold; the specified loss function converges; the specified loss function is not greater than a specified loss threshold.
  • the specified loss function refers to the second loss function when a complete adversarial training is completed.
  • the segmentation model and the classification model constitute a GAN (Generative Adversarial Network) framework.
  • the input of the segmentation model is the original sample image data and the reconstructed reference sample data, and the output is the predicted segmentation result corresponding to the sample image;
  • the input of the classification model is the original sample image data together with the standard segmentation result or the predicted segmentation result output by the segmentation network, and the output is the reconstruction confidence information.
  • The closer the probability value, included in the reconstruction confidence information determined based on the predicted segmentation result, that the predicted segmentation result is a correct segmentation result is to 1, the more reliable the classification model considers the reconstruction result obtained from the segmentation result predicted by the segmentation model to be.
  • Formula 4: V(D, G) = E_y[log D(y, I)] + E_z[log(1 - D(G(z, I)))]
  • For the meaning of the parameters involved in Formula 4, see Formula 2 and Formula 3.
  • During adversarial training, the parameters of the segmentation model G are first fixed and the parameters of the classification model D are updated, so that the probability value, included in the reconstruction confidence information determined based on the predicted segmentation result, that the predicted segmentation result is a correct segmentation result is as close to 0 as possible, and the probability, included in the reconstruction confidence information determined based on the standard segmentation result, that the standard segmentation result is a correct segmentation result is as close to 1 as possible.
  • Then, the parameters of the classification model D are fixed and the parameters of the segmentation model G are updated, so that the predicted segmentation result predicted by the segmentation model G is as close to the standard segmentation result as possible, making the probability, included in the reconstruction confidence information determined by the classification model D based on the predicted segmentation result, that the predicted segmentation result is a correct segmentation result close to 1.
  • The above describes obtaining the target classification model and the target segmentation model through adversarial training, but the embodiments of the present application are not limited thereto. The target segmentation model and the target classification model can also be obtained by separate training, as long as it is ensured that the target segmentation model can predict accurate segmentation results based on the original image data and the reconstructed reference data, and that the target classification model can predict accurate reconstruction confidence information based on the original image data and the segmentation results.
  • the embodiment of the present application adopts the gradient descent method based on the Adam (a stochastic optimization algorithm) optimization algorithm to update the parameters of the model in the training process.
  • In an exemplary embodiment, the betas (decay rates) in Adam are set to (0.95, 0.9995); that is, the exponential decay rate of the first-order moment estimate is 0.95 and the exponential decay rate of the second-order moment estimate is 0.9995. Weight decay is not used.
  • the initial learning rate is set to 0.0001, which is reduced to one-tenth of its previous value after every 10 epochs, and a total of 50 epochs are trained.
  • In an exemplary embodiment, a dropout layer is added between any two fully connected layers, and the dropout rate is set to 0.5; that is, in each iteration, only a randomly selected 50% of the features are used in training.
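The learning-rate schedule described above can be sketched as follows; the base rate of 1e-4 is an assumption, since the value printed in the text appears garbled:

```python
# Step schedule described above: divide the learning rate by 10 after every
# 10 epochs, for 50 epochs in total. base_lr = 1e-4 is an assumed value.
def learning_rate(epoch, base_lr=1e-4):
    return base_lr * (0.1 ** (epoch // 10))

schedule = [learning_rate(e) for e in range(50)]
```

Epochs 0-9 train at the base rate, epochs 10-19 at one-tenth of it, and so on down to epochs 40-49.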
  • step 203 based on the target segmentation result, the target tree structure is reconstructed in the target image to obtain a complete reconstruction result of the target tree structure in the target image.
  • the target segmentation result can indicate whether each pixel in the target image belongs to the target tree-like organization. After the target segmentation result is obtained, the target tree-like organization can be automatically reconstructed in the target image based on the target segmentation result, and the result of this reconstruction is the complete reconstruction result of the target tree-like organization in the target image.
  • the methods for reconstructing the target tree-like organization in the target image include but are not limited to the following two.
  • Method 1 The target tree-like organization is reconstructed in the target image directly based on the target segmentation result.
  • each pixel belonging to the target tree organization corresponds to a node of the target tree organization.
  • the nodes of the target tree structure are marked in the target image directly according to all the pixels belonging to the target tree structure, so as to realize the process of reconstructing the target tree structure in the target image directly based on the target segmentation result.
  • In an exemplary embodiment, the connection relationships between the nodes of the target tree-like organization are also marked, so as to realize the process of reconstructing the target tree-like organization in the target image directly based on the target segmentation result.
  • When the target tree-like organization is a target neuron, a node of the target tree-like organization refers to a neuron node of the target neuron.
  • When the target tree-like organization is a target blood vessel, a node of the target tree-like organization refers to a vessel node of the target blood vessel.
  • Method 2 Based on the target segmentation result and the local reconstruction result of the target tree structure in the target image, the target tree structure is reconstructed in the target image.
  • the process of reconstructing the target tree structure in the target image is realized based on the target segmentation result and the local reconstruction result of the target tree structure in the target image. Based on this, the process of reconstructing the target tree structure in the target image is equivalent to the process of supplementing the marking result indicated by the local reconstruction result of the target tree structure in the target image.
  • In an exemplary embodiment, the process of reconstructing the target tree structure in the target image is as follows: based on the target segmentation result, determine all the pixel points belonging to the target tree-like organization; based on the local reconstruction result of the target tree-like organization in the target image, determine the already-reconstructed pixel points; then, on the basis of the marking result indicated by the local reconstruction result of the target tree-like organization in the target image, mark the pixel points other than the already-reconstructed pixel points among all the pixel points belonging to the target tree-like organization in the target image as the other nodes of the target tree-like organization.
  • connection relationships between other nodes are also marked.
  • the target labeling result can be obtained, and the target labeling result is the complete labeling result of the target tree structure in the target image.
  • the target labeling result is obtained, based on the target labeling result, the complete reconstruction result of the target tree-like organization in the target image is obtained.
  • the method of obtaining the complete reconstruction result of the target tree-like organization in the target image is: taking the target image including the target labeling result as the complete reconstruction result of the target tree-like organization in the target image.
  • the method for obtaining the complete reconstruction result of the target tree organization in the target image based on the target labeling result is: based on the target labeling result, determine the relevant data of each node of the target tree organization, and take the file including the relevant data of each node as the complete reconstruction result of the target tree organization in the target image.
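As an illustration of turning a segmentation result into per-node data, the sketch below lists one record per voxel classified as belonging to the tree-like organization; the helper name and record fields are illustrative assumptions, and connectivity between nodes is omitted:

```python
import numpy as np

# Illustrative helper (not from the patent): convert a binary segmentation
# mask into a list of node records, one per voxel that belongs to the
# target tree-like organization.
def mask_to_nodes(mask):
    coords = np.argwhere(mask > 0)  # (z, y, x) index triples, in C order
    return [{"id": i + 1, "z": int(z), "y": int(y), "x": int(x)}
            for i, (z, y, x) in enumerate(coords)]

mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[0, 0, 0] = 1
mask[0, 0, 1] = 1
nodes = mask_to_nodes(mask)
```

A file of such records (plus parent links for connectivity) would play the role of the per-node data file mentioned above.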
  • the target dendritic tissue may be a target neuron or a target blood vessel, and the embodiments of the present application take the target dendritic tissue as the target neuron as an example for description.
  • the target tree organization is a target neuron
  • the target category of any pixel is used to indicate that any pixel belongs to the target neuron or that any pixel does not belong to the target neuron.
  • In a possible implementation, the process of obtaining the complete reconstruction result of the target tree structure in the target image includes: based on the target segmentation result, determining the target pixel points belonging to the target neuron among the pixel points in the target image; based on the target pixel points, marking the neuron nodes of the target neuron and the connection relationships between the neuron nodes of the target neuron in the target image to obtain the target labeling result; and, based on the target labeling result, obtaining the complete reconstruction result of the target neuron in the target image.
  • the target pixel points refer to the pixel points, among the pixel points in the target image, that belong to the target neuron.
  • In an exemplary embodiment, the implementation of marking the neuron nodes of the target neuron and the connection relationships between the neuron nodes of the target neuron in the target image to obtain the target labeling result is as follows: directly according to the target pixel points, mark all the neuron nodes of the target neuron and the connection relationships between all the neuron nodes of the target neuron in the target image, to obtain the target labeling result.
  • the target image is marked with the neuron nodes of the target neuron and the connection relationships between those neuron nodes
  • another implementation of obtaining the target labeling result is as follows: determine the already-reconstructed pixels based on the local reconstruction result of the target neuron in the target image; then, on the basis of the labeling result indicated by that local reconstruction result, mark in the target image the remaining neuron nodes of the target neuron and the connection relationships between them according to the target pixels other than the reconstructed pixels, obtaining the target labeling result.
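The marking step above can be pictured in code. The following is a minimal illustrative sketch, not the patented implementation: it assumes the segmentation result has already been reduced to a set of target-pixel coordinates plus a known starting pixel, and links neighboring pixels into parent-child node pairs by breadth-first traversal (the function name, the 26-neighborhood, and the SWC-style parent id of -1 for the root are all assumptions for illustration).

```python
from collections import deque

def label_nodes(mask, start):
    """Link target pixels into parent-child node pairs by BFS from `start`.

    mask  : set of (x, y, z) coordinates classified as target pixels
    start : (x, y, z) starting pixel of the tree-like structure
    Returns a list of (node_id, coord, parent_id) tuples; parent_id is -1
    for the root node, mirroring the SWC convention described later.
    """
    neighbors = [(dx, dy, dz)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                 if (dx, dy, dz) != (0, 0, 0)]
    ids = {start: 1}
    nodes = [(1, start, -1)]
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        for dx, dy, dz in neighbors:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if nxt in mask and nxt not in ids:
                ids[nxt] = len(ids) + 1
                nodes.append((ids[nxt], nxt, ids[cur]))
                queue.append(nxt)
    return nodes

# Toy mask: a short 3-voxel branch along the x axis.
mask = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
nodes = label_nodes(mask, (0, 0, 0))
```

Each node points back to its parent, so the connection relationships fall out of the traversal order.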
  • the process of reconstructing the dendritic tissue in the image is shown in FIG. 7 .
  • the original image data and the target segmentation result are processed by the target classification model to obtain the target reconstruction confidence information output by the target classification model; based on the target segmentation result corresponding to the target image, the target tree-like organization is reconstructed in the target image to obtain the complete reconstruction result of the target tree-like organization in the target image.
  • the reliability of the complete reconstruction result of the target tree-like organization in the target image is judged according to the target reconstruction confidence information, and researchers then correct complete reconstruction results with low reliability.
  • the labeling result indicated by the complete reconstruction result of the target tree in the target image adds new nodes compared to the labeling result indicated by the partial reconstruction result of the target tree in the target image.
  • schematic diagrams of a target image that does not include any labeling result, a target image that includes the labeling result indicated by the partial reconstruction result of the target tree-like organization, and a target image that includes the labeling result indicated by the complete reconstruction result of the target tree-like organization are shown in (1) in FIG. 8, (2) in FIG. 8, and (3) in FIG. 8, respectively.
  • the number of nodes marked in the target image shown in (3) in FIG. 8 is larger than the number of nodes marked in the target image shown in (2) in FIG. 8 .
  • the target image is the initial partial image corresponding to the target tree-like structure in the initial image that includes the complete target tree-like structure; that is, the target image includes the starting point of the complete target tree-like structure.
  • the following steps 204 to 205 are further included.
  • Step 204: in response to the complete reconstruction result of the target tree-like structure in the target image not satisfying the reconstruction termination condition, obtain, based on that complete reconstruction result, the next partial image corresponding to the target tree-like structure in the initial image, and obtain the complete reconstruction result of the target tree-like structure in the next partial image.
  • after the complete reconstruction result of the target tree-like structure in the target image is obtained, it is determined whether that result satisfies the reconstruction termination condition.
  • the manner of judging whether the complete reconstruction result of the target tree organization in the target image satisfies the reconstruction termination condition is set according to experience, or flexibly adjusted according to the application scenario, which is not limited in this embodiment of the present application.
  • one way of judging whether the complete reconstruction result of the target tree-like structure in the target image satisfies the reconstruction termination condition is: in response to the complete reconstruction result containing no supplementary reconstruction result beyond the local reconstruction result of the target dendritic tissue in the target image, determine that the complete reconstruction result satisfies the reconstruction termination condition; in response to the complete reconstruction result containing a supplementary reconstruction result in addition to the local reconstruction result, determine that the complete reconstruction result does not satisfy the reconstruction termination condition.
  • the target dendritic tissue needs to be further reconstructed to obtain the complete reconstruction result of the target dendritic tissue in the initial image.
  • the way to continue the reconstruction is: based on the complete reconstruction result of the target tree-like structure in the target image, obtain the next partial image corresponding to the target tree-like structure in the initial image, and obtain the complete reconstruction result of the target tree-like structure in the next partial image.
  • one way of acquiring the next partial image corresponding to the target tree-like structure in the initial image is: among the pixels indicated as belonging to the target tree-like structure by the complete reconstruction result of the target tree-like structure in the target image, determine the pixel farthest from the starting pixel of the initial image, and obtain the next partial image based on that pixel.
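The farthest-pixel rule can be made concrete with a short sketch. The helper names and the centering convention below are assumptions for illustration; the document does not fix how the next window is positioned around the chosen pixel, nor does it require clamping to the volume bounds.

```python
def next_crop_center(reconstructed, start):
    """Pick the reconstructed pixel farthest from the starting pixel; it
    becomes the reference point for cropping the next partial image."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return max(reconstructed, key=lambda p: dist2(p, start))

def crop_window(center, size=32):
    """Return the inclusive (lo, hi) corner coordinates of a size^3 block
    centered on `center` (clamping to the volume is omitted for brevity)."""
    half = size // 2
    lo = tuple(c - half for c in center)
    hi = tuple(c + half - 1 for c in center)
    return lo, hi

# Toy reconstructed pixels; (9, 3, 1) is farthest from the start (0, 0, 0).
pixels = [(0, 0, 0), (5, 0, 0), (9, 3, 1)]
center = next_crop_center(pixels, (0, 0, 0))
lo, hi = crop_window(center, size=32)
```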
  • in response to the complete reconstruction result of the target tree-like structure in the target image satisfying the reconstruction termination condition, the complete reconstruction result of the target tree-like structure in the initial image is obtained.
  • since the target image is the starting partial image corresponding to the target tree-like organization in the initial image, the complete reconstruction result of the target tree-like organization in the target image is directly used as the complete reconstruction result of the target tree-like organization in the initial image.
  • Step 205: in response to the complete reconstruction result of the target tree-like structure in the next partial image satisfying the reconstruction termination condition, obtain the complete reconstruction result of the target tree-like structure in the initial image based on the obtained complete reconstruction results of the target tree-like structure in each partial image.
  • after the complete reconstruction result of the target tree-like structure in the next partial image is obtained, it is judged whether that result satisfies the reconstruction termination condition.
  • when the complete reconstruction result of the target tree-like structure in the next partial image satisfies the reconstruction termination condition, the complete reconstruction result of the target tree-like structure in the initial image is obtained based on the obtained complete reconstruction results of the target tree-like structure in each partial image.
  • the complete reconstruction result of the target dendritic tissue in the initial image refers to the result obtained after the target dendritic tissue is completely marked in the initial image.
  • the obtained complete reconstruction results of the target tree-like structure in each partial image can indicate the local target tree-like structure marked in the initial image. It should be noted that the complete reconstruction results of the target tree-like structure in two adjacent partial images may overlap.
  • one way of obtaining the complete reconstruction result of the target tree-like organization in the initial image is: combine the complete reconstruction results of the target tree-like organization in each partial image according to the association relationships between the partial images, obtaining the complete reconstruction result of the target tree-like organization in the initial image.
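One possible combination step can be sketched as follows. Since reconstruction results in adjacent partial images may overlap, nodes can be deduplicated by coordinate while merging; the data layout here is an illustrative assumption, not the document's actual representation.

```python
def merge_block_results(block_results):
    """Merge per-block node lists into one reconstruction, dropping nodes
    whose coordinates were already produced by an overlapping block.

    block_results: iterable of lists of (coord, parent_coord) pairs,
    ordered by the sequence in which the partial images were reconstructed.
    """
    seen = set()
    merged = []
    for nodes in block_results:
        for coord, parent in nodes:
            if coord not in seen:
                seen.add(coord)
                merged.append((coord, parent))
    return merged

# block_b overlaps block_a at (1, 0, 0); the duplicate is dropped.
block_a = [((0, 0, 0), None), ((1, 0, 0), (0, 0, 0))]
block_b = [((1, 0, 0), (0, 0, 0)), ((2, 0, 0), (1, 0, 0))]
merged = merge_block_results([block_a, block_b])
```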
  • when the complete reconstruction result of the target dendritic tissue in the next partial image does not satisfy the reconstruction termination condition, continue to obtain, based on that result, the following partial image corresponding to the target dendritic tissue in the initial image, and obtain the complete reconstruction result of the target tree-like structure in that partial image; this continues until the complete reconstruction result in some partial image satisfies the reconstruction termination condition, whereupon the complete reconstruction result of the target dendritic tissue in the initial image is obtained based on the acquired complete reconstruction results of the target dendritic tissue in each partial image.
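The iterate-until-termination flow described above can be sketched as a loop skeleton. The callbacks stand in for the per-block reconstruction and cropping steps and are purely illustrative scaffolding.

```python
def reconstruct_tree(initial_image, start, reconstruct_block, next_block,
                     max_blocks=1000):
    """Outer reconstruction loop sketch.

    reconstruct_block(image) -> (result, has_new_nodes): one partial-image pass.
    next_block(initial_image, result) -> the next partial image to process.
    Iterates until a pass adds no supplement beyond the local result (the
    termination condition) and returns the per-block results for merging.
    """
    results = []
    image = start
    for _ in range(max_blocks):  # hard cap guards against non-termination
        result, has_new_nodes = reconstruct_block(image)
        results.append(result)
        if not has_new_nodes:    # no supplementary reconstruction: stop
            break
        image = next_block(initial_image, result)
    return results

# Toy stubs: two blocks carry new nodes, the third does not.
calls = iter([("b1", True), ("b2", True), ("b3", False)])
results = reconstruct_tree(
    initial_image=None, start="start",
    reconstruct_block=lambda img: next(calls),
    next_block=lambda init, res: res)
```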
  • the complete reconstruction result of the target dendritic tissue in the initial image is stored, so that researchers can directly extract it and carry out further research.
  • the embodiment of the present application does not limit the manner of storing the complete reconstruction result of the target tree organization in the initial image.
  • the reconstruction of a neuron is based on a high-resolution three-dimensional image of the brain acquired under a microscope; the length of a reconstructed neuron along a given dimension can reach several thousand pixels, so that even an image covering only the region where the neuron is located can occupy more than a terabyte of storage, while a single neuron occupies only a very small portion of the image.
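The terabyte figure can be sanity-checked with back-of-the-envelope arithmetic, assuming (illustratively) one byte per voxel and a cubic volume of 10,000 pixels per side:

```python
# Rough storage estimate for a high-resolution brain sub-volume:
# a cube of 10,000 pixels per side at 1 byte per voxel.
side = 10_000
bytes_per_voxel = 1
total_bytes = side ** 3 * bytes_per_voxel   # 10^12 bytes
total_tb = total_bytes / 1024 ** 4          # convert to tebibytes
```

At these assumed parameters the raw volume already approaches a terabyte, which is why the complete reconstruction is stored as a compact node list rather than as voxel data.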
  • an SWC file (a file type for neuron morphology) is used to store the complete reconstruction result of the neuron.
  • Each SWC file represents a neuron, and each line in the SWC file represents a neuron node in the neuron.
  • the SWC file used to store the complete reconstruction result of the neuron is shown in FIG. 9.
  • each line includes the coordinates (x, y, z) of the neuron node in the image, the type of the neuron node, the radius (r) of the neuron node, the number (id) of the neuron node, and the number (pid) of the parent node of the neuron node; the id and pid fields reflect the connection relationships between the neuron nodes.
  • the type of a neuron node is, for example, an axon or a dendrite, and a numeric identifier may be stored in place of the specific type.
  • the three-dimensional point cloud data shown in FIG. 10 can be generated based on the SWC file storing the complete reconstruction results of the neurons.
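The per-line layout described above (coordinates, type, radius, id, parent id) can be illustrated with a minimal writer/parser. This is a sketch of the format as described here, not a full SWC implementation; the column order and the convention that comment lines begin with `#` follow common SWC usage and are assumptions beyond the text.

```python
def write_swc(nodes):
    """Serialize neuron nodes to SWC text: one node per line with
    id, type, x, y, z, radius, parent-id (pid = -1 marks the root)."""
    lines = []
    for nid, ntype, x, y, z, r, pid in nodes:
        lines.append(f"{nid} {ntype} {x} {y} {z} {r} {pid}")
    return "\n".join(lines)

def read_swc(text):
    """Parse SWC text back into node tuples, skipping '#' comment lines."""
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        nid, ntype, x, y, z, r, pid = line.split()
        nodes.append((int(nid), int(ntype), float(x), float(y),
                      float(z), float(r), int(pid)))
    return nodes

# Two-node toy neuron: a soma (pid = -1) with one child node.
neuron = [(1, 1, 0.0, 0.0, 0.0, 2.0, -1),
          (2, 3, 1.0, 0.0, 0.0, 1.0, 1)]
round_trip = read_swc(write_swc(neuron))
```

Because each node carries its parent's id, the whole tree topology survives the round trip.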
  • a smaller image (for example, 32×32×32) is intercepted; the subsequent nodes of the tree-like structure are predicted based on the original image data and the reconstruction reference data corresponding to the image; the next image is then intercepted according to the subsequent nodes; and this continues until the tree-like structure is completely reconstructed.
  • the predicted segmentation result can be input into the classification model to obtain the reconstruction confidence, so as to judge whether the reconstruction result obtained based on the segmentation result is reliable.
  • in this way, automatic reconstruction of the dendritic tissue can be realized, the speed and accuracy of reconstruction can be improved, and reconstruction confidence information for the segmentation result can be given, helping researchers determine whether the reconstruction is reliable and quickly locate areas of the image that may be erroneous and require manual correction.
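Routing low-confidence results to manual correction might look like the following sketch; the threshold of 0.5 and the block naming are assumptions for illustration, not values given in the text.

```python
def flag_for_review(block_confidences, threshold=0.5):
    """Split reconstructed blocks into accepted and manually-reviewed sets
    according to the classification model's reconstruction confidence."""
    accepted = [b for b, c in block_confidences if c >= threshold]
    review = [b for b, c in block_confidences if c < threshold]
    return accepted, review

# Hypothetical confidences for three reconstructed partial images.
scores = [("block_0", 0.93), ("block_1", 0.41), ("block_2", 0.88)]
accepted, review = flag_for_review(scores)
```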
  • the target segmentation result corresponding to the target image is automatically obtained based on the original image data corresponding to the target image and the reconstruction reference data, and then the complete reconstruction result of the target tree-like organization in the target image is automatically obtained based on the target segmentation result.
  • in this way, automatic reconstruction of the dendritic tissue can be realized, and the reconstruction process does not need to rely on manual labor, which is beneficial to improving the efficiency of reconstructing the dendritic tissue in the image; moreover, the reliability of the obtained reconstruction results of the dendritic tissue is higher.
  • an embodiment of the present application provides an apparatus for reconstructing tree-like tissue in an image, the apparatus including:
  • the first obtaining unit 1101 is used to obtain the target image corresponding to the target tree, the original image data corresponding to the target image, and the reconstruction reference data corresponding to the target image.
  • the reconstruction reference data is based on the local reconstruction result of the target tree in the target image.
  • the second obtaining unit 1102 is configured to call the target segmentation model, and obtain the target segmentation result corresponding to the target image based on the original image data and the reconstructed reference data, and the target segmentation result is used to indicate the target category of each pixel in the target image.
  • the target category of a pixel is used to indicate that the pixel belongs to the target tree-like organization or that it does not belong to the target tree-like organization;
  • the reconstruction unit 1103 is configured to reconstruct the target tree structure in the target image based on the target segmentation result to obtain a complete reconstruction result of the target tree structure in the target image.
  • the second obtaining unit 1102 is configured to call the target segmentation model and, based on the fused data of the original image data and the reconstruction reference data, sequentially perform a first reference number of down-sampling processes to obtain a first target feature corresponding to the target image; based on the target convolution feature corresponding to the first target feature, sequentially perform the first reference number of up-sampling processes to obtain a second target feature corresponding to the target image; and perform target convolution processing on the second target feature to obtain the target segmentation result corresponding to the target image.
  • the first reference number is three, and each down-sampling process includes one convolution process and one pooling process; the second obtaining unit 1102 is further configured to: perform first convolution processing on the fused data of the original image data and the reconstruction reference data to obtain a first convolution feature corresponding to the target image; perform first pooling processing on the first convolution feature to obtain a first pooled feature; perform second convolution processing on the first pooled feature to obtain a second convolution feature; perform second pooling processing on the second convolution feature to obtain a second pooled feature; perform third convolution processing on the second pooled feature to obtain a third convolution feature; and perform third pooling processing on the third convolution feature to obtain the first target feature corresponding to the target image.
  • each up-sampling process includes one deconvolution process and one convolution process; the second obtaining unit 1102 is further configured to: perform first deconvolution processing on the target convolution feature corresponding to the first target feature to obtain a first up-sampled feature corresponding to the target image; perform fourth convolution processing on the concatenation of the first up-sampled feature and the third convolution feature to obtain a fourth convolution feature corresponding to the target image; perform second deconvolution processing on the fourth convolution feature to obtain a second up-sampled feature; and perform fifth convolution processing on the concatenation of the second up-sampled feature and the second convolution feature to obtain a fifth convolution feature corresponding to the target image.
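The three-level encoder/decoder described above can be checked with pure shape bookkeeping. Assuming (illustratively) padded convolutions that preserve spatial size, pooling that halves each dimension, and deconvolution that doubles it, a 32×32×32 input moves through the network as follows:

```python
def segmentation_shapes(side=32, levels=3):
    """Track feature-map side lengths through `levels` down-sampling steps
    (conv + 2x pooling) and the mirrored up-sampling steps (2x deconv + conv),
    assuming padded convolutions that preserve spatial size."""
    down = [side]
    for _ in range(levels):
        side //= 2          # pooling halves each spatial dimension
        down.append(side)
    up = []
    for _ in range(levels):
        side *= 2           # deconvolution doubles each spatial dimension
        up.append(side)
    return down, up

down, up = segmentation_shapes(side=32, levels=3)
# down: 32 -> 16 -> 8 -> 4; up: 8 -> 16 -> 32
```

The concatenations in the decoder only work because each up-sampled side length matches one of the encoder side lengths, and the final feature returns to the input resolution so a per-pixel segmentation result can be produced.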
  • the apparatus further includes:
  • the third obtaining unit 1104 is configured to invoke the target classification model, and obtain target reconstruction confidence information based on the original image data and the target segmentation result.
  • the target classification model includes at least one convolutional sub-model, at least one fully-connected sub-model, and one confidence prediction sub-model connected in sequence;
  • the third obtaining unit 1104 is configured to: input the original image data and the target segmentation result into the first convolution sub-model in the target classification model for processing, obtaining the classification feature output by the first convolution sub-model; starting from the second convolution sub-model, input the classification feature output by the previous convolution sub-model into the next convolution sub-model for processing, obtaining the classification feature output by the next convolution sub-model; input the classification feature output by the last convolution sub-model into the first fully-connected sub-model for processing, obtaining the fully-connected feature output by the first fully-connected sub-model; starting from the second fully-connected sub-model, input the fully-connected feature output by the previous fully-connected sub-model into the next fully-connected sub-model for processing, obtaining the fully-connected feature output by the next fully-connected sub-model; and input the fully-connected feature output by the last fully-connected sub-model into the confidence prediction sub-model for processing, obtaining the target reconstruction confidence information.
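The sequential data flow through the sub-models can be sketched generically. The toy scalar "sub-models" below only illustrate the chaining order (concatenated input → convolution sub-models → fully-connected sub-models → confidence head); they are stand-ins, not real network layers, and the sigmoid confidence head is an assumption for illustration.

```python
import math

def classify_confidence(image_data, segmentation, conv_blocks, fc_blocks,
                        confidence_head):
    """Chain the sub-models as described: the combined image data and
    segmentation result flow through the convolution sub-models, then the
    fully-connected sub-models, then the confidence prediction head."""
    features = image_data + segmentation  # stand-in for channel concatenation
    for conv in conv_blocks:
        features = conv(features)
    for fc in fc_blocks:
        features = fc(features)
    return confidence_head(features)

# Toy sub-models on scalars; a sigmoid stands in for the confidence head.
conf = classify_confidence(
    image_data=0.5, segmentation=0.25,
    conv_blocks=[lambda x: 2 * x, lambda x: x - 1],
    fc_blocks=[lambda x: 3 * x],
    confidence_head=lambda x: 1 / (1 + math.exp(-x)))
```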
  • the first acquiring unit 1101 is further configured to acquire at least one sample image, the original sample image data corresponding to the at least one sample image, the reconstruction reference sample data corresponding to the at least one sample image, and the standard segmentation results corresponding to the at least one sample image;
  • the device also includes:
  • the training unit 1105 is configured to perform supervised training of the initial segmentation model based on the original sample image data, the reconstruction reference sample data, and the standard segmentation results corresponding to the at least one sample image, to obtain the target segmentation model.
  • the first acquiring unit 1101 is further configured to acquire at least one sample image, the original sample image data corresponding to the at least one sample image, the reconstruction reference sample data corresponding to the at least one sample image, and the standard segmentation results corresponding to the at least one sample image;
  • the training unit 1105 is further configured to perform adversarial training on the initial segmentation model and the initial classification model based on the original sample image data, the reconstruction reference sample data, and the standard segmentation results corresponding to the at least one sample image, to obtain the target segmentation model and the target classification model.
  • the training unit 1105 is further configured to: call the initial segmentation model to obtain, based on the original sample image data corresponding to a first sample image in the at least one sample image and the reconstruction reference sample data corresponding to the first sample image, the predicted segmentation result corresponding to the first sample image; call the initial classification model to obtain first reconstruction confidence information based on the original sample image data corresponding to the first sample image and the predicted segmentation result corresponding to the first sample image, and to obtain second reconstruction confidence information based on the original sample image data corresponding to the first sample image and the standard segmentation result corresponding to the first sample image; determine a first loss function based on the first reconstruction confidence information and the second reconstruction confidence information; update the parameters of the initial classification model based on the first loss function; obtain the first classification model in response to the updating process of the parameters of the initial classification model satisfying the first termination condition; and call the initial segmentation model based on the original sample image data corresponding to a second sample image in the at least one sample image and the reconstructed reference sample data corresponding to
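The alternation implied by this training scheme — update the classification model until its termination condition holds, then update the segmentation model against the resulting classifier — can be sketched as a skeleton with stub callbacks. Everything below is illustrative scaffolding, not the actual training code: the callbacks stand in for loss computation and parameter updates.

```python
def adversarial_round(train_classifier_step, classifier_converged,
                      train_segmenter_step, segmenter_converged,
                      max_steps=100):
    """One adversarial round: first update the classification model until
    its termination condition holds, then update the segmentation model
    against the (frozen) classifier until its own condition holds."""
    cls_steps = 0
    for _ in range(max_steps):
        train_classifier_step()
        cls_steps += 1
        if classifier_converged():
            break
    seg_steps = 0
    for _ in range(max_steps):
        train_segmenter_step()
        seg_steps += 1
        if segmenter_converged():
            break
    return cls_steps, seg_steps

# Toy stubs that "converge" after 3 and 2 update steps respectively.
state = {"cls": 0, "seg": 0}
cls_steps, seg_steps = adversarial_round(
    lambda: state.__setitem__("cls", state["cls"] + 1),
    lambda: state["cls"] >= 3,
    lambda: state.__setitem__("seg", state["seg"] + 1),
    lambda: state["seg"] >= 2)
```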
  • the first obtaining unit 1101 is further configured to: in response to the complete reconstruction result of the target tree-like organization in the target image not satisfying the reconstruction termination condition, obtain, based on that complete reconstruction result, the next partial image corresponding to the target tree-like structure in the initial image, and obtain the complete reconstruction result of the target tree-like structure in the next partial image; and in response to the complete reconstruction result of the target tree-like structure in the next partial image satisfying the reconstruction termination condition, obtain the complete reconstruction result of the target tree-like structure in the initial image based on the obtained complete reconstruction results of the target tree-like structure in each partial image.
  • the target dendritic organization is a target neuron
  • the target image is obtained from a three-dimensional image of the brain containing the target neuron.
  • the target category of any pixel is used to indicate that any pixel belongs to the target neuron or that any pixel does not belong to the target neuron;
  • the reconstruction unit 1103 is configured to: determine, based on the target segmentation result, the target pixels belonging to the target neuron among the pixels in the target image; mark in the target image, based on the target pixels, the neuron nodes of the target neuron and the connection relationships between those nodes to obtain the target labeling result; and determine, based on the target labeling result, the complete reconstruction result of the target neuron in the target image.
  • the target segmentation result corresponding to the target image is automatically obtained based on the original image data corresponding to the target image and the reconstruction reference data, and then the complete reconstruction result of the target tree-like organization in the target image is automatically obtained based on the target segmentation result.
  • in this way, automatic reconstruction of the dendritic tissue can be realized, and the reconstruction process does not need to rely on manual labor, which is beneficial to improving the efficiency of reconstructing the dendritic tissue in the image; moreover, the reliability of the obtained reconstruction results of the dendritic tissue is higher.
  • FIG. 13 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • the server may vary greatly in configuration or performance, and may include one or more processors (Central Processing Unit, CPU) 1301 and one or more memories 1302, where at least one piece of program code is stored in the one or more memories 1302 and is loaded and executed by the one or more processors 1301 to implement the methods for reconstructing dendritic tissue in images provided by the above method embodiments.
  • the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server may also include other components for implementing device functions, which will not be described here.
  • FIG. 14 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal is: a smart phone, a tablet computer, a notebook computer or a desktop computer.
  • a terminal may also be called user equipment, portable terminal, laptop terminal, desktop terminal, etc. by other names.
  • the terminal includes: a processor 1401 and a memory 1402 .
  • the processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1401 can be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1401 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1401 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • AI Artificial Intelligence, artificial intelligence
  • Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high-speed random access memory as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, and the at least one instruction is executed by the processor 1401 to implement the method for reconstructing dendritic tissue in an image provided by the method embodiments of the present application.
  • the terminal may optionally further include: a peripheral device interface 1403 and at least one peripheral device.
  • the processor 1401, the memory 1402 and the peripheral device interface 1403 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1403 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1404 , a display screen 1405 , a camera assembly 1406 , an audio circuit 1407 , a positioning assembly 1408 and a power supply 1409 .
  • the peripheral device interface 1403 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402 .
  • in some embodiments, the processor 1401, the memory 1402, and the peripheral device interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1404 communicates with communication networks and other communication devices via electromagnetic signals.
  • the radio frequency circuit 1404 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1404 may communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 1404 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in this application.
  • the display screen 1405 is used for displaying UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 1405 also has the ability to acquire touch signals on or above the surface of the display screen 1405 .
  • the touch signal may be input to the processor 1401 as a control signal for processing.
  • the display screen 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • there may be one display screen 1405, arranged on the front panel of the terminal; in other embodiments, there may be at least two display screens 1405, respectively arranged on different surfaces of the terminal or in a folded design; in still other embodiments, the display screen 1405 may be a flexible display screen disposed on a curved or folding surface of the terminal. The display screen 1405 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1405 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • the camera assembly 1406 is used to capture images or video.
  • the camera assembly 1406 includes a front camera and a rear camera.
  • the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background-blur function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions through fusion of the main camera and the wide-angle camera, or other fused shooting functions.
  • the camera assembly 1406 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 1407 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1401 for processing, or to the radio frequency circuit 1404 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1401 or the radio frequency circuit 1404 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • the speaker When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for distance measurement and other purposes.
  • the audio circuit 1407 may also include a headphone jack.
  • the positioning component 1408 is used to locate the current geographic location of the terminal to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 1408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 1409 is used to power various components in the terminal.
  • the power source 1409 may be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal further includes one or more sensors 1410 .
  • the one or more sensors 1410 include, but are not limited to, an acceleration sensor 1411 , a gyro sensor 1412 , a pressure sensor 1413 , a fingerprint sensor 1414 , an optical sensor 1415 , and a proximity sensor 1416 .
  • the acceleration sensor 1411 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal. For example, the acceleration sensor 1411 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 1401 can control the display screen 1405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411 .
  • the acceleration sensor 1411 can also be used for game or user movement data collection.
  • the gyroscope sensor 1412 can detect the body direction and rotation angle of the terminal, and the gyroscope sensor 1412 can cooperate with the acceleration sensor 1411 to collect 3D actions of the user on the terminal.
  • the processor 1401 can implement the following functions according to the data collected by the gyro sensor 1412 : motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1413 may be disposed on the side frame of the terminal and/or the lower layer of the display screen 1405 .
  • the processor 1401 performs left and right hand identification or shortcut operations according to the holding signal collected by the pressure sensor 1413 .
  • the processor 1401 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1405.
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 1414 is used to collect the user's fingerprint, and the processor 1401, or the fingerprint sensor 1414 itself, identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1414 may be disposed on the front, back or side of the terminal. When a physical button or a manufacturer's logo is provided on the terminal, the fingerprint sensor 1414 can be integrated with the physical button or the manufacturer's logo.
  • Optical sensor 1415 is used to collect ambient light intensity.
  • the processor 1401 may control the display brightness of the display screen 1405 according to the ambient light intensity collected by the optical sensor 1415 . Specifically, when the ambient light intensity is high, the display brightness of the display screen 1405 is increased; when the ambient light intensity is low, the display brightness of the display screen 1405 is decreased.
  • the processor 1401 may also dynamically adjust the shooting parameters of the camera assembly 1406 according to the ambient light intensity collected by the optical sensor 1415 .
  • a proximity sensor 1416 also called a distance sensor, is usually provided on the front panel of the terminal.
  • the proximity sensor 1416 is used to collect the distance between the user and the front of the terminal.
  • when the proximity sensor 1416 detects that the distance between the user and the front of the terminal gradually decreases, the processor 1401 controls the display screen 1405 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1416 detects that the distance between the user and the front of the terminal gradually increases, the processor 1401 controls the display screen 1405 to switch from the off-screen state to the bright-screen state.
  • FIG. 14 does not constitute a limitation on the terminal, and may include more or less components than the one shown, or combine some components, or adopt different component arrangements.
  • a computer device including a processor and a memory having at least one piece of program code stored in the memory.
  • the at least one piece of program code is loaded and executed by one or more processors to implement any of the above-mentioned methods for reconstructing a tree-like organization in an image.
  • a computer-readable storage medium is also provided, in which at least one piece of program code is stored; the at least one piece of program code is loaded and executed by the processor of a computer device to implement any of the above-mentioned methods for reconstructing a tree-like organization in an image.
  • the above-mentioned computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any of the above-mentioned methods for reconstructing a tree-like organization in an image.
  • references herein to "a plurality” means two or more.
  • "and/or" describes an association relationship between associated objects and means that three relationships can exist; for example, "A and/or B" can mean: A exists alone, A and B exist at the same time, or B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.

Abstract

A method, device, and storage medium for reconstructing a tree-like structure in an image. The method includes: acquiring a target image corresponding to a target tree-like structure, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image, the reconstruction reference data being determined based on a partial reconstruction result (201); invoking a target segmentation model to acquire, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image, the target segmentation result indicating the target class of each pixel (202); and reconstructing the target tree-like structure in the target image based on the target segmentation result to obtain a complete reconstruction result of the target tree-like structure in the target image (203). This process enables automatic reconstruction of tree-like structures without relying on manual work, improves the efficiency of reconstructing tree-like structures in images, and yields reconstruction results of high reliability.

Description

Method, device, and storage medium for reconstructing a tree-like structure in an image

This application claims priority to Chinese patent application No. 2020112389942, filed with the China National Intellectual Property Administration on November 9, 2020 and entitled "Method, device, and storage medium for reconstructing a tree-like structure in an image", the entire contents of which are incorporated herein by reference.

Technical Field

Embodiments of this application relate to the field of computer technology, and in particular to a method, device, and storage medium for reconstructing a tree-like structure in an image.

Background

A tree-like structure is a structure in an organism that has a tree shape, for example, a neuron or a blood vessel in the human body. Reconstructing a tree-like structure in an image means marking the tree-like structure in an image that contains it, so as to obtain a reconstruction result of the tree-like structure. Such reconstruction can provide key data for the realization of artificial intelligence.

In the related art, annotators reconstruct tree-like structures in images manually; the reconstruction efficiency is low, and the reliability of the resulting reconstruction is poor.
Summary

Embodiments of this application provide a method, device, and storage medium for reconstructing a tree-like structure in an image, which can improve the efficiency of such reconstruction.

In one aspect, an embodiment of this application provides a method for reconstructing a tree-like structure in an image, the method including:

acquiring a target image corresponding to a target tree-like structure, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image, the reconstruction reference data being determined based on a partial reconstruction result of the target tree-like structure in the target image;

invoking a target segmentation model to acquire, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image, the target segmentation result indicating a target class of each pixel in the target image, the target class of any pixel indicating that the pixel belongs, or does not belong, to the target tree-like structure; and

reconstructing the target tree-like structure in the target image based on the target segmentation result to obtain a complete reconstruction result of the target tree-like structure in the target image.
In another aspect, an apparatus for reconstructing a tree-like structure in an image is provided, the apparatus including:

a first acquisition unit, configured to acquire a target image corresponding to a target tree-like structure, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image, the reconstruction reference data being determined based on a partial reconstruction result of the target tree-like structure in the target image;

a second acquisition unit, configured to invoke a target segmentation model to acquire, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image, the target segmentation result indicating a target class of each pixel in the target image, the target class of any pixel indicating that the pixel belongs, or does not belong, to the target tree-like structure; and

a reconstruction unit, configured to reconstruct the target tree-like structure in the target image based on the target segmentation result to obtain a complete reconstruction result of the target tree-like structure in the target image.
In another aspect, a computer device is provided, including a processor and a memory storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to implement any of the above methods for reconstructing a tree-like structure in an image.

In another aspect, a computer-readable storage medium is provided, storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement any of the above methods for reconstructing a tree-like structure in an image.

In another aspect, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs any of the above methods for reconstructing a tree-like structure in an image.

The technical solutions provided in the embodiments of this application bring at least the following beneficial effects:

In the embodiments of this application, a target segmentation result corresponding to the target image is first acquired automatically based on the original image data and the reconstruction reference data corresponding to the target image, and a complete reconstruction result of the target tree-like structure in the target image is then obtained automatically based on the target segmentation result. This process enables automatic reconstruction of tree-like structures without relying on manual work, improves the efficiency of reconstructing tree-like structures in images, and yields highly reliable reconstruction results.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an implementation environment of a method for reconstructing a tree-like structure in an image according to an embodiment of this application;

FIG. 2 is a flowchart of a method for reconstructing a tree-like structure in an image according to an embodiment of this application;

FIG. 3 is a schematic diagram of an image containing neurons according to an embodiment of this application;

FIG. 4 is a schematic diagram of a process of acquiring a target segmentation result corresponding to a target image according to an embodiment of this application;

FIG. 5 is a flowchart of a process of acquiring target reconstruction confidence information according to an embodiment of this application;

FIG. 6 is a schematic structural diagram of a target classification model according to an embodiment of this application;

FIG. 7 is a schematic diagram of a process of reconstructing a tree-like structure in an image according to an embodiment of this application;

FIG. 8 is a schematic diagram of target images with different marking results according to an embodiment of this application;

FIG. 9 is a schematic diagram of an SWC file storing the complete reconstruction result of a neuron according to an embodiment of this application;

FIG. 10 is a schematic diagram of three-dimensional point cloud data according to an embodiment of this application;

FIG. 11 is a schematic diagram of an apparatus for reconstructing a tree-like structure in an image according to an embodiment of this application;

FIG. 12 is a schematic diagram of another apparatus for reconstructing a tree-like structure in an image according to an embodiment of this application;

FIG. 13 is a schematic structural diagram of a server according to an embodiment of this application;

FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the drawings.

An embodiment of this application provides a method for reconstructing a tree-like structure in an image. Referring to FIG. 1, which shows a schematic diagram of the implementation environment of this method, the implementation environment includes a terminal 11 and a server 12.

The method provided in the embodiments of this application may be executed by the terminal 11 or by the server 12, which is not limited here. For example, when the terminal 11 executes the method, after obtaining the complete reconstruction result of the target tree-like structure in the target image, the terminal 11 can display that result, and can also send it to the server 12 for storage.

For example, when the server 12 executes the method, after obtaining the complete reconstruction result of the target tree-like structure in the target image, the server 12 can send that result to the terminal 11 for display.

In one possible implementation, the terminal 11 is, but is not limited to, a smartphone, a tablet computer, a laptop, a desktop computer, a smart speaker, a smart watch, or the like. The server 12 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. The terminal 11 and the server 12 are connected directly or indirectly through wired or wireless communication, which is not limited in this application.

Those skilled in the art should understand that the above terminal 11 and server 12 are only examples; other existing or future terminals or servers applicable to this application also fall within the scope of protection of this application and are incorporated herein by reference.
Based on the implementation environment shown in FIG. 1, an embodiment of this application provides a method for reconstructing a tree-like structure in an image, taking its application to the server 12 as an example. As shown in FIG. 2, the method includes the following steps 201 to 203.

In step 201, a target image corresponding to a target tree-like structure, original image data corresponding to the target image, and reconstruction reference data corresponding to the target image are acquired, the reconstruction reference data being determined based on a partial reconstruction result of the target tree-like structure in the target image.

The target tree-like structure is any tree-like structure to be reconstructed. A tree-like structure is a structure in an organism that has a tree shape; its type is not limited in the embodiments of this application. For example, the structure to be reconstructed is a neuron in the human body, or a blood vessel in the human body. A neuron is the basic structural and functional unit of the nervous system. When the structure to be reconstructed is a neuron, the reconstruction is neuron reconstruction, which is one of the keys to building brain-science big data and understanding human intelligence and emotion.

The target image corresponding to the target tree-like structure is an image that contains the complete or partial target tree-like structure, where the contained structure has not yet been completely reconstructed. Through the method for reconstructing a tree-like structure in an image, the complete or partial target tree-like structure contained in the target image can be completely reconstructed, obtaining a complete reconstruction result of the target tree-like structure in the target image.

In one possible implementation, the target image is obtained from an initial image containing the complete target tree-like structure; the initial image may be a two-dimensional image or a three-dimensional image, which is not limited here. In one embodiment, the target image may be obtained from an initial image containing part of the target tree-like structure. In an exemplary embodiment, when the target tree-like structure is a target neuron, the initial image containing the target neuron is a three-dimensional brain image; that is, the target image is obtained from a three-dimensional brain image containing the target neuron. In an exemplary embodiment, when the target tree-like structure is a target blood vessel, the initial image containing the target blood vessel is a three-dimensional blood-vessel image; that is, the target image is obtained from a three-dimensional blood-vessel image containing the target blood vessel.
In one possible implementation, acquiring the target image corresponding to the target tree-like structure includes the following steps 2011 to 2013.

Step 2011: Acquire an initial reconstruction result of the target tree-like structure in the initial image.

The initial reconstruction result of the target tree-like structure in the initial image is the result obtained after a preliminary reconstruction of the target tree-like structure in the initial image. Its form is not limited in the embodiments of this application. For example, it is an image marked with initial reconstruction nodes and the connections between them, or a file including data related to the initial reconstruction nodes. The data related to any initial reconstruction node includes, but is not limited to, its position data in the initial image and its association data with other initial reconstruction nodes, where the association data indicates the connections between that node and other initial reconstruction nodes.

An initial reconstruction node is a reconstruction node marked after a preliminary reconstruction of the target tree-like structure in the initial image. Note that one initial reconstruction node corresponds to one pixel in the initial image that belongs to the target tree-like structure.

In an exemplary embodiment, the initial reconstruction result is the result obtained after a preliminary reconstruction of the target tree-like structure starting from a starting pixel in the initial image. The position of the starting pixel is not limited here. For example, assuming the initial image is a three-dimensional image in a three-dimensional coordinate system with one of its corners at the origin, the starting pixel is the pixel with three-dimensional coordinates (0, 0, 0).

In one possible implementation, the initial reconstruction result is acquired in one of at least the following three ways.

Way 1: Directly extract the initial reconstruction result of the target tree-like structure in the initial image.

Way 1 applies when the initial reconstruction result has been acquired and stored in advance.

Way 2: Acquire an initial reconstruction result obtained through manual annotation.

Way 2 applies when the initial reconstruction result has not been acquired and stored in advance. In that case, the initial reconstruction nodes of the target tree-like structure and the connections between them need to be marked manually in the initial image to obtain the initial reconstruction result.

For example, the manual marking proceeds as follows: k pixels (k being an integer greater than 1) belonging to the target tree-like structure are determined successively in the initial image, those k pixels are marked as initial reconstruction nodes of the target tree-like structure, and the connections between the marked nodes are established according to the overall course of the target tree-like structure. In an exemplary embodiment, when determining the k pixels, each pixel is judged in turn, starting from the starting pixel of the initial image, as to whether it belongs to the target tree-like structure.

Way 3: Acquire an initial reconstruction result of the target tree-like structure in the initial image provided by a third party.

Way 3 applies when the third party, i.e., the service provider of initial reconstruction results, stores the initial reconstruction result of the target tree-like structure in the initial image.
Step 2012: Based on the initial reconstruction result of the target tree-like structure in the initial image, determine the pixels in the initial image that belong to the target tree-like structure, and among them determine the pixel that satisfies a condition.

From the initial reconstruction result, the initial reconstruction nodes of the target tree-like structure already reconstructed in the initial image can be determined; each initial reconstruction node corresponds to one pixel in the initial image that belongs to the target tree-like structure. The pixels corresponding to those nodes are taken as the pixels belonging to the target tree-like structure, and the pixel satisfying the condition is then determined among them.

The condition is set empirically or adjusted flexibly according to the application scenario, which is not limited here. For example, when the initial reconstruction result is obtained by a preliminary reconstruction starting from the starting pixel of the initial image, the pixel satisfying the condition is the pixel, among those belonging to the target tree-like structure, that is farthest from the starting pixel of the initial image.

Step 2013: With the pixel satisfying the condition as the center, crop an image of a target size from the initial image as the target image.

The target size is set empirically or adjusted according to available computing resources, which is not limited here. For example, when the initial image is three-dimensional, the target size is 32x32x32 (in pixels). By cropping an image of the target size from the initial image centered on the pixel satisfying the condition, the target image is obtained.
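The patch extraction of steps 2012 and 2013 can be sketched in a few lines. This is a hypothetical illustration rather than code from the patent: `crop_patch`, its toy 6x6x6 volume, and the edge length (shrunk from the 32 voxels mentioned above to 4 so the example stays small) are all assumptions. The window is clamped so that a center pixel near the image border still yields a full-size patch.

```python
# Hypothetical sketch of steps 2012-2013: crop a cube of the target size
# around a chosen center voxel, clamping the window so it stays inside the
# initial image. Names and sizes here are illustrative, not from the patent.

def crop_patch(volume, center, size=4):
    """volume: nested list indexed [z][y][x]; center: (z, y, x) voxel."""
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    half = size // 2
    # clamp each window start so the full cube fits inside the volume
    z0, y0, x0 = (min(max(c - half, 0), d - size) for c, d in zip(center, dims))
    return [[[volume[z][y][x]
              for x in range(x0, x0 + size)]
             for y in range(y0, y0 + size)]
            for z in range(z0, z0 + size)]

# tiny 6x6x6 volume whose voxel value encodes its own coordinates
vol = [[[100 * z + 10 * y + x for x in range(6)] for y in range(6)]
       for z in range(6)]
patch = crop_patch(vol, center=(5, 5, 5))  # center at a corner -> window clamped
```

With a 32-voxel edge length and a real image volume, the same clamping keeps the 32x32x32 target image inside the initial image when the qualifying pixel lies near a border.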
Note that steps 2011 to 2013 are only one exemplary way of acquiring the target image corresponding to the target tree-like structure, and the embodiments of this application are not limited to it. In an exemplary embodiment, the target image is acquired by determining any pixel belonging to the target tree-like structure in the initial image and cropping an image of the target size centered on that pixel. In one embodiment, a preset designated pixel belonging to the target tree-like structure is determined in the initial image, and an image of the target size centered on that designated pixel is cropped as the target image.

In an exemplary embodiment, the target image is a local image within the initial image. Taking neurons as the tree-like structure, the three-dimensional image corresponding to a neuron occupies a large space, and the neuron itself is very sparse in the image (as shown in FIG. 3); the three-dimensional image contains much redundant information, so reconstructing directly over the whole image tends to cause low reconstruction efficiency and poor reconstruction precision. The embodiments of this application therefore reconstruct the tree-like structure over a local image, which improves reconstruction efficiency and precision.
After the target image corresponding to the target tree-like structure is acquired, the original image data and the reconstruction reference data corresponding to the target image are further acquired. The acquisition of each is described below in turn.

The original image data corresponding to the target image characterizes the original image features of the target image. In an exemplary embodiment, it is acquired by obtaining the gray feature of each pixel in the target image and determining the original image data based on those gray features. For example, the gray feature of any pixel is determined from that pixel's gray value in the target image: the gray value itself is used as the gray feature, or the gray value is normalized and the normalized value is used as the gray feature.

In an exemplary embodiment, when the target image is three-dimensional, the original image data corresponding to it is three-dimensional data including the gray feature of each pixel in the target image.

The reconstruction reference data corresponding to the target image provides a data reference for the process of reconstructing the target tree-like structure in the target image. It is determined based on the partial reconstruction result of the target tree-like structure in the target image; that is, before the reconstruction reference data is acquired, the partial reconstruction result must be acquired first.

The partial reconstruction result of the target tree-like structure in the target image can provide a data reference for the process of obtaining the complete reconstruction result. In an exemplary embodiment, when the target image is determined based on the initial reconstruction result of the target tree-like structure in the initial image, the partial reconstruction result is acquired as follows: in the initial reconstruction result in the initial image, determine the initial reconstruction result corresponding to the target image; based on it, determine the partial reconstruction result of the target tree-like structure in the target image.

In one possible implementation, the partial reconstruction result is determined by directly taking the initial reconstruction result corresponding to the target image as the partial reconstruction result of the target tree-like structure in the target image.

In another possible implementation, an incremental reconstruction result of the target tree-like structure in the target image is acquired, and the combination of the initial reconstruction result corresponding to the target image and the incremental reconstruction result is taken as the partial reconstruction result. In an exemplary embodiment, when the initial reconstruction result corresponding to the target image cannot provide sufficient data reference for the automatic reconstruction process, the incremental reconstruction result is further acquired, and the combined result serves as the partial reconstruction result providing the data reference. In an exemplary embodiment, the incremental reconstruction result is obtained by additional manual marking on top of the initial reconstruction result corresponding to the target image.
In an exemplary embodiment, the reconstruction reference data corresponding to the target image includes a reconstruction reference feature for each pixel in the target image; the reconstruction reference feature of any pixel indicates whether that pixel is a reconstruction reference pixel determined, based on the partial reconstruction result, as belonging to the target tree-like structure.

In one possible implementation, the reconstruction reference data is acquired as follows: based on the partial reconstruction result, determine the reconstruction reference pixels belonging to the target tree-like structure among the pixels of the target image; binarize the reconstruction reference pixels and the other pixels to obtain the reconstruction reference feature of each pixel; and take the data comprising the reconstruction reference feature of each pixel as the reconstruction reference data corresponding to the target image.

Note that the partial reconstruction result can accurately establish that the reconstruction reference pixels belong to the target tree-like structure, but cannot accurately establish whether the other pixels belong to it; the other pixels may or may not actually belong to the target tree-like structure. In the reconstruction reference data they are provisionally treated as not belonging, and their class may change during the subsequent reconstruction.

In an exemplary embodiment, the binarization assigns a first value to the reconstruction reference pixels determined from the partial reconstruction result and a second value to the other pixels. That is, the reconstruction reference feature of a reconstruction reference pixel in the target image is the first value, and that of every other pixel is the second value. The first and second values are set empirically or adjusted according to the application scenario, which is not limited here; for example, the first value is 1 and the second value is 0.

After the data comprising the reconstruction reference feature of each pixel is taken as the reconstruction reference data, the reconstruction reference data directly shows which pixel or pixels of the target image are reconstruction reference pixels determined, based on the partial reconstruction result, as belonging to the target tree-like structure, and thereby provides a data reference for the subsequent reconstruction of the target tree-like structure.

In an exemplary embodiment, when the target image is three-dimensional, the reconstruction reference data is three-dimensional data including the reconstruction reference feature of each pixel.

For example, taking neurons as the tree-like structure, as shown in FIG. 3, different neurons may be very close to each other. Reconstructing directly from the initial image data may reconstruct several nearby neurons as one neuron, degrading the reconstruction precision. By acquiring reconstruction reference data for one specific neuron, the embodiments of this application provide a strong data reference for accurately reconstructing that neuron in the target image, so that the reconstruction can target that one neuron precisely.
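The construction of the two-channel model input described above can be sketched as follows. `build_input` and the flat-list layout are illustrative assumptions standing in for real 3-D arrays; the binarization uses 1 for reference pixels and 0 for all others, matching the first and second values above.

```python
# Minimal sketch of building the two-channel input: one channel holds
# normalized gray features, the other the binarized reconstruction
# reference (1 for reference pixels known to belong to the tree-like
# structure, 0 otherwise). Flat lists stand in for the real 3-D arrays.

def build_input(gray_values, reference_indices):
    lo, hi = min(gray_values), max(gray_values)
    span = (hi - lo) or 1
    gray_channel = [(g - lo) / span for g in gray_values]  # normalize to [0, 1]
    ref_channel = [1 if i in reference_indices else 0
                   for i in range(len(gray_values))]
    return gray_channel, ref_channel

# four pixels, of which pixels 1 and 2 are reconstruction reference pixels
gray_channel, ref_channel = build_input([0, 128, 255, 64], {1, 2})
```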
In step 202, a target segmentation model is invoked to acquire, based on the original image data and the reconstruction reference data, a target segmentation result corresponding to the target image; the target segmentation result indicates the target class of each pixel in the target image.

The target class of any pixel indicates that the pixel belongs, or does not belong, to the target tree-like structure.

After the original image data and the reconstruction reference data corresponding to the target image are acquired, they are input into the target segmentation model for segmentation, yielding the target segmentation result corresponding to the target image. The target segmentation result indicates the target class of each pixel in the target image, and the target class of any pixel indicates whether that pixel belongs to the target tree-like structure. Note that the target class of any pixel here is the actual class of that pixel obtained by invoking the target segmentation model on the original image data and the reconstruction reference data corresponding to the target image.

Note that for the reconstruction reference pixels indicated by the reconstruction reference data as belonging to the target tree-like structure, the target class indicates that they belong to the target tree-like structure; for any other pixel, the target class may indicate either that it belongs to the target tree-like structure or that it does not.

Since the target segmentation result can indicate whether each pixel in the target image actually belongs to the target tree-like structure, it provides direct data support for the automatic reconstruction of the target tree-like structure.

In an exemplary embodiment, the input of the target segmentation model consists of the original image data and the reconstruction reference data, which can be regarded as two-channel image data: one channel is the original-image-feature channel, whose data is the original image data, and the other is the reconstruction-reference-feature channel, whose data is the reconstruction reference data. Such two-channel input provides the target segmentation model with more comprehensive initial information, so that it outputs a more accurate target segmentation result.

For example, when the target image is a three-dimensional image, the input of the target segmentation model is two-channel three-dimensional image data, the three dimensions all being spatial coordinates; one channel is the three-dimensional original-image-feature channel and the other is the three-dimensional reconstruction-reference-feature channel.

The model structure of the target segmentation model is not limited in the embodiments of this application, as long as it can segment each pixel of the target image based on the original image data and the reconstruction reference data. For example, when the target image is three-dimensional, the model structure of the target segmentation model is a 3D-UNet (three-dimensional U-shaped network) structure.

In one possible implementation, invoking the target segmentation model to acquire the target segmentation result based on the original image data and the reconstruction reference data includes the following steps 2021 to 2023.
Step 2021: Invoke the target segmentation model and, based on the fused data of the original image data and the reconstruction reference data, perform a first reference number of successive downsampling operations to obtain a first target feature corresponding to the target image.

The fused data is the data obtained by fusing the original image data and the reconstruction reference data; the fusion method is not limited in the embodiments of this application. For example, the target segmentation model includes a data fusion layer, and the fusion is performed in that layer.

After the fused data is obtained, the first reference number of downsampling operations are performed successively on it to obtain the first target feature corresponding to the target image, which is a deep feature obtained by downsampling the fused data.

The first reference number is set empirically or adjusted according to the application scenario, which is not limited here; for example, it is three, or four. In an exemplary embodiment, the successive downsampling operations are realized by suitably designing the model structure of the target segmentation model.

In an exemplary embodiment, each downsampling operation includes one convolution operation and one pooling operation. Taking three as the first reference number, obtaining the first target feature from the fused data includes the following steps a to c.

Step a: Perform a first convolution on the fused data of the original image data and the reconstruction reference data to obtain a first convolution feature corresponding to the target image; perform a first pooling on the first convolution feature to obtain a first pooling feature corresponding to the target image.

The implementation of the first convolution is not limited. For example, features are extracted from the fused data through two cascaded convolutional layers. In an exemplary embodiment, each convolutional layer consists of a convolution function, a BN (Batch Normalization) function, and a ReLU (Rectified Linear Unit) function. For example, when the target image is a three-dimensional image, the cascaded convolutional layers are 3D convolutional layers. The kernel size of a convolutional layer is not limited; for example, it is 3x3x3.

After the first convolution feature is obtained, the first pooling is performed on it to reduce its size. In an exemplary embodiment, the first pooling extracts features from the first convolution feature through a max pooling layer; for example, the kernel size of the max pooling layer is 2x2x2.

Step b: Perform a second convolution on the first pooling feature to obtain a second convolution feature corresponding to the target image; perform a second pooling on the second convolution feature to obtain a second pooling feature corresponding to the target image.

Step c: Perform a third convolution on the second pooling feature to obtain a third convolution feature corresponding to the target image; perform a third pooling on the third convolution feature to obtain the first target feature corresponding to the target image.

Steps b and c are implemented in the same way as step a and are not repeated here. Note that the processing parameters of the first, second, and third convolutions may be the same or different, which is not limited here; in an exemplary embodiment they differ, so that the features extracted by the different convolutions have different feature dimensions. In an exemplary embodiment, the first, second, and third poolings share the same processing parameters, each reducing the feature size by the same ratio.
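The size bookkeeping of steps a to c can be traced without any deep-learning framework. A 32-voxel side and 2x2x2 pooling follow the text; the feature widths 48 and 96 appear in the description of FIG. 4 below, while 192 for the third step is an assumption added for illustration.

```python
# Trace spatial side length and feature dimension through three
# downsampling steps: each step's convolution keeps the spatial side and
# sets the feature width, and its 2x2x2 pooling halves every spatial side.

def encoder_shapes(side=32, widths=(48, 96, 192)):
    shapes = []
    for width in widths:
        shapes.append((side, width))  # after the step's convolution
        side //= 2                    # after the step's pooling
    return shapes, side               # final side = bottleneck resolution

shapes, bottleneck_side = encoder_shapes()
```

The mirrored upsampling path of step 2022 then doubles the side at each deconvolution, which is why the first, second, and third convolution features can be concatenated with the matching upsampled features.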
Step 2022: Based on the target convolution feature corresponding to the first target feature, perform the first reference number of successive upsampling operations to obtain a second target feature corresponding to the target image.

The target convolution feature corresponding to the first target feature is the feature obtained by convolving the first target feature; this convolution is determined by the model structure of the target segmentation model and is not limited here. In an exemplary embodiment, the first target feature is convolved by extracting features through two cascaded convolutional layers.

After the target convolution feature is obtained, the first reference number of upsampling operations are performed successively to obtain the second target feature corresponding to the target image. Note that the number of upsampling operations performed here on the target convolution feature equals the number of downsampling operations performed successively in step 2021 on the fused data of the original image data and the reconstruction reference data; in some embodiments, the two numbers may also differ.

In an exemplary embodiment, each upsampling operation includes one deconvolution operation and one convolution operation. When the first reference number is three, obtaining the second target feature from the target convolution feature includes the following steps A to C.

Step A: Perform a first deconvolution on the target convolution feature corresponding to the first target feature to obtain a first upsampling feature corresponding to the target image; perform a fourth convolution on the concatenated feature of the first upsampling feature and the third convolution feature to obtain a fourth convolution feature corresponding to the target image.

The first deconvolution enlarges the size of the target convolution feature and reduces its feature dimension. Its implementation is not limited here; for example, the target convolution feature is deconvolved through a deconvolution layer.

After the first deconvolution is performed on the target convolution feature, the first upsampling feature is obtained. It has the same size and feature dimension as the third convolution feature obtained in step c of step 2021, so the two can be concatenated to obtain the concatenated feature of the first upsampling feature and the third convolution feature. In an exemplary embodiment, the concatenation is performed along the feature dimension.

After the concatenated feature of the first upsampling feature and the third convolution feature is obtained, the fourth convolution is performed on it to obtain the fourth convolution feature corresponding to the target image. In an exemplary embodiment, the fourth convolution extracts features from the concatenated feature through two cascaded convolutional layers.

Step B: Perform a second deconvolution on the fourth convolution feature to obtain a second upsampling feature corresponding to the target image; perform a fifth convolution on the concatenated feature of the second upsampling feature and the second convolution feature to obtain a fifth convolution feature corresponding to the target image.

Step C: Perform a third deconvolution on the fifth convolution feature to obtain a third upsampling feature corresponding to the target image; perform a sixth convolution on the concatenated feature of the third upsampling feature and the first convolution feature to obtain the second target feature corresponding to the target image.

Steps B and C are implemented in the same way as step A and are not repeated here. In an exemplary embodiment, the processing parameters of the first, second, and third deconvolutions differ, so that the features obtained after the different deconvolutions have different feature dimensions. In an exemplary embodiment, the processing parameters of the fourth, fifth, and sixth convolutions differ, so that the features obtained after the different convolutions have different feature dimensions.
Step 2023: Perform a target convolution on the second target feature to obtain the target segmentation result corresponding to the target image.

After the second target feature is obtained, the target convolution is performed on it to obtain the target segmentation result corresponding to the target image. This convolution is determined by the model structure of the target segmentation model and is not limited here. In an exemplary embodiment, the target convolution differs from the convolutions applied to the other features: its purpose is to obtain a target segmentation result capable of indicating the target class of each pixel in the target image.

For example, the process of invoking the target segmentation model to acquire the target segmentation result based on the original image data and the reconstruction reference data is shown in FIG. 4. After the original image data and the reconstruction reference data are input into the target segmentation model, a first convolution is performed on their fused data 401 to obtain a first convolution feature 402 corresponding to the target image; a first pooling on the first convolution feature 402 yields a first pooling feature 403; a second convolution on the first pooling feature 403 yields a second convolution feature 404; a second pooling on the second convolution feature 404 yields a second pooling feature 405; a third convolution on the second pooling feature 405 yields a third convolution feature 406; and a third pooling on the third convolution feature 406 yields the first target feature 407.

After the first target feature 407 is obtained, it is convolved to obtain the target convolution feature 408 corresponding to the first target feature. A first deconvolution on the target convolution feature 408 yields a first upsampling feature 409; a fourth convolution on the concatenated feature of the first upsampling feature 409 and the third convolution feature 406 yields a fourth convolution feature 410. A second deconvolution on the fourth convolution feature 410 yields a second upsampling feature 411; a fifth convolution on the concatenated feature of the second upsampling feature 411 and the second convolution feature 404 yields a fifth convolution feature 412. A third deconvolution on the fifth convolution feature 412 yields a third upsampling feature 413, and a sixth convolution on the concatenated feature of the third upsampling feature 413 and the first convolution feature 402 yields the second target feature 414. Finally, the target convolution on the second target feature 414 yields the target segmentation result 415 corresponding to the target image.

Note that the numbers marked on the features in FIG. 4 denote their feature dimensions. For example, the number 48 on the first convolution feature 402 means its feature dimension is 48, and the number 96 on the second convolution feature 404 means its feature dimension is 96. Note further that the number 2 on the target segmentation result 415 means the dimension of the target segmentation result is 2; that is, for each pixel of the target image, the target segmentation result includes a probability value of belonging to the target tree-like structure and a probability value of not belonging to it.
In one possible implementation, after the target segmentation result corresponding to the target image is acquired, the method further includes: invoking a target classification model to acquire target reconstruction confidence information based on the original image data and the target segmentation result. The target reconstruction confidence information indicates the reliability of the complete reconstruction result obtained based on the target segmentation result.

For example, the target reconstruction confidence information includes a probability value that the target segmentation result is a correct segmentation result and a probability value that it is an incorrect segmentation result, the two summing to 1. If the probability of being a correct segmentation result is not less than the probability of being an incorrect one, the complete reconstruction result obtained based on the target segmentation result is considered highly reliable; if it is less, the complete reconstruction result is considered to have low reliability, indicating that the complete reconstruction result of the target tree-like structure in the target image determined from the target segmentation result may be wrong and requires manual correction.

The model structure of the target classification model is not limited in the embodiments of this application, as long as the target reconstruction confidence information can be determined from the original image data and the target segmentation result. For example, the model structure of the target classification model is a CNN (Convolutional Neural Network) structure. In an exemplary embodiment, when the target image is three-dimensional, the model structure is a 3D-CNN (three-dimensional convolutional neural network) structure; for example, the target classification model is then a 3D-VGG11 (3D Visual Geometry Group 11) model. Note that the model structure of the target classification model is not limited to these.
In one possible implementation, the target classification model includes at least one convolution submodel, at least one fully connected submodel, and one confidence prediction submodel, connected in sequence. In this case, referring to FIG. 5, invoking the target classification model to acquire the target reconstruction confidence information based on the original image data and the target segmentation result includes the following steps 501 to 505.

Step 501: Input the original image data and the target segmentation result into the first convolution submodel of the target classification model for processing, to obtain the classification feature output by the first convolution submodel.

In an exemplary embodiment, the first convolution submodel includes at least one convolutional layer and one pooling layer connected in sequence; the processing consists of processing the fused data of the original image data and the target segmentation result through those layers. The number of convolutional layers in the first convolution submodel, the kernel size of each convolutional layer, and the type and kernel size of the pooling layer are all not limited. For example, the first convolution submodel includes one convolutional layer with kernel size 3x3x3, and the pooling layer is a max pooling layer with kernel size 2x2x2.

Step 502: Starting from the second convolution submodel, input the classification feature output by the previous convolution submodel into the next convolution submodel for processing, to obtain the classification feature output by the next convolution submodel.

After the classification feature output by the first convolution submodel is obtained, it is input into the second convolution submodel for processing to obtain the classification feature output by the second convolution submodel, and so on, until the classification feature output by the last convolution submodel is obtained.

Note that the number of convolution submodels in the target classification model is not limited; when there are multiple convolution submodels, the numbers of convolutional layers in different submodels may be the same or different, which is not limited here. Note further that the settings of the processing parameters of the convolutional and pooling layers are not limited; different processing parameters yield features of different dimensions.

Step 503: Input the classification feature output by the last convolution submodel into the first fully connected submodel for processing, to obtain the fully connected feature output by the first fully connected submodel.

After the classification feature output by the last convolution submodel is obtained, it serves as the input of the first fully connected submodel, which processes it to produce the fully connected feature output by the first fully connected submodel.

For example, the first fully connected submodel includes one fully connected layer, through which the classification feature output by the last convolution submodel is processed. The processing parameters of that fully connected layer are not limited and can be set empirically.

Step 504: Starting from the second fully connected submodel, input the fully connected feature output by the previous fully connected submodel into the next fully connected submodel for processing, to obtain the fully connected feature output by the next fully connected submodel.

After the fully connected feature output by the first fully connected submodel is obtained, it is input into the second fully connected submodel for processing to obtain the fully connected feature output by the second fully connected submodel, and so on, until the fully connected feature output by the last fully connected submodel is obtained.

Note that the number of fully connected submodels in the target classification model is not limited; when there are multiple, the processing parameters of their fully connected layers may be the same or different, which is not limited here.

Step 505: Input the fully connected feature output by the last fully connected submodel into the confidence prediction submodel for processing, to obtain the target reconstruction confidence information output by the confidence prediction submodel.

After the fully connected feature output by the last fully connected submodel is obtained, it serves as the input of the confidence prediction submodel, which processes it to produce the target reconstruction confidence information. The structure of the confidence prediction submodel is not limited, as long as it can output reconstruction confidence information. For example, it includes one fully connected layer, whose processing outputs the target reconstruction confidence information.

In an exemplary embodiment, the activation function used in the target classification model is the ReLU function.

In an exemplary embodiment, taking a three-dimensional target image as an example, as shown in FIG. 6, the target classification model includes five convolution submodels, two fully connected submodels, and one confidence prediction submodel connected in sequence. The numbers of convolutional layers in the five convolution submodels are 1, 1, 2, 2, and 2, respectively, and each convolution submodel includes one pooling layer. For example, all convolutional kernels are 3x3x3 and all pooling layers have kernel size 2x2x2; a pooling layer is a max pooling layer or an average pooling layer, which is not limited here. That is, the first convolution submodel 601 includes one convolutional layer and one pooling layer; the second convolution submodel 602 includes one convolutional layer and one pooling layer; the third convolution submodel 603 includes two convolutional layers and one pooling layer; the fourth convolution submodel 604 includes two convolutional layers and one pooling layer; and the fifth convolution submodel 605 includes two convolutional layers and one pooling layer.

The input of the target classification model is two-channel three-dimensional image data (i.e., the original image data and the target segmentation result). Assuming the size of the target image is 32x32x32, the size of the two-channel three-dimensional image data input into the target classification model is 32x32x32x2, and the output of the target classification model is the target reconstruction confidence information indicating the reliability of the complete reconstruction result obtained based on the target segmentation result.

After the two-channel three-dimensional image data passes through the convolutional layer of the first convolution submodel 601, 64-dimensional features are extracted at each pixel, and each spatial side is halved by the pooling layer; that is, after processing by the first convolution submodel, the output classification feature has size 16x16x16x64. Thereafter, the feature dimensions of the classification features output by the successive convolution submodels are 128, 256, 512, and 512. Finally, after two fully connected submodels with output feature dimensions of 4096 and 4096 and a confidence prediction submodel with an output feature dimension of 2, the target reconstruction confidence information is obtained.
In an exemplary embodiment, when the target classification model does not need to be invoked to acquire the target reconstruction confidence information after the target segmentation result is obtained, the target segmentation model needs to be trained before being invoked. In one possible implementation, training the target segmentation model in this case includes the following steps 1-1 and 1-2.

Step 1-1: Acquire at least one sample image, the original sample image data corresponding to each sample image, the reconstruction reference sample data corresponding to each sample image, and the standard segmentation result corresponding to each sample image.

A sample image is an image used to train the segmentation model; each sample image corresponds to one sample tree-like structure, and different sample images may correspond to the same or different sample tree-like structures, which is not limited here. The true complete reconstruction result of the sample tree-like structure in its sample image is known, and it is taken as the standard complete reconstruction result of that sample tree-like structure in that sample image.

The way sample images are acquired is not limited, as long as the standard complete reconstruction result of the corresponding sample tree-like structure in the sample image is known.

In an exemplary embodiment, a sample image is a local image within a whole image, with size 32x32x32. Once a sample image is determined, its original sample image data can be acquired directly.

The reconstruction reference sample data corresponding to any sample image provides a data reference for invoking the segmentation model to acquire the predicted segmentation result of that sample image. For example, it is determined based on a partial reconstruction result within the standard complete reconstruction result of the sample tree-like structure in that sample image.

A partial reconstruction result within the standard complete reconstruction result of the sample tree-like structure in a sample image is retained, and the reconstruction reference sample data corresponding to that sample image is determined based on the retained partial reconstruction result. The standard complete reconstruction result indicates all pixels belonging to the sample tree-like structure corresponding to the sample image; the retained partial reconstruction result indicates some of those pixels, and the retained pixels provide a data reference when the segmentation model is invoked during the reconstruction of the sample tree-like structure.

The relationship between the retained partial reconstruction result and the standard complete reconstruction result is not limited. For example, the pixels indicated by the retained partial reconstruction result are the first reference number of pixels among all pixels belonging to the sample tree-like structure, e.g., the reference number of such pixels closest to the starting pixel of the sample image. The reference number is set empirically or adjusted according to the number of pixels belonging to the sample tree-like structure, which is not limited here; for example, it is half of that number.

The way the reconstruction reference sample data corresponding to a sample image is determined from the retained partial reconstruction result follows the way the reconstruction reference data corresponding to the target image is acquired in step 201, and is not repeated here.

The standard segmentation result corresponding to any sample image indicates the standard class of each pixel in that sample image; the standard class of any pixel indicates whether that pixel actually belongs to the sample tree-like structure corresponding to that sample image. The standard segmentation result can be determined directly from the standard complete reconstruction result of the sample tree-like structure in that sample image.
Step 1-2: Perform supervised training on an initial segmentation model based on the original sample image data, the reconstruction reference sample data, and the standard segmentation results corresponding to the at least one sample image, to obtain the target segmentation model.

The initial segmentation model is the segmentation model to be trained, and the target segmentation model is the trained segmentation model. In one possible implementation, the supervised training proceeds as follows: 1. Invoke the initial segmentation model to acquire, based on the original sample image data and reconstruction reference sample data corresponding to a target sample image among the at least one sample image, the predicted segmentation result corresponding to the target sample image; the target sample image is a sample image, among the at least one sample image, used for one update of the segmentation model's parameters. 2. Determine a target loss function based on the predicted segmentation result and the standard segmentation result corresponding to the target sample image, and back-propagate the target loss function to update the parameters of the initial segmentation model, obtaining a segmentation model with updated parameters. 3. In response to a parameter-update termination condition not being satisfied, perform steps 1 and 2 based on the segmentation model with updated parameters, until the termination condition is satisfied, obtaining the target segmentation model.

Note that the target sample image used when performing steps 1 and 2 based on the updated segmentation model may be the same as or different from the target sample image used when performing them based on the initial segmentation model, which is not limited here. The number of target sample images used in each execution of steps 1 and 2 may be one or more; in an exemplary embodiment, it is the same each time.

In one possible implementation, the parameter-update termination condition being satisfied includes, but is not limited to, any of: the target loss function converging, the target loss function being smaller than a reference loss threshold, or the number of parameter updates reaching a count threshold. In an exemplary embodiment, during training, the model parameters are updated once with each mini-batch of sample images; when all sample images have participated in one round of parameter updates, one complete training pass (i.e., one epoch) is finished. In this case, the termination condition also includes the number of complete epochs reaching a specified threshold, for example, 50.

In an exemplary embodiment, the target loss function is determined from the predicted segmentation result and the standard segmentation result corresponding to the target sample image based on Formula 1.

L1 = ||G(z, I) - y||^2  (Formula 1)

where L1 denotes the target loss function; z the reconstruction reference sample data corresponding to the target sample image; I the original sample image data corresponding to the target sample image; G(z, I) the predicted segmentation result corresponding to the target sample image; and y the standard segmentation result corresponding to the target sample image.

Note that the target loss function may also be determined in other ways, which is not limited in the embodiments of this application.
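Formula 1 is a plain squared L2 distance and can be checked numerically; the flattened three-element outputs below are toy values, not real segmentation maps.

```python
# Numeric sketch of Formula 1: L1 = ||G(z, I) - y||^2, the squared L2
# distance between a predicted segmentation G(z, I) and the standard
# segmentation y, both flattened to plain lists here.

def segmentation_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# (0.1)^2 + (0.1)^2 + (0.2)^2 = 0.06
loss = segmentation_loss([0.9, 0.1, 0.8], [1.0, 0.0, 1.0])
```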
In an exemplary embodiment, when the target classification model is also invoked to acquire the target reconstruction confidence information after the target segmentation result is obtained, a trained target classification model is needed in addition to the trained target segmentation model. Note that the target segmentation model and the target classification model may be trained jointly through adversarial training or trained separately, which is not limited in the embodiments of this application.

In one possible implementation, when the target segmentation model and the target classification model are obtained jointly through adversarial training, the training includes the following steps 2-1 and 2-2.

Step 2-1: Acquire at least one sample image, the original sample image data corresponding to each sample image, the reconstruction reference sample data corresponding to each sample image, and the standard segmentation result corresponding to each sample image.

Step 2-1 is implemented in the same way as step 1-1 above and is not repeated here.

Step 2-2: Perform adversarial training on an initial segmentation model and an initial classification model based on the original sample image data, the reconstruction reference sample data, and the standard segmentation results corresponding to the at least one sample image, to obtain the target segmentation model and the target classification model.

The initial segmentation model is the segmentation model to be trained, and the initial classification model is the classification model to be trained; the target segmentation model and the target classification model are the trained versions. In one possible implementation, step 2-2 includes the following steps 2-2a to 2-2g.
2-2a: Invoke the initial segmentation model to acquire, based on the original sample image data and the reconstruction reference sample data corresponding to a first sample image among the at least one sample image, the predicted segmentation result corresponding to the first sample image.

The first sample image is a sample image, among the at least one sample image, used for one update of the classification model's parameters within one round of adversarial training; there may be one or more first sample images, which is not limited here. Step 2-2a is implemented in the same way as the process of invoking the target segmentation model to acquire the target segmentation result in step 202, and is not repeated here.

2-2b: Invoke the initial classification model to acquire first reconstruction confidence information based on the original sample image data and the predicted segmentation result corresponding to the first sample image, and to acquire second reconstruction confidence information based on the original sample image data and the standard segmentation result corresponding to the first sample image.

The first reconstruction confidence information is information predicted by the initial classification model, based on the original sample image data and the predicted segmentation result of the first sample image, indicating the reliability of the reconstruction result obtained from the predicted segmentation result; the second reconstruction confidence information is information predicted by the initial classification model, based on the original sample image data and the standard segmentation result of the first sample image, indicating the reliability of the reconstruction result obtained from the standard segmentation result.

For example, the first reconstruction confidence information includes a probability value that the predicted segmentation result of the first sample image is a correct segmentation result and a probability value that it is an incorrect one; the second reconstruction confidence information includes the analogous probability values for the standard segmentation result of the first sample image.

Step 2-2b is implemented in the same way as the process of invoking the target classification model to acquire the target confidence information in step 202, and is not repeated here.

2-2c: Determine a first loss function based on the first reconstruction confidence information and the second reconstruction confidence information; update the parameters of the initial classification model based on the first loss function; and in response to the update process of the initial classification model's parameters satisfying a first termination condition, obtain a first classification model.

For example, the first loss function is determined from the first and second reconstruction confidence information based on Formula 2.

L2 = E_y[log D(y, I)] + E_z[log(1 - D(G(z, I)))]  (Formula 2)

where L2 denotes the first loss function; z the reconstruction reference sample data corresponding to the first sample image; I the original sample image data corresponding to the first sample image; G(z, I) the predicted segmentation result corresponding to the first sample image; D(G(z, I)) the probability value, included in the first reconstruction confidence information, that the predicted segmentation result of the first sample image is a correct segmentation result; y the standard segmentation result corresponding to the first sample image; and D(y, I) the probability value, included in the second reconstruction confidence information, that the standard segmentation result of the first sample image is a correct segmentation result.

In updating the parameters of the initial classification model based on the first loss function, the update objective is to maximize the first loss function; that is, to make the classification model predict the probability that the standard segmentation result of the first sample image is correct as close to 1 as possible, and the probability that the predicted segmentation result of the first sample image is correct as close to 0 as possible.

Note that while the parameters of the initial classification model are updated based on the first loss function, the parameters of the initial segmentation model remain unchanged.

After each update of the initial classification model's parameters based on the first loss function, it is judged whether the update process satisfies the first termination condition. If it does, the first classification model is obtained and subsequent step 2-2d is performed. If it does not, the parameters of the initial classification model are updated again based on steps 2-2a to 2-2c until the update process satisfies the first termination condition, after which step 2-2d is performed.

In an exemplary embodiment, the update process satisfying the first termination condition means that the number of parameter updates of the initial classification model reaches a first threshold, which is set empirically or adjusted according to the application scenario and is not limited here.

When the update process of the initial classification model's parameters satisfies the first termination condition, the training of the classification model within one round of adversarial training is complete. A complete round of adversarial training includes training the segmentation model as well as training the classification model; training the segmentation model within one complete round is realized through the following steps 2-2d to 2-2f.
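Formula 2 can be evaluated on toy probabilities. The sketch below treats D(y, I) and D(G(z, I)) as scalar probabilities and averages each expectation over a mini-batch; the batching and the function name are assumptions added for illustration.

```python
# Numeric sketch of Formula 2:
# L2 = E_y[log D(y, I)] + E_z[log(1 - D(G(z, I)))].
# d_real holds probabilities that standard segmentations are judged
# correct, d_fake the same for predicted segmentations; maximizing L2
# pushes d_real toward 1 and d_fake toward 0 (L2 approaches its maximum, 0).
import math

def classification_loss(d_real, d_fake):
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

loss = classification_loss(d_real=[0.9], d_fake=[0.1])
```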
2-2d: Invoke the initial segmentation model to acquire, based on the original sample image data and the reconstruction reference sample data corresponding to a second sample image among the at least one sample image, the predicted segmentation result corresponding to the second sample image.

The second sample image is a sample image, among the at least one sample image, used for one update of the segmentation model's parameters within one round of adversarial training. It may be the same as or different from the first sample image, which is not limited here, and there may be one or more second sample images. Step 2-2d is implemented in the same way as the process of invoking the target segmentation model to acquire the target segmentation result in step 202, and is not repeated here.

2-2e: Invoke the first classification model to acquire third reconstruction confidence information based on the original sample image data and the predicted segmentation result corresponding to the second sample image.

The third reconstruction confidence information is information predicted by the first classification model, based on the original sample image data and the predicted segmentation result of the second sample image, indicating the reliability of the reconstruction result obtained from that predicted segmentation result. For example, it includes a probability value that the predicted segmentation result of the second sample image is a correct segmentation result and a probability value that it is an incorrect one. Step 2-2e is implemented in the same way as the process of invoking the target classification model to acquire the target confidence information in step 202, and is not repeated here.

2-2f: Determine a second loss function based on the third reconstruction confidence information, the predicted segmentation result corresponding to the second sample image, and the standard segmentation result corresponding to the second sample image; update the parameters of the initial segmentation model based on the second loss function; and in response to the update process of the initial segmentation model's parameters satisfying a second termination condition, obtain a first segmentation model.

For example, the second loss function is determined from the third reconstruction confidence information, the predicted segmentation result, and the standard segmentation result based on Formula 3.

L3 = E_z[log(1 - D(G(z, I)))] + ||G(z, I) - y||^2  (Formula 3)

where L3 denotes the second loss function; z the reconstruction reference sample data corresponding to the second sample image; I the original sample image data corresponding to the second sample image; G(z, I) the predicted segmentation result corresponding to the second sample image; y the standard segmentation result corresponding to the second sample image; and D(G(z, I)) the probability value, included in the third reconstruction confidence information, that the predicted segmentation result of the second sample image is a correct segmentation result.

In updating the parameters of the initial segmentation model based on the second loss function, the update objective is to minimize the second loss function; that is, to make the classification model predict the probability that the predicted segmentation result of the second sample image is correct as close to 1 as possible, and to make the segmentation model's predicted segmentation result as close to the standard segmentation result as possible.

The second loss function determined by Formula 3 trains the segmentation model not only with the classification model's feedback but also with the constraint of the standard reconstruction result, which helps improve the training precision of the segmentation model.

Note that while the parameters of the initial segmentation model are updated based on the second loss function, the parameters of the first classification model remain unchanged.

After each update of the initial segmentation model's parameters based on the second loss function, it is judged whether the update process satisfies the second termination condition. If it does, the first segmentation model is obtained and subsequent step 2-2g is performed. If it does not, the parameters of the initial segmentation model are updated again based on steps 2-2d to 2-2f until the update process satisfies the second termination condition, after which step 2-2g is performed.

In an exemplary embodiment, the update process satisfying the second termination condition means that the number of parameter updates of the initial segmentation model reaches a second threshold, which is set empirically or adjusted according to the application scenario and is not limited here. The second threshold may be the same as or different from the first threshold.

When the update process of the initial segmentation model's parameters satisfies the second termination condition, the training of the segmentation model within one complete round of adversarial training is complete. After one complete round of adversarial training, the first classification model and the first segmentation model are obtained.
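Formula 3 adds the supervised term of Formula 1 to the adversarial term. A toy evaluation, with D(G(z, I)) treated as a single scalar probability and the function name an illustrative assumption:

```python
# Numeric sketch of Formula 3:
# L3 = E_z[log(1 - D(G(z, I)))] + ||G(z, I) - y||^2.
# Minimizing L3 pushes the classifier's verdict D(G(z, I)) toward 1 while
# keeping the prediction close to the standard segmentation y.
import math

def segmentation_training_loss(d_fake, pred, target):
    adversarial = math.log(1.0 - d_fake)                    # classifier feedback
    supervised = sum((p - t) ** 2 for p, t in zip(pred, target))  # Formula 1 term
    return adversarial + supervised

loss = segmentation_training_loss(d_fake=0.8, pred=[0.9, 0.1], target=[1.0, 0.0])
```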
2-2g: In response to the adversarial training process not satisfying a target termination condition, continue adversarial training of the first classification model and the first segmentation model until the adversarial training process satisfies the target termination condition, obtaining the target classification model and the target segmentation model.

After the first classification model and the first segmentation model are obtained, i.e., after one complete round of adversarial training, it is judged whether the adversarial training process satisfies the target termination condition. If it does, the first segmentation model is directly taken as the target segmentation model and the first classification model as the target classification model.

If it does not, adversarial training of the first classification model and the first segmentation model continues based on steps 2-2a to 2-2f until the target termination condition is satisfied; the segmentation model obtained when the condition is satisfied is taken as the target segmentation model, and the classification model obtained at that point is taken as the target classification model.

In an exemplary embodiment, the adversarial training process satisfying the target termination condition includes, but is not limited to, any of: the number of adversarial training rounds reaching a third threshold; a specified loss function converging; or the specified loss function being no greater than a specified loss threshold. The specified loss function is the second loss function at the completion of one complete round of adversarial training.

For example, in obtaining the target classification model and the target segmentation model through adversarial training, the segmentation model and the classification model form a GAN (Generative Adversarial Network) framework. The input of the segmentation model is the original sample image data and the reconstruction reference sample data, and its output is the predicted segmentation result of the sample image; the input of the classification model is the original sample image data, the standard segmentation result, and the predicted segmentation result output by the segmentation network, and its output is reconstruction confidence information. The closer to 0 the probability value, included in the confidence information determined from a predicted segmentation result, that the predicted segmentation result is correct, the less reliable the classification model considers the reconstruction result obtained from the segmentation model's prediction; the closer to 1 that probability, the more reliable the classification model considers it. Through this GAN framework, not only can the segmentation precision of the segmentation model be improved, but relatively accurate reconstruction confidence information can also be given, helping researchers quickly locate regions that may have been reconstructed incorrectly.
For example, letting D denote the classification model, G the segmentation model, I the original sample image data, y the standard segmentation result, and z the reconstruction reference sample data, the overall optimization objective for adversarially training the classification model and the segmentation model is expressed as Formula 4:

min_G max_D V(D, G), where
V(D, G) = E_y[log D(y, I)] + E_z[log(1 - D(G(z, I)))] + ||G(z, I) - y||^2  (Formula 4)
The meanings of the parameters in Formula 4 are given under Formulas 2 and 3. In adversarial training based on the overall optimization objective of Formula 4, the parameters of the segmentation model G are first fixed and the parameters of the classification model D are updated, so that D pushes the probability, included in the confidence information determined from the predicted segmentation result, that the predicted segmentation result is correct as close to 0 as possible, and the probability, included in the confidence information determined from the standard segmentation result, that the standard segmentation result is correct as close to 1 as possible. Then the parameters of D are fixed and the parameters of G are updated, so that G's predicted segmentation result approaches the standard segmentation result as closely as possible, which drives the probability that D judges the predicted segmentation result correct toward 1.

Note that the above is only an exemplary description of how the target classification model and the target segmentation model are trained, and the embodiments of this application are not limited to it. For example, the target segmentation model and the target classification model may also be obtained through separate training, as long as the target segmentation model can predict an accurate segmentation result from the original image data and the reconstruction reference data, and the target classification model can predict accurate reconstruction confidence information from the original image data and the segmentation result.

In an exemplary embodiment, during training, the model parameters are updated by gradient descent based on the Adam optimization algorithm (a stochastic optimization algorithm), with betas = (0.95, 0.9995), i.e., the exponential decay rate of the first-moment estimate is 0.95 and that of the second-moment estimate is 0.9995. No weight decay is used. During training, the initial learning rate is set to 0.0001 and reduced to one tenth every 10 epochs, for 50 epochs in total. To avoid overfitting, a dropout layer with a dropout rate of 0.5 is inserted between any two fully connected layers; that is, in each iteration only a randomly selected 50% of the features are used for training.
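The alternating schedule and the learning-rate decay described above can be sketched as follows. The step counts, the placeholder update markers, and the initial rate of 1e-4 are illustrative assumptions; only the structure (D updated with G frozen, then G updated with D frozen, and a tenfold rate drop every 10 epochs over 50 epochs) follows the text.

```python
# Sketch of the adversarial training schedule: per round, update the
# classification model D for d_steps with G frozen (maximize Formula 2),
# then the segmentation model G for g_steps with D frozen (minimize
# Formula 3). lr_schedule divides the rate by 10 every 10 epochs.

def lr_schedule(initial_lr, epoch):
    return initial_lr / (10 ** (epoch // 10))

def adversarial_rounds(rounds, d_steps, g_steps):
    updates = []
    for _ in range(rounds):
        updates += ["D"] * d_steps  # G frozen
        updates += ["G"] * g_steps  # D frozen
    return updates

order = adversarial_rounds(rounds=2, d_steps=1, g_steps=1)
rates = [lr_schedule(1e-4, e) for e in (0, 9, 10, 49)]
```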
在步骤203中,基于目标分割结果,在目标图像中对目标树状组织进行重建,得到目标树状组织在目标图像中的完整重建结果。
目标分割结果能够指示出目标图像中的各个像素点是否属于目标树状组织,在得到目标分割结果后,能够基于目标分割结果,在目标图像中对目标树状组织进行自动重建,将重建后得到的结果作为目标树状组织在目标图像中的完整重建结果。
在一种可能实现方式中,基于目标分割结果,在目标图像中对目标树状组织进行重建的方式包括但不限于以下两种。
方式一:直接基于目标分割结果,在目标图像中对目标树状组织进行重建。
在此种方式一下,直接基于目标分割结果指示的各个像素点的目标类别,确定全部属于目标树状组织的像素点,每个属于目标树状组织的像素点均对应目标树状组织的一个节点。直接根据全部属于目标树状组织的像素点,在目标图像中标记出目标树状组织的节点,以实现直接基于目标分割结果在目标图像中对目标树状组织进行重建的过程。在示例性实施例中,除了在目标图像中标记出目标树状组织的节点外,还标记出目标树状组织的节点之间的连接关系,以实现直接基于目标分割结果在目标图像中对目标树状组织进行重建的过程。
需要说明的是,当目标树状组织为目标神经元时,目标树状组织的节点是指目标神经元的神经元节点;当目标树状组织为目标血管时,目标树状组织的节点是指目标血管的血管节点。
方式二:基于目标分割结果和目标树状组织在目标图像中的局部重建结果,在目标图像中对目标树状组织进行重建。
在此种方式二下,基于目标分割结果,在目标树状组织在目标图像中的局部重建结果的基础上,实现在目标图像中对目标树状组织进行重建的过程。基于此,在目标图像中对目标树状组织进行重建的过程相当于对目标树状组织在目标图像中的局部重建结果指示的标记结果进行补充的过程。
在一种可能实现方式中,基于目标分割结果和目标树状组织在目标图像中的局部重建结果,在目标图像中对目标树状组织进行重建的过程为:基于目标分割结果,确定全部属于目标树状组织的像素点;基于目标树状组织在目标图像中的局部重建结果,确定已重建像素点;在目标树状组织在目标图像中的局部重建结果指示的标记结果的基础上,基于全部属于目标树状组织的像素点中除已重建像素点外的其他像素点,在目标图像中标记出目标树状组织的其他节点。在示例性实施例中,除在目标图像中标记出目标树状组织的其他节点外,还标记出其他节点之间的连接关系。
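上述"在已重建像素点基础上补充标记其他节点"的过程,可以用如下集合差运算的示意代码表示(假设性示例,像素点以坐标元组表示):

```python
def nodes_to_add(segmented_pixels, reconstructed_pixels):
    # segmented_pixels: 目标分割结果指示的全部属于目标树状组织的像素点
    # reconstructed_pixels: 局部重建结果指示的已重建像素点
    done = set(reconstructed_pixels)
    # 返回除已重建像素点外需要补充标记的其他像素点,保持原有顺序
    return [p for p in segmented_pixels if p not in done]

segmented = [(1, 1, 1), (1, 1, 2), (1, 2, 2)]
reconstructed = [(1, 1, 1)]
print(nodes_to_add(segmented, reconstructed))  # [(1, 1, 2), (1, 2, 2)]
```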
无论基于上述哪种方式进行重建,在完成在目标图像中对目标树状组织进行重建的过程之后,均能够得到目标标记结果,该目标标记结果为目标树状组织在目标图像中的完整标记结果。在得到目标标记结果后,基于目标标记结果,获取目标树状组织在目标图像中的完整重建结果。在示例性实施例中,基于目标标记结果,获取目标树状组织在目标图像中的完整重建结果的方式为:将包括目标标记结果的目标图像作为目标树状组织在目标图像中的完整重建结果。在另一种示例性实施例中,基于目标标记结果,得到目标树状组织在目标图像中的完整重建结果的方式为:基于目标标记结果,确定目标树状组织的各个节点的相关数据,将包括各个节点的相关数据的文件作为目标树状组织在目标图像中的完整重建结果。
在示例性实施例中,目标树状组织可能为目标神经元,也可能为目标血管,本申请实施例以目标树状组织为目标神经元为例进行说明。当目标树状组织为目标神经元时,任一像素点的目标类别用于指示该任一像素点属于目标神经元或者任一像素点不属于目标神经元。在此种情况下,基于目标分割结果,在目标图像中对目标树状组织进行重建,得到目标树状组织在目标图像中的完整重建结果的过程包括:基于目标分割结果,在目标图像中的各个像素点中确定属于目标神经元的目标像素点;基于目标像素点,在目标图像中标记出目标神经元的神经元节点以及目标神经元的神经元节点之间的连接关系,得到目标标记结果;基于目标标记结果,获取目标神经元在目标图像中的完整重建结果。
目标像素点是指目标图像的各个像素点中属于目标神经元的全部像素点。在一种可能实现方式中,基于目标像素点,在目标图像中标记出目标神经元的神经元节点以及目标神经元的神经元节点之间的连接关系,得到目标标记结果的实现方式为:直接根据目标像素点,在目标图像中标记出目标神经元的全部神经元节点以及这些神经元节点之间的连接关系,得到目标标记结果。
在另一种可能实现方式中,基于目标像素点,在目标图像中标记出目标神经元的神经元节点以及目标神经元的神经元节点之间的连接关系,得到目标标记结果的实现方式为:基于目标神经元在目标图像中的局部重建结果,确定已重建像素点;在目标神经元在目标图像中的局部重建结果指示的标记结果的基础上,基于目标像素点中除已重建像素点外的其他像素点,在目标图像中标记出目标神经元的其他神经元节点以及其他神经元节点之间的连接关系,得到目标标记结果。
在示例性实施例中,对图像中的树状组织进行重建的过程如图7所示。将目标图像对应的原始图像数据和重建参考数据输入目标分割模型进行处理,得到目标分割模型输出的目标图像对应的目标分割结果;将目标图像对应的目标分割结果和目标图像对应的原始图像数据输入目标分类模型进行处理,得到目标分类模型输出的目标重建置信度信息;基于目标图像对应的目标分割结果,在目标图像中对目标树状组织进行重建,得到目标树状组织在目标图像中的完整重建结果。根据目标重建置信度信息评判目标树状组织在目标图像中的完整重建结果的可靠性高低,进而由研究人员对可靠性较低的完整重建结果进行修正。
在示例性实施例中,目标树状组织在目标图像中的完整重建结果指示的标记结果相比于目标树状组织在目标图像中的局部重建结果指示的标记结果增加了新的节点。示例性地,不包括任何标记结果的目标图像、包括目标树状组织在目标图像中的局部重建结果指示的标记结果的目标图像以及包括目标树状组织在目标图像中的完整重建结果指示的标记结果的目标图像的示意图分别如图8中的(1)、图8中的(2)和图8中的(3)所示。图8中的(3)所示的目标图像中标记出的节点的数量比图8中的(2)所示的目标图像中标记出的节点的数量多。
在一种可能实现方式中,目标图像为包括完整目标树状组织的初始图像中与目标树状组织对应的起始局部图像,也就是说,目标图像中包括完整目标树状组织中的起始部分。此种情况下,在得到目标树状组织在目标图像中的完整重建结果之后,还包括以下步骤204至步骤205。
步骤204:响应于目标树状组织在目标图像中的完整重建结果未满足重建终止条件,基于目标树状组织在目标图像中的完整重建结果,在初始图像中获取与目标树状组织对应的下一个局部图像;获取目标树状组织在下一个局部图像中的完整重建结果。
在得到目标树状组织在目标图像中的完整重建结果之后,判断目标树状组织在目标图像中的完整重建结果是否满足重建终止条件。判断目标树状组织在目标图像中的完整重建结果是否满足重建终止条件的方式根据经验设置,或者根据应用场景灵活调整,本申请实施例对此不加以限定。
在示例性实施例中,判断目标树状组织在目标图像中的完整重建结果是否满足重建终止条件的方式为:响应于目标树状组织在目标图像中的完整重建结果中不存在除目标树状组织在目标图像中的局部重建结果外的补充重建结果,确定目标树状组织在目标图像中的完整重建结果满足重建终止条件;响应于目标树状组织在目标图像中的完整重建结果中存在除目标树状组织在目标图像中的局部重建结果外的补充重建结果,确定目标树状组织在目标图像中的完整重建结果未满足重建终止条件。
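该终止条件的判断逻辑可以概括为:完整重建结果相对局部重建结果没有补充的新节点时终止。一个假设性的最小示意如下(以节点坐标集合表示重建结果):

```python
def meets_termination(complete_nodes, local_nodes):
    # 完整重建结果中不存在局部重建结果之外的补充重建结果时,满足重建终止条件
    return set(complete_nodes) <= set(local_nodes)
```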
若目标树状组织在目标图像中的完整重建结果未满足重建终止条件,说明需要继续对目标树状组织进行重建,才能得到目标树状组织在初始图像中的完整重建结果,对目标树状组织继续进行重建的方式为:基于目标树状组织在目标图像中的完整重建结果,在初始图像中获取与目标树状组织对应的下一个局部图像;获取目标树状组织在下一个局部图像中的完整重建结果。
在示例性实施例中,基于目标树状组织在目标图像中的完整重建结果,在初始图像中获取与目标树状组织对应的下一个局部图像的方式为:确定目标树状组织在目标图像中的完整重建结果指示出的各个属于目标树状组织的像素点中距离初始图像的起始像素点最远的像素点作为指定像素点,以该指定像素点为中心点,截取目标尺寸的图像作为下一个局部图像。
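以最远像素点为中心截取下一个局部图像的过程,可以用如下示意代码表示(假设性示例:目标尺寸取32,距离以欧氏距离的平方比较,且未做图像边界裁剪处理):

```python
def next_crop(tree_pixels, start_pixel, size=32):
    # 选出距初始图像起始像素点最远的属于目标树状组织的像素点作为中心点
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, start_pixel))
    center = max(tree_pixels, key=dist2)
    half = size // 2
    # 以中心点为中心,得到各维度上的截取区间 [low, high)
    bounds = [(c - half, c - half + size) for c in center]
    return center, bounds

center, bounds = next_crop([(0, 0, 0), (6, 0, 0), (3, 4, 0)], (0, 0, 0))
print(center)  # (6, 0, 0)
```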
获取目标树状组织在下一个局部图像中的完整重建结果的过程参见步骤201至步骤203中所述的获取目标树状组织在目标图像中的完整重建结果的过程,此处不再赘述。
在示例性实施例中,在确定目标树状组织在目标图像中的完整重建结果满足重建终止条件时,直接基于目标树状组织在目标图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果。由于目标图像为目标树状组织在初始图像中对应的起始图像,所以此时直接将目标树状组织在目标图像中的完整重建结果作为目标树状组织在初始图像中的完整重建结果。
步骤205:响应于目标树状组织在下一个局部图像中的完整重建结果满足重建终止条件,基于已获取的目标树状组织在各个局部图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果。
在获取目标树状组织在下一个局部图像中的完整重建结果之后,判断目标树状组织在下一个局部图像中的完整重建结果是否满足重建终止条件,当目标树状组织在下一个局部图像中的完整重建结果满足重建终止条件时,基于已获取的目标树状组织在各个局部图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果。
目标树状组织在初始图像中的完整重建结果是指在初始图像中将目标树状组织完整标记出来后得到的结果。已获取的目标树状组织在各个局部图像中的完整重建结果均能够指示出在初始图像中标记出的局部的目标树状组织,需要说明的是,目标树状组织在相邻的两个局部图像中的完整重建结果可能存在重叠部分。
在一种可能实现方式中,基于已获取的目标树状组织在各个局部图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果的方式为:将已获取的目标树状组织在各个局部图像中的完整重建结果按照各个局部图像之间的关联关系进行合并处理,得到目标树状组织在初始图像中的完整重建结果。
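按局部图像之间的关联关系合并各完整重建结果时,重叠部分需要去重。一个假设性的最小示意如下(以统一坐标系下的节点集合表示各局部重建结果):

```python
def merge_patch_results(patch_results):
    # 相邻局部图像的重建结果可能存在重叠部分,取并集即可去重合并
    merged = set()
    for nodes in patch_results:
        merged.update(nodes)
    return sorted(merged)

patch_a = [(0, 0, 0), (0, 0, 1)]
patch_b = [(0, 0, 1), (0, 0, 2)]  # 与 patch_a 在 (0, 0, 1) 处重叠
print(merge_patch_results([patch_a, patch_b]))  # [(0, 0, 0), (0, 0, 1), (0, 0, 2)]
```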
当目标树状组织在下一个局部图像中的完整重建结果未满足重建终止条件时,继续基于目标树状组织在下一个局部图像中的完整重建结果,在初始图像中获取与目标树状组织对应的再下一个局部图像,以及获取目标树状组织在再下一个局部图像中的完整重建结果,以此类推,直至目标树状组织在某个局部图像中的完整重建结果满足重建终止条件,基于已获取的目标树状组织在各个局部图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果。
在一种可能实现方式中,在获取目标树状组织在初始图像中的完整重建结果后,将目标树状组织在初始图像中的完整重建结果进行存储,以便于研究人员直接提取并进行进一步的研究。本申请实施例对存储目标树状组织在初始图像中的完整重建结果的方式不加以限定。
在示例性实施例中,以树状组织为神经元为例,神经元的重建是基于显微镜下高分辨率的大脑三维图像进行的,重建后的神经元的某一维长度可达数千像素,即使只截取神经元所在区域的图像也会占用上T(容量单位)的空间,而单个神经元只占了图像中极小的位置,基于此,使用SWC(文件类型的名称)文件存储神经元的完整重建结果。每个SWC文件代表一个神经元,SWC文件中的每行代表神经元中的一个神经元节点。
例如,用于存储神经元的完整重建结果的SWC文件如图9所示,在图9所示的SWC文件中,包括神经元节点在图像中的坐标(x,y,z),神经元节点的类型,神经元节点的半径(r)、神经元节点的编号(id)及神经元节点的父节点的编号(pid)。神经元节点的编号(id)及神经元节点的父节点的编号(pid)能够体现神经元节点之间的连接关系。需要说明的是,对于神经元而言,神经元节点的类型为轴突或树突,在存储神经元节点的类型时可以利用数字标识代替具体的类型。在示例性实施例中,基于存储神经元的完整重建结果的SWC文件能够生成如图10所示的三维点云数据。
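SWC文件的每行按"编号 类型 x y z 半径 父节点编号"的顺序存储一个神经元节点。下面给出解析单行的示意代码(假设性示例:字段以空白分隔,父节点编号为-1表示根节点):

```python
def parse_swc_line(line):
    # SWC 行格式: id type x y z r pid
    i, t, x, y, z, r, pid = line.split()
    return {
        "id": int(i),      # 神经元节点的编号
        "type": int(t),    # 节点类型(如轴突、树突的数字标识)
        "x": float(x), "y": float(y), "z": float(z),  # 节点在图像中的坐标
        "r": float(r),     # 节点半径
        "pid": int(pid),   # 父节点的编号,-1 表示根节点
    }

node = parse_swc_line("2 3 10.0 20.0 30.0 1.5 1")
print(node["pid"])  # 1
```

id 与 pid 两个字段共同体现了神经元节点之间的连接关系。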
在本申请实施例中,以树状组织的局部重建结果指示出的已有重建节点中的最后一个节点为中心点,截取一张较小的图像(例如,32×32×32),根据截取的图像对应的原始图像数据以及重建参考数据,预测树状组织后续的节点,进而根据后续的节点来截取下一张图像,继续补全,直至树状组织完整重建出来。预测后的分割结果能够输入到分类模型中获取重建置信度,以判断基于分割结果得到的重建结果是否可靠。基于本申请实施例提供的方法,能够实现树状组织的自动重建,能够提升重建的速度和精度,此外,还能给出分割结果的重建置信度信息,以便帮助研究人员判定重建是否可靠,并迅速定位可能出错、需要人工修正的图像区域。
在本申请实施例中,先基于目标图像对应的原始图像数据和重建参考数据自动获取目标图像对应的目标分割结果,然后基于目标分割结果自动得到目标树状组织在目标图像中的完整重建结果。基于此种过程,能够实现对树状组织的自动重建,树状组织的重建过程无需依赖人工,有利于提高对图像中的树状组织进行重建的效率,得到的树状组织的重建结果的可靠性较高。
参见图11,本申请实施例提供了一种对图像中的树状组织进行重建的装置,该装置包括:
第一获取单元1101,用于获取目标树状组织对应的目标图像、目标图像对应的原始图像数据以及目标图像对应的重建参考数据,重建参考数据基于目标树状组织在目标图像中的局部重建结果确定;
第二获取单元1102,用于调用目标分割模型,基于原始图像数据和重建参考数据,获取目标图像对应的目标分割结果,目标分割结果用于指示目标图像中的各个像素点的目标类别,任一像素点的目标类别用于指示任一像素点属于目标树状组织或者任一像素点不属于目标树状组织;
重建单元1103,用于基于目标分割结果,在目标图像中对目标树状组织进行重建,得到目标树状组织在目标图像中的完整重建结果。
在一种可能实现方式中,第二获取单元1102,用于调用目标分割模型,基于原始图像数据和重建参考数据的融合数据,依次执行第一参考数量次下采样处理,得到目标图像对应的第一目标特征;基于第一目标特征对应的目标卷积特征,依次执行第一参考数量次上采样处 理,得到目标图像对应的第二目标特征;对第二目标特征进行目标卷积处理,得到目标图像对应的目标分割结果。
在一种可能实现方式中,第一参考数量次为三次,任一次下采样处理包括一次卷积处理和一次池化处理;第二获取单元1102,还用于对原始图像数据和重建参考数据的融合数据进行第一卷积处理,得到目标图像对应的第一卷积特征;对第一卷积特征进行第一池化处理,得到目标图像对应的第一池化特征;对第一池化特征进行第二卷积处理,得到目标图像对应的第二卷积特征;对第二卷积特征进行第二池化处理,得到目标图像对应的第二池化特征;对第二池化特征进行第三卷积处理,得到目标图像对应的第三卷积特征;对第三卷积特征进行第三池化处理,得到目标图像对应的第一目标特征。
在一种可能实现方式中,任一次上采样处理包括一次反卷积处理和一次卷积处理;第二获取单元1102,还用于对第一目标特征对应的目标卷积特征进行第一反卷积处理,得到目标图像对应的第一上采样特征;对第一上采样特征和第三卷积特征的拼接特征进行第四卷积处理,得到目标图像对应的第四卷积特征;对第四卷积特征进行第二反卷积处理,得到目标图像对应的第二上采样特征;对第二上采样特征和第二卷积特征的拼接特征进行第五卷积处理,得到目标图像对应的第五卷积特征;对第五卷积特征进行第三反卷积处理,得到目标图像对应的第三上采样特征;对第三上采样特征和第一卷积特征的拼接特征进行第六卷积处理,得到目标图像对应的第二目标特征。
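上述三次下采样与三次上采样的结构中,特征图尺寸先逐次减半、再逐次加倍。一个假设性的尺寸推算示意如下(假设输入尺寸为32,池化与反卷积的步长均为2):

```python
def feature_sizes(n, steps=3):
    # 每次下采样(池化)后尺寸减半,每次上采样(反卷积)后尺寸加倍
    down = [n]
    for _ in range(steps):
        n //= 2
        down.append(n)
    up = []
    for _ in range(steps):
        n *= 2
        up.append(n)
    return down, up

down, up = feature_sizes(32)
print(down, up)  # [32, 16, 8, 4] [8, 16, 32]
```

尺寸对称恢复使得各上采样特征能够与对应下采样阶段的卷积特征进行拼接。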
在一种可能实现方式中,参见图12,该装置还包括:
第三获取单元1104,用于调用目标分类模型,基于原始图像数据和目标分割结果,获取目标重建置信度信息。
在一种可能实现方式中,目标分类模型包括依次连接的至少一个卷积子模型、至少一个全连接子模型和一个置信度预测子模型;第三获取单元1104,用于将原始图像数据和目标分割结果输入目标分类模型中的第一个卷积子模型进行处理,得到第一个卷积子模型输出的分类特征;从第二个卷积子模型起,将上一个卷积子模型输出的分类特征输入下一个卷积子模型进行处理,得到下一个卷积子模型输出的分类特征;将最后一个卷积子模型输出的分类特征输入第一个全连接子模型进行处理,得到第一个全连接子模型输出的全连接特征;从第二个全连接子模型起,将上一个全连接子模型输出的全连接特征输入下一个全连接子模型进行处理,得到下一个全连接子模型输出的全连接特征;将最后一个全连接子模型输出的全连接特征输入置信度预测子模型进行处理,得到置信度预测子模型输出的目标重建置信度信息。
在一种可能实现方式中,第一获取单元1101,还用于获取至少一个样本图像、至少一个样本图像分别对应的原始样本图像数据、至少一个样本图像分别对应的重建参考样本数据以及至少一个样本图像分别对应的标准分割结果;
参见图12,该装置还包括:
训练单元1105,用于基于至少一个样本图像分别对应的原始样本图像数据、至少一个样本图像分别对应的重建参考样本数据以及至少一个样本图像分别对应的标准分割结果对初始分割模型进行监督训练,得到目标分割模型。
在一种可能实现方式中,第一获取单元1101,还用于获取至少一个样本图像、至少一个样本图像分别对应的原始样本图像数据、至少一个样本图像分别对应的重建参考样本数据以及至少一个样本图像分别对应的标准分割结果;
训练单元1105,还用于基于至少一个样本图像分别对应的原始样本图像数据、至少一个 样本图像分别对应的重建参考样本数据以及至少一个样本图像分别对应的标准分割结果对初始分割模型和初始分类模型进行对抗训练,得到目标分割模型和目标分类模型。
在一种可能实现方式中,训练单元1105,还用于调用初始分割模型,基于至少一个样本图像中的第一样本图像对应的原始样本图像数据和第一样本图像对应的重建参考样本数据,获取第一样本图像对应的预测分割结果;调用初始分类模型,基于第一样本图像对应的原始样本图像数据和第一样本图像对应的预测分割结果,获取第一重建置信度信息;基于第一样本图像对应的原始样本图像数据和第一样本图像对应的标准分割结果,获取第二重建置信度信息;基于第一重建置信度信息和第二重建置信度信息,确定第一损失函数;基于第一损失函数更新初始分类模型的参数;响应于初始分类模型的参数的更新过程满足第一终止条件,得到第一分类模型;调用初始分割模型,基于至少一个第一样本图像中的第二样本图像对应的原始样本图像数据和第二样本图像对应的重建参考样本数据,获取第二样本图像对应的预测分割结果;调用第一分类模型,基于第二样本图像对应的原始样本图像数据和第二样本图像对应的预测分割结果,获取第三重建置信度信息;基于第三重建置信度信息、第二样本图像对应的预测分割结果和第二样本图像对应的标准分割结果,确定第二损失函数;基于第二损失函数更新初始分割模型的参数;响应于初始分割模型的参数的更新过程满足第二终止条件,得到第一分割模型;响应于对抗训练过程未满足目标终止条件,继续对第一分类模型和第一分割模型进行对抗训练,直至对抗训练过程满足目标终止条件,得到目标分类模型和目标分割模型。
在一种可能实现方式中,第一获取单元1101,还用于响应于目标树状组织在目标图像中的完整重建结果未满足重建终止条件,基于目标树状组织在目标图像中的完整重建结果,在初始图像中获取与目标树状组织对应的下一个局部图像;获取目标树状组织在下一个局部图像中的完整重建结果;响应于目标树状组织在下一个局部图像中的完整重建结果满足重建终止条件,基于已获取的目标树状组织在各个局部图像中的完整重建结果,获取目标树状组织在初始图像中的完整重建结果。
在一种可能实现方式中,目标树状组织为目标神经元,目标图像从包含目标神经元的大脑三维图像中获取得到。
在一种可能实现方式中,任一像素点的目标类别用于指示任一像素点属于目标神经元或者任一像素点不属于目标神经元;
重建单元1103,用于基于目标分割结果,在目标图像中的各个像素点中确定属于目标神经元的目标像素点;基于目标像素点,在目标图像中标记出目标神经元的神经元节点以及目标神经元的神经元节点之间的连接关系,得到目标标记结果;基于目标标记结果,确定目标神经元在目标图像中的完整重建结果。
在本申请实施例中,先基于目标图像对应的原始图像数据和重建参考数据自动获取目标图像对应的目标分割结果,然后基于目标分割结果自动得到目标树状组织在目标图像中的完整重建结果。基于此种过程,能够实现对树状组织的自动重建,树状组织的重建过程无需依赖人工,有利于提高对图像中的树状组织进行重建的效率,得到的树状组织的重建结果的可靠性较高。
需要说明的是,上述实施例提供的装置在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图13是本申请实施例提供的一种服务器的结构示意图,该服务器可因配置或性能不同而产生比较大的差异,可以包括一个或多个处理器(Central Processing Units,CPU)1301和一个或多个存储器1302,其中,该一个或多个存储器1302中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器1301加载并执行,以实现上述各个方法实施例提供的对图像中的树状组织进行重建的方法。当然,该服务器还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器还可以包括其他用于实现设备功能的部件,在此不做赘述。
图14是本申请实施例提供的一种终端的结构示意图。示例性地,该终端是:智能手机、平板电脑、笔记本电脑或台式电脑。终端还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端包括有:处理器1401和存储器1402。
处理器1401可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1401可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1401也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1401可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1401还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1402可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1402还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1402中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1401所执行以实现本申请中方法实施例提供的对图像中的树状组织进行重建的方法。
在一些实施例中,终端还可选包括有:外围设备接口1403和至少一个外围设备。处理器1401、存储器1402和外围设备接口1403之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1403相连。具体地,外围设备包括:射频电路1404、显示屏1405、摄像头组件1406、音频电路1407、定位组件1408和电源1409中的至少一种。
外围设备接口1403可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1401和存储器1402。在一些实施例中,处理器1401、存储器1402和外围设备接口1403被集成在同一芯片或电路板上;在一些其他实施例中,处理器1401、存储器1402和外围设备接口1403中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路1404用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1404通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1404将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路1404包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1404可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:城域网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1404还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
显示屏1405用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1405是触摸显示屏时,显示屏1405还具有采集在显示屏1405的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1401进行处理。此时,显示屏1405还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1405可以为一个,设置在终端的前面板;在另一些实施例中,显示屏1405可以为至少两个,分别设置在终端的不同表面或呈折叠设计;在另一些实施例中,显示屏1405可以是柔性显示屏,设置在终端的弯曲表面上或折叠面上。甚至,显示屏1405还可以设置成非矩形的不规则图形,也即异形屏。显示屏1405可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1406用于采集图像或视频。可选地,摄像头组件1406包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1406还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路1407可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1401进行处理,或者输入至射频电路1404以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1401或射频电路1404的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1407还可以包括耳机插孔。
定位组件1408用于定位终端的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1408可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统、俄罗斯的格雷纳斯系统或欧盟的伽利略系统的定位组件。
电源1409用于为终端中的各个组件进行供电。电源1409可以是交流电、直流电、一次性电池或可充电电池。当电源1409包括可充电电池时,该可充电电池可以支持有线充电或无 线充电。该可充电电池还可以用于支持快充技术。
在一些实施例中,终端还包括有一个或多个传感器1410。该一个或多个传感器1410包括但不限于:加速度传感器1411、陀螺仪传感器1412、压力传感器1413、指纹传感器1414、光学传感器1415以及接近传感器1416。
加速度传感器1411可以检测以终端建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器1411可以用于检测重力加速度在三个坐标轴上的分量。处理器1401可以根据加速度传感器1411采集的重力加速度信号,控制显示屏1405以横向视图或纵向视图进行用户界面的显示。加速度传感器1411还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器1412可以检测终端的机体方向及转动角度,陀螺仪传感器1412可以与加速度传感器1411协同采集用户对终端的3D动作。处理器1401根据陀螺仪传感器1412采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器1413可以设置在终端的侧边框和/或显示屏1405的下层。当压力传感器1413设置在终端的侧边框时,可以检测用户对终端的握持信号,由处理器1401根据压力传感器1413采集的握持信号进行左右手识别或快捷操作。当压力传感器1413设置在显示屏1405的下层时,由处理器1401根据用户对显示屏1405的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器1414用于采集用户的指纹,由处理器1401根据指纹传感器1414采集到的指纹识别用户的身份,或者,由指纹传感器1414根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1401授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1414可以被设置在终端的正面、背面或侧面。当终端上设置有物理按键或厂商Logo时,指纹传感器1414可以与物理按键或厂商Logo集成在一起。
光学传感器1415用于采集环境光强度。在一个实施例中,处理器1401可以根据光学传感器1415采集的环境光强度,控制显示屏1405的显示亮度。具体地,当环境光强度较高时,调高显示屏1405的显示亮度;当环境光强度较低时,调低显示屏1405的显示亮度。在另一个实施例中,处理器1401还可以根据光学传感器1415采集的环境光强度,动态调整摄像头组件1406的拍摄参数。
接近传感器1416,也称距离传感器,通常设置在终端的前面板。接近传感器1416用于采集用户与终端的正面之间的距离。在一个实施例中,当接近传感器1416检测到用户与终端的正面之间的距离逐渐变小时,由处理器1401控制显示屏1405从亮屏状态切换为息屏状态;当接近传感器1416检测到用户与终端的正面之间的距离逐渐变大时,由处理器1401控制显示屏1405从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图14中示出的结构并不构成对终端的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在示例性实施例中,还提供了一种计算机设备,该计算机设备包括处理器和存储器,该存储器中存储有至少一条程序代码。该至少一条程序代码由一个或者一个以上处理器加载并执行,以实现上述任一种对图像中的树状组织进行重建的方法。
在示例性实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有至少一条程序代码,该至少一条程序代码由计算机设备的处理器加载并执行,以实现上述任一种对图像中的树状组织进行重建的方法。
在一种可能实现方式中,上述计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。
在示例性实施例中,还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述任一种对图像中的树状组织进行重建的方法。
需要说明的是,本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以上示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
以上所述仅为本申请的示例性实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (26)

  1. 一种对图像中的树状组织进行重建的方法,其特征在于,所述方法包括:
    获取目标树状组织对应的目标图像、所述目标图像对应的原始图像数据以及所述目标图像对应的重建参考数据,所述重建参考数据基于所述目标树状组织在所述目标图像中的局部重建结果确定;
    调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果,所述目标分割结果用于指示所述目标图像中的各个像素点的目标类别,任一像素点的目标类别用于指示所述任一像素点属于所述目标树状组织或者所述任一像素点不属于所述目标树状组织;
    基于所述目标分割结果,在所述目标图像中对所述目标树状组织进行重建,得到所述目标树状组织在所述目标图像中的完整重建结果。
  2. 根据权利要求1所述的方法,其特征在于,所述调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果,包括:
    调用目标分割模型,基于所述原始图像数据和所述重建参考数据的融合数据,依次执行第一参考数量次下采样处理,得到所述目标图像对应的第一目标特征;
    基于所述第一目标特征对应的目标卷积特征,依次执行所述第一参考数量次上采样处理,得到所述目标图像对应的第二目标特征;
    对所述第二目标特征进行目标卷积处理,得到所述目标图像对应的目标分割结果。
  3. 根据权利要求2所述的方法,其特征在于,所述第一参考数量次为三次,任一次下采样处理包括一次卷积处理和一次池化处理;所述基于所述原始图像数据和所述重建参考数据的融合数据,依次执行第一参考数量次下采样处理,得到所述目标图像对应的第一目标特征,包括:
    对所述原始图像数据和所述重建参考数据的融合数据进行第一卷积处理,得到所述目标图像对应的第一卷积特征;对所述第一卷积特征进行第一池化处理,得到所述目标图像对应的第一池化特征;
    对所述第一池化特征进行第二卷积处理,得到所述目标图像对应的第二卷积特征;对所述第二卷积特征进行第二池化处理,得到所述目标图像对应的第二池化特征;
    对所述第二池化特征进行第三卷积处理,得到所述目标图像对应的第三卷积特征;对所述第三卷积特征进行第三池化处理,得到所述目标图像对应的第一目标特征。
  4. 根据权利要求3所述的方法,其特征在于,任一次上采样处理包括一次反卷积处理和一次卷积处理;所述基于所述第一目标特征对应的目标卷积特征,依次执行所述第一参考数量次上采样处理,得到所述目标图像对应的第二目标特征,包括:
    对所述第一目标特征对应的目标卷积特征进行第一反卷积处理,得到所述目标图像对应的第一上采样特征;对所述第一上采样特征和所述第三卷积特征的拼接特征进行第四卷积处理,得到所述目标图像对应的第四卷积特征;
    对所述第四卷积特征进行第二反卷积处理,得到所述目标图像对应的第二上采样特征;对所述第二上采样特征和所述第二卷积特征的拼接特征进行第五卷积处理,得到所述目标图像对应的第五卷积特征;
    对所述第五卷积特征进行第三反卷积处理,得到所述目标图像对应的第三上采样特征;对所述第三上采样特征和所述第一卷积特征的拼接特征进行第六卷积处理,得到所述目标图 像对应的第二目标特征。
  5. 根据权利要求1-4任一所述的方法,其特征在于,所述调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果之后,所述方法还包括:
    调用目标分类模型,基于所述原始图像数据和所述目标分割结果,获取目标重建置信度信息。
  6. 根据权利要求5所述的方法,其特征在于,所述目标分类模型包括依次连接的至少一个卷积子模型、至少一个全连接子模型和一个置信度预测子模型;所述调用目标分类模型,基于所述原始图像数据和所述目标分割结果,获取目标重建置信度信息,包括:
    将所述原始图像数据和所述目标分割结果输入所述目标分类模型中的第一个卷积子模型进行处理,得到所述第一个卷积子模型输出的分类特征;
    从第二个卷积子模型起,将上一个卷积子模型输出的分类特征输入下一个卷积子模型进行处理,得到下一个卷积子模型输出的分类特征;
    将最后一个卷积子模型输出的分类特征输入第一个全连接子模型进行处理,得到所述第一个全连接子模型输出的全连接特征;
    从第二个全连接子模型起,将上一个全连接子模型输出的全连接特征输入下一个全连接子模型进行处理,得到下一个全连接子模型输出的全连接特征;
    将最后一个全连接子模型输出的全连接特征输入所述置信度预测子模型进行处理,得到所述置信度预测子模型输出的所述目标重建置信度信息。
  7. 根据权利要求1-4任一所述的方法,其特征在于,所述调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果之前,所述方法还包括:
    获取至少一个样本图像、所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果;
    基于所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果对初始分割模型进行监督训练,得到所述目标分割模型。
  8. 根据权利要求5所述的方法,其特征在于,所述调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果之前,所述方法还包括:
    获取至少一个样本图像、所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果;
    基于所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果对初始分割模型和初始分类模型进行对抗训练,得到所述目标分割模型和所述目标分类模型。
  9. 根据权利要求8所述的方法,其特征在于,所述基于所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果对初始分割模型和初始分类模型进行对抗训练,得到所述目标分割模型和所述目标分类模型,包括:
    调用所述初始分割模型,基于所述至少一个样本图像中的第一样本图像对应的原始样本图像数据和所述第一样本图像对应的重建参考样本数据,获取所述第一样本图像对应的预测分割结果;
    调用所述初始分类模型,基于所述第一样本图像对应的原始样本图像数据和所述第一样本图像对应的预测分割结果,获取第一重建置信度信息;基于所述第一样本图像对应的原始样本图像数据和所述第一样本图像对应的标准分割结果,获取第二重建置信度信息;
    基于所述第一重建置信度信息和所述第二重建置信度信息,确定第一损失函数;基于所述第一损失函数更新所述初始分类模型的参数;响应于所述初始分类模型的参数的更新过程满足第一终止条件,得到第一分类模型;
    调用所述初始分割模型,基于所述至少一个第一样本图像中的第二样本图像对应的原始样本图像数据和所述第二样本图像对应的重建参考样本数据,获取所述第二样本图像对应的预测分割结果;
    调用所述第一分类模型,基于所述第二样本图像对应的原始样本图像数据和所述第二样本图像对应的预测分割结果,获取第三重建置信度信息;
    基于所述第三重建置信度信息、所述第二样本图像对应的预测分割结果和所述第二样本图像对应的标准分割结果,确定第二损失函数;基于所述第二损失函数更新所述初始分割模型的参数;响应于所述初始分割模型的参数的更新过程满足第二终止条件,得到第一分割模型;
    响应于对抗训练过程未满足目标终止条件,继续对所述第一分类模型和所述第一分割模型进行对抗训练,直至对抗训练过程满足所述目标终止条件,得到所述目标分类模型和所述目标分割模型。
  10. 根据权利要求1-4任一所述的方法,其特征在于,所述目标图像为初始图像中与所述目标树状组织对应的起始局部图像;所述基于所述目标分割结果,在所述目标图像中对所述目标树状组织进行重建,得到所述目标树状组织在所述目标图像中的完整重建结果之后,所述方法还包括:
    响应于所述目标树状组织在所述目标图像中的完整重建结果未满足重建终止条件,基于所述目标树状组织在所述目标图像中的完整重建结果,在所述初始图像中获取与所述目标树状组织对应的下一个局部图像;获取所述目标树状组织在所述下一个局部图像中的完整重建结果;
    响应于所述目标树状组织在所述下一个局部图像中的完整重建结果满足所述重建终止条件,基于已获取的所述目标树状组织在各个局部图像中的完整重建结果,获取所述目标树状组织在所述初始图像中的完整重建结果。
  11. 根据权利要求1-4任一所述的方法,其特征在于,所述目标树状组织为目标神经元,所述目标图像从包含所述目标神经元的大脑三维图像中获取得到。
  12. 根据权利要求11所述的方法,其特征在于,所述任一像素点的目标类别用于指示所述任一像素点属于所述目标神经元或者所述任一像素点不属于所述目标神经元;
    所述基于所述目标分割结果,在所述目标图像中对所述目标树状组织进行重建,得到所述目标树状组织在所述目标图像中的完整重建结果,包括:
    基于所述目标分割结果,在所述目标图像中的各个像素点中确定属于所述目标神经元的目标像素点;
    基于所述目标像素点,在所述目标图像中标记出所述目标神经元的神经元节点以及所述目标神经元的神经元节点之间的连接关系,得到目标标记结果;
    基于所述目标标记结果,获取所述目标神经元在所述目标图像中的完整重建结果。
  13. 一种对图像中的树状组织进行重建的装置,其特征在于,所述装置包括:
    第一获取单元,用于获取目标树状组织对应的目标图像、所述目标图像对应的原始图像数据以及所述目标图像对应的重建参考数据,所述重建参考数据基于所述目标树状组织在所述目标图像中的局部重建结果确定;
    第二获取单元,用于调用目标分割模型,基于所述原始图像数据和所述重建参考数据,获取所述目标图像对应的目标分割结果,所述目标分割结果用于指示所述目标图像中的各个像素点的目标类别,任一像素点的目标类别用于指示所述任一像素点属于所述目标树状组织或者所述任一像素点不属于所述目标树状组织;
    重建单元,用于基于所述目标分割结果,在所述目标图像中对所述目标树状组织进行重建,得到所述目标树状组织在所述目标图像中的完整重建结果。
  14. 根据权利要求13所述的装置,其特征在于,所述第二获取单元,用于调用目标分割模型,基于所述原始图像数据和所述重建参考数据的融合数据,依次执行第一参考数量次下采样处理,得到所述目标图像对应的第一目标特征;基于所述第一目标特征对应的目标卷积特征,依次执行所述第一参考数量次上采样处理,得到所述目标图像对应的第二目标特征;对所述第二目标特征进行目标卷积处理,得到所述目标图像对应的目标分割结果。
  15. 根据权利要求14所述的装置,其特征在于,所述第一参考数量次为三次,任一次下采样处理包括一次卷积处理和一次池化处理;所述第二获取单元,还用于对所述原始图像数据和所述重建参考数据的融合数据进行第一卷积处理,得到所述目标图像对应的第一卷积特征;对所述第一卷积特征进行第一池化处理,得到所述目标图像对应的第一池化特征;对所述第一池化特征进行第二卷积处理,得到所述目标图像对应的第二卷积特征;对所述第二卷积特征进行第二池化处理,得到所述目标图像对应的第二池化特征;对所述第二池化特征进行第三卷积处理,得到所述目标图像对应的第三卷积特征;对所述第三卷积特征进行第三池化处理,得到所述目标图像对应的第一目标特征。
  16. 根据权利要求15所述的装置,其特征在于,任一次上采样处理包括一次反卷积处理和一次卷积处理;所述第二获取单元,还用于对所述第一目标特征对应的目标卷积特征进行第一反卷积处理,得到所述目标图像对应的第一上采样特征;对所述第一上采样特征和所述第三卷积特征的拼接特征进行第四卷积处理,得到所述目标图像对应的第四卷积特征;对所述第四卷积特征进行第二反卷积处理,得到所述目标图像对应的第二上采样特征;对所述第二上采样特征和所述第二卷积特征的拼接特征进行第五卷积处理,得到所述目标图像对应的第五卷积特征;对所述第五卷积特征进行第三反卷积处理,得到所述目标图像对应的第三上采样特征;对所述第三上采样特征和所述第一卷积特征的拼接特征进行第六卷积处理,得到所述目标图像对应的第二目标特征。
  17. 根据权利要求13-16任一所述的装置,其特征在于,所述装置还包括:
    第三获取单元,用于调用目标分类模型,基于所述原始图像数据和所述目标分割结果,获取目标重建置信度信息。
  18. 根据权利要求17所述的装置,其特征在于,所述目标分类模型包括依次连接的至少一个卷积子模型、至少一个全连接子模型和一个置信度预测子模型;所述第三获取单元,用 于将所述原始图像数据和所述目标分割结果输入所述目标分类模型中的第一个卷积子模型进行处理,得到所述第一个卷积子模型输出的分类特征;从第二个卷积子模型起,将上一个卷积子模型输出的分类特征输入下一个卷积子模型进行处理,得到下一个卷积子模型输出的分类特征;将最后一个卷积子模型输出的分类特征输入第一个全连接子模型进行处理,得到所述第一个全连接子模型输出的全连接特征;从第二个全连接子模型起,将上一个全连接子模型输出的全连接特征输入下一个全连接子模型进行处理,得到下一个全连接子模型输出的全连接特征;将最后一个全连接子模型输出的全连接特征输入所述置信度预测子模型进行处理,得到所述置信度预测子模型输出的所述目标重建置信度信息。
  19. 根据权利要求13-16任一所述的装置,其特征在于,所述第一获取单元,还用于获取至少一个样本图像、所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果;
    所述装置还包括:
    训练单元,用于基于所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果对初始分割模型进行监督训练,得到所述目标分割模型。
  20. 根据权利要求17所述的装置,其特征在于,所述第一获取单元,还用于获取至少一个样本图像、所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果;
    所述训练单元,还用于基于所述至少一个样本图像分别对应的原始样本图像数据、所述至少一个样本图像分别对应的重建参考样本数据以及所述至少一个样本图像分别对应的标准分割结果对初始分割模型和初始分类模型进行对抗训练,得到所述目标分割模型和所述目标分类模型。
  21. 根据权利要求20所述的装置,其特征在于,所述训练单元,还用于调用所述初始分割模型,基于所述至少一个样本图像中的第一样本图像对应的原始样本图像数据和所述第一样本图像对应的重建参考样本数据,获取所述第一样本图像对应的预测分割结果;调用所述初始分类模型,基于所述第一样本图像对应的原始样本图像数据和所述第一样本图像对应的预测分割结果,获取第一重建置信度信息;基于所述第一样本图像对应的原始样本图像数据和所述第一样本图像对应的标准分割结果,获取第二重建置信度信息;基于所述第一重建置信度信息和所述第二重建置信度信息,确定第一损失函数;基于所述第一损失函数更新所述初始分类模型的参数;响应于所述初始分类模型的参数的更新过程满足第一终止条件,得到第一分类模型;调用所述初始分割模型,基于所述至少一个第一样本图像中的第二样本图像对应的原始样本图像数据和所述第二样本图像对应的重建参考样本数据,获取所述第二样本图像对应的预测分割结果;调用所述第一分类模型,基于所述第二样本图像对应的原始样本图像数据和所述第二样本图像对应的预测分割结果,获取第三重建置信度信息;基于所述第三重建置信度信息、所述第二样本图像对应的预测分割结果和所述第二样本图像对应的标准分割结果,确定第二损失函数;基于所述第二损失函数更新所述初始分割模型的参数;响应于所述初始分割模型的参数的更新过程满足第二终止条件,得到第一分割模型;响应于对抗训练过程未满足目标终止条件,继续对所述第一分类模型和所述第一分割模型进行对抗训练,直至对抗训练过程满足所述目标终止条件,得到所述目标分类模型和所述目标分割模型。
  22. 根据权利要求13-16任一所述的装置,其特征在于,所述第一获取单元,还用于响应于所述目标树状组织在所述目标图像中的完整重建结果未满足重建终止条件,基于所述目标树状组织在所述目标图像中的完整重建结果,在所述初始图像中获取与所述目标树状组织对应的下一个局部图像;获取所述目标树状组织在所述下一个局部图像中的完整重建结果;响应于所述目标树状组织在所述下一个局部图像中的完整重建结果满足所述重建终止条件,基于已获取的所述目标树状组织在各个局部图像中的完整重建结果,获取所述目标树状组织在所述初始图像中的完整重建结果。
  23. 根据权利要求13-16任一所述的装置,其特征在于,所述目标树状组织为目标神经元,所述目标图像从包含所述目标神经元的大脑三维图像中获取得到。
  24. 根据权利要求23所述的装置,其特征在于,所述任一像素点的目标类别用于指示所述任一像素点属于所述目标神经元或者所述任一像素点不属于所述目标神经元;
    所述重建单元,用于基于所述目标分割结果,在所述目标图像中的各个像素点中确定属于所述目标神经元的目标像素点;基于所述目标像素点,在所述目标图像中标记出所述目标神经元的神经元节点以及所述目标神经元的神经元节点之间的连接关系,得到目标标记结果;基于所述目标标记结果,确定所述目标神经元在所述目标图像中的完整重建结果。
  25. 一种计算机设备,其特征在于,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条程序代码,所述至少一条程序代码由所述处理器加载并执行,以实现如权利要求1至12任一所述的对图像中的树状组织进行重建的方法。
  26. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条程序代码,所述至少一条程序代码由处理器加载并执行,以实现如权利要求1至12任一所述的对图像中的树状组织进行重建的方法。
PCT/CN2021/121600 2020-11-09 2021-09-29 对图像中的树状组织进行重建的方法、设备及存储介质 WO2022095640A1 (zh)
