WO2018180386A1 - Ultrasonic image diagnosis support method and system - Google Patents

Ultrasonic image diagnosis support method and system

Info

Publication number
WO2018180386A1
WO2018180386A1 (PCT/JP2018/009336)
Authority
WO
WIPO (PCT)
Prior art keywords
image
ultrasonic
region
images
tissue
Prior art date
Application number
PCT/JP2018/009336
Other languages
English (en)
Japanese (ja)
Inventor
坂無 英徳
優大 山﨑
昌也 岩田
博和 野里
高橋 栄一
Original Assignee
国立研究開発法人産業技術総合研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立研究開発法人産業技術総合研究所
Priority to JP2019509166A (JP6710373B2)
Publication of WO2018180386A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/13Tomography
    • A61B8/14Echo-tomography

Definitions

  • the present invention relates to an ultrasonic diagnostic imaging support method, system, and apparatus.
  • As lesion-detection software for breast ultrasound images, there is B-CAD (registered trademark), developed by the Canadian company Medipattern (registered trademark).
  • In B-CAD, a lesion (tumor) that appears as a dark blocky region is targeted, and the examiner (user) specifies the approximate location of the lesion.
  • The contour of the lesion is then automatically extracted based on this position information, and the degree of malignancy is calculated based on its shape and size.
  • The measurement result is displayed as a moving image.
  • A tumor is depicted as a dark blocky shadow, so it can be detected even if each frame of a moving image is treated as an independent still image.
  • Such a tumor can also be detected by the still-image (pathological image) abnormality detection technique disclosed in Patent Document 1.
  • In contrast, the shape of a non-mass lesion is not clear, and the texture changes exhibited by the mammary gland tissue must be observed. The approach of Patent Document 1 therefore cannot be used; the correlation between preceding and following frames must be measured, so moving-image pattern recognition is essential.
  • In Patent Document 2, a position sensor is attached to the ultrasonic probe, and image information and position information are combined to construct three-dimensional data of the internal structure; a technique is disclosed that determines whether an image shows a tumor based on the surface area and volume ratio of the tumor.
  • Patent Document 3 discloses a technique that estimates the probe position by analyzing the acquired images instead of attaching a position sensor to the ultrasonic probe.
  • Methods using the gradient of the luminance value assume that a lesion appears in the input image, and therefore always overdetect in normal regions. They cannot be applied when images showing no lesion are targeted, and are not practical for breast ultrasonography.
  • The estimated position information is displayed on the screen as a body mark and is used only to make it easier for the doctor to grasp the examination site; it is not used for automatic detection of lesions.
  • An object of the present invention is to increase the detection accuracy of an ultrasonic inspection system that automatically detects lesions from a moving image composed of a plurality of temporally continuous frames output from an ultrasonic inspection apparatus as the ultrasonic probe is moved.
  • the ultrasonic image diagnosis support system or method shown in FIG. 1 includes a learning phase (S10) and an examination phase (S20 to S24).
  • The diagnostic part consists of the diagnostic tissue (observation site) and its periphery, and the output includes a display.
  • In the following, the diagnostic tissue is described as mammary gland tissue and the lesion as a tumor by way of example, but the combination is not limited to these.
  • In the learning phase (S10), images showing previously cut-out tumors and other images are used as input, and a model that classifies tumor versus other is created from these images (patch images) using a deep learning method.
  • In the examination phase, tumor candidate regions are detected (S21) by comparing the model obtained in the learning phase with the image of each frame of the moving image (S20). The mammary gland tissue is then automatically extracted, and tumor candidate regions in regions other than the mammary gland are removed (S22). Further, tumor candidate regions that occur only transiently are removed using the continuity of frames (S23), and the finally remaining tumor candidate regions are output as the detection result (S24).
  • An ultrasonic diagnostic imaging support method comprising a learning phase (S10) and an examination phase (S20 to S24). In the learning phase (S10), images showing previously cut-out lesions and other images are input as patch images, and a model that classifies lesion versus other is created from the patch images using a deep learning method.
  • In the examination phase, a moving image consisting of a plurality of frames of a diagnostic part including the diagnostic tissue is acquired by operating the ultrasonic probe of an ultrasonic inspection apparatus (S20); the model obtained in the learning phase is compared with the image of each frame of the moving image to detect lesion candidate regions in the diagnostic part (S21); the diagnostic-tissue region is automatically extracted from the frame image, and the regions other than the diagnostic tissue, together with the lesion candidate regions contained in them, are removed (S22); lesion candidate regions that occur only transiently in the diagnostic tissue are removed using the continuity of the sequence of frames (S23); and only the diagnostic-tissue region in which the finally remaining lesion candidate regions are marked is output as the detection result (S24).
  • In the ultrasonic image diagnosis support method above, a multi-resolution image composed of images at a plurality of resolutions is created from the moving-image frame (S210), detection of lesion candidate regions is performed on the image of each layer of the multi-resolution image (S211), and the coordinates of the abnormal regions in the image of each layer are converted into the coordinates of the original resolution and the detection results at the plurality of resolutions are integrated (S212).
  • An ultrasonic diagnostic imaging support system that uses an ultrasonic moving image (hereinafter simply referred to as a moving image) obtained by operating the ultrasonic probe of an ultrasonic inspection apparatus, the system comprising a learning phase (S10) and an examination phase (S20 to S24). In the learning phase (S10), images showing previously cut-out lesions and other images are input as patch images.
  • In the examination phase of the system, the same steps are performed: a moving image consisting of a plurality of frames of the diagnostic part including the diagnostic tissue is acquired (S20); the model obtained in the learning phase is compared with each frame image to detect lesion candidate regions (S21); the diagnostic-tissue region is automatically extracted, and the regions other than it, together with the lesion candidate regions they contain, are removed (S22); lesion candidate regions that occur only transiently in the diagnostic tissue are removed using the continuity of the sequence of frames (S23); and only the diagnostic-tissue region in which the finally remaining lesion candidate regions are marked is output as the detection result (S24).
  • In the system as well, a multi-resolution image composed of images at a plurality of resolutions is created from the moving-image frame (S210), detection of lesion candidate regions is performed on the image of each layer of the multi-resolution image (S211), and the coordinates of the abnormal regions in each layer are converted into the coordinates of the original resolution and the detection results at the plurality of resolutions are integrated (S212).
  • The accuracy of the ultrasonic image diagnosis support method and system is improved by combining the automatic extraction of the diagnostic tissue with the suppression of excessive lesion detection according to the present invention. As a result, more ultrasonic image diagnoses can be supported accurately in a short time.
  • A patch image of a tumor is created as an abnormal image for learning by perturbing the tumor's center of gravity within a range in which the tumor does not protrude from the patch image.
  • Normal patch images of arbitrary size are created by specifying positions with random numbers in frames of the breast ultrasound moving image in which no tumor is drawn.
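A minimal NumPy sketch of this patch-creation step follows, assuming frames are 2-D grayscale arrays; the function names, patch size, shift range, and counts are illustrative choices, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def tumor_patches(frame, centroid, patch=50, n=8, max_shift=10):
    """Abnormal patches: perturb the tumor's center of gravity, clipping so the
    patch stays inside the frame (a simplification of the condition that the
    tumor must not protrude from the patch)."""
    H, W = frame.shape
    half = patch // 2
    out = []
    for _ in range(n):
        cy = int(np.clip(centroid[0] + rng.integers(-max_shift, max_shift + 1),
                         half, H - half))
        cx = int(np.clip(centroid[1] + rng.integers(-max_shift, max_shift + 1),
                         half, W - half))
        out.append(frame[cy - half:cy + half, cx - half:cx + half])
    return out

def normal_patches(frame, n=8, patch=50):
    """Normal patches: positions chosen with random numbers in a frame known
    to contain no tumor."""
    H, W = frame.shape
    ys = rng.integers(0, H - patch, size=n)
    xs = rng.integers(0, W - patch, size=n)
    return [frame[y:y + patch, x:x + patch] for y, x in zip(ys, xs)]
```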
  • Model learning: in order to calculate a model for classifying normal versus abnormal (tumor), the proposed system uses a machine learning method.
  • A deep learning method such as a Deep Belief Network (DBN) or a neural network built with Stacked Denoising Auto-Encoders (SDAE) is used.
  • DBN: Deep Belief Network
  • SDAE: Stacked Denoising Auto-Encoder
  • CNN: Convolutional Neural Network
  • SVM: Support Vector Machine
  • Other applicable classifiers include logistic regression analysis, linear discriminant analysis, the random forest method, and boosting methods (AdaBoost, LogitBoost, etc.).
  • The neural network shown in Fig. 3 consists of units connected by links, and the link weights (parameters) are automatically adjusted (updated) based on the training patch images to calculate a model that classifies normal versus abnormal.
  • The weights are updated sequentially using optimization methods such as stochastic gradient descent, the momentum method, Adam, AdaGrad, and AdaDelta.
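To make one such sequential update concrete, here is a minimal NumPy sketch of the momentum method; plain stochastic gradient descent is the special case mu = 0. The learning rate and momentum coefficient are illustrative values, not taken from the patent.

```python
import numpy as np

def momentum_step(w, grad, vel, lr=0.01, mu=0.9):
    """One sequential update of the link weights w, given the gradient of the
    loss on a mini-batch; vel carries the running update direction."""
    vel = mu * vel - lr * grad   # accumulate the momentum term
    return w + vel, vel          # updated weights and velocity
```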
  • A normal region (a region other than a tumor) of a breast ultrasound image may resemble a tumor, and such a normal region may be erroneously determined to be a tumor (overdetection).
  • Therefore, normal samples that are easily overdetected are preferentially selected, and the model is updated using those images (priority learning).
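A minimal sketch of this priority selection, assuming `model_score` is any callable that maps a patch to an abnormality score; the interface and names are assumptions, not the patent's API.

```python
import numpy as np

def select_priority_normals(model_score, normals, k):
    """Rank normal patches by the model's abnormality score and keep the k
    most tumor-like (i.e. most easily overdetected) for the next update."""
    scores = np.array([model_score(p) for p in normals])
    hardest = np.argsort(scores)[::-1][:k]   # highest score = hardest negative
    return [normals[i] for i in hardest]
```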
  • Pre-training: in the deep learning method, learning is performed in two stages, pre-training and fine tuning.
  • In pre-training, the weights are set by unsupervised learning. The weights obtained by pre-training are then used as initial values, and the weights are updated by an ordinary learning method such as error back-propagation (fine tuning).
  • An ultrasonic moving image of the subject's observation site and its surroundings is acquired for a predetermined time (S20).
  • A breast ultrasound moving image, showing the cut plane in the depth direction obtained when the ultrasonic probe is scanned in one direction along the surface of the examination site commonly called the breast, is composed of multiple frame images arranged in order of occurrence in time. These images are called breast ultrasound images.
  • FIG. 13 shows a flowchart of the tumor detection process (S21).
  • The deep learning method is applied to local regions of each frame of the input moving image, and regions determined to be abnormal (tumor) are set as tumor candidate regions.
  • a multi-resolution image composed of a plurality of resolution images is created from the input image (moving image frame) (S210).
  • the tumor candidate region detection process is performed on the images of each layer of the multi-resolution image (S211).
  • The coordinates of the abnormal regions in the image of each layer calculated above are converted into the coordinates of the original resolution, and the detection results at each resolution are integrated (S212). Each process is described in detail below.
  • FIG. 4 shows an example of a multi-resolution image when the magnification is set to 1 ⁇ , 0.75 ⁇ , and 0.5 ⁇ .
  • The Bicubic method was used as the image scaling algorithm.
  • The Nearest Neighbour method or the Bilinear method can also be selected as the scaling algorithm.
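A minimal OpenCV sketch of the pyramid construction (S210); cv2.INTER_CUBIC corresponds to the Bicubic method, and cv2.INTER_NEAREST or cv2.INTER_LINEAR can be substituted for the Nearest Neighbour or Bilinear methods. The scale set matches the 1x/0.75x/0.5x example in the text.

```python
import cv2

def multi_resolution(frame, scales=(1.0, 0.75, 0.5),
                     interpolation=cv2.INTER_CUBIC):
    """Build the multi-resolution image from one moving-image frame (S210)."""
    return [frame if s == 1.0 else
            cv2.resize(frame, None, fx=s, fy=s, interpolation=interpolation)
            for s in scales]
```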
  • Detection of tumor candidate regions in each layer: there are two methods. The first prepares a rectangular region (search window) of preset size and raster-scans the image of each layer of the multi-resolution image, determining whether each position is normal or abnormal (S211a). The second divides the image of each layer into a plurality of regions by the superpixel method and determines whether each region is normal or abnormal (S211b).
  • As the feature quantities, HOG (Histograms of Oriented Gradients), LBP (Local Binary Pattern), GLAC (Gradient Local Auto-Correlation), NLAC (Normal Local Auto-Correlation), HLAC (Higher-order Local Auto-Correlation), and Gabor features can be used.
  • While shifting the search window, the processes of “feature vector acquisition” and “normal/abnormal determination” are repeated so that the entire input image is scanned.
  • The search window is moved by dx (pixels) in the horizontal direction and dy (pixels) in the vertical direction, and the above “feature vector acquisition” and “normal/abnormal determination” are repeated.
  • the coordinates of each search window and the labels at each search window position are accumulated.
  • The movement widths dx and dy of the search window are made smaller than the window size h×w (for example, dx is half of w and dy is half of h) so that each window overlaps to some extent with regions already examined. As a result, a single tumor is judged by a plurality of search windows at shifted positions, so an improvement in tumor detection accuracy can be expected.
  • The regions determined to be abnormal above are set as tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
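A minimal sketch of the raster scan (S211a), assuming the image is a 2-D NumPy array and `classify` is an assumed callable that returns True when the flattened h×w window is judged abnormal (for example, the learned model applied to the pixel vector).

```python
def sliding_window_scan(image, classify, h=50, w=50, dy=25, dx=25):
    """Raster-scan one layer image with an overlapping search window; dx and dy
    are half of w and h, as in the text. Returns tumor candidate regions as
    (x0, y0, x1, y1) corner coordinates."""
    H, W = image.shape
    candidates = []
    for y in range(0, H - h + 1, dy):
        for x in range(0, W - w + 1, dx):
            window = image[y:y + h, x:x + w]
            if classify(window.reshape(-1)):   # h*w-dimensional feature vector
                candidates.append((x, y, x + w, y + h))
    return candidates
```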
  • a superpixel method is applied to the image to divide the image into a plurality of non-overlapping regions (superpixels).
  • A feature vector (h×w dimensions) is obtained by rearranging into one row the pixel values of the h-by-w rectangular region centered on the superpixel's center of gravity.
  • The “feature vector acquisition” and “normal/abnormal determination” processes are then applied to each superpixel.
  • The coordinates of each region and the label at each position are accumulated.
  • The regions determined to be abnormal are set as tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
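A sketch of the superpixel variant (S211b), using SLIC from scikit-image as one concrete superpixel method (the text does not name a specific algorithm, so this choice is an assumption); `classify` is the same assumed callable as in the sliding-window sketch.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_candidates(image, classify, n_segments=200, h=50, w=50):
    """Divide the layer image into non-overlapping superpixels, then classify
    the h x w rectangle centered on each superpixel's center of gravity."""
    labels = slic(image, n_segments=n_segments, channel_axis=None)
    H, W = image.shape
    candidates = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        cy, cx = int(ys.mean()), int(xs.mean())   # center of gravity
        y0 = min(max(cy - h // 2, 0), H - h)      # clamp to the image
        x0 = min(max(cx - w // 2, 0), W - w)
        patch = image[y0:y0 + h, x0:x0 + w]       # h x w feature region
        if classify(patch.reshape(-1)):
            candidates.append((x0, y0, x0 + w, y0 + h))
    return candidates
```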
  • FIG. 14 shows a flowchart of the process for suppressing overdetection outside the mammary gland tissue (S22). Since tumors occur in mammary gland tissue, regions detected in tissue other than the mammary gland by the tumor detection process (S21) are overdetections.
  • mammary gland tissue is automatically extracted from the breast ultrasound image, and tumor candidate areas other than the mammary gland are removed (S22).
  • There are two methods for the automatic extraction of mammary gland tissue in S22: “automatic extraction by Otsu's binarization and graph cut (S221)” and “automatic extraction by ZCA whitening and CRF (S222)”. Either method can be selected.
  • The breast ultrasound image is divided into strip-shaped regions of width w, and Otsu's binarization is applied to each strip-shaped region.
  • The mean u and variance σ of the luminance values of the original image are acquired in the areas above the threshold set automatically by Otsu's binarization (areas determined to be white).
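A minimal OpenCV/NumPy sketch of this per-strip Otsu step (part of S221), assuming an 8-bit grayscale image; the strip width is an illustrative value.

```python
import numpy as np
import cv2

def strip_otsu_stats(image, strip_w=32):
    """Apply Otsu's binarization to each vertical strip of width strip_w and
    collect the mean u and variance of the original luminance values in the
    above-threshold (white) area of each strip."""
    stats = []
    for x0 in range(0, image.shape[1], strip_w):
        strip = np.ascontiguousarray(image[:, x0:x0 + strip_w])
        _, binary = cv2.threshold(strip, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        white = strip[binary == 255]           # pixels judged white by Otsu
        if white.size:
            stats.append((float(white.mean()), float(white.var())))
    return stats
```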
  • the breast ultrasound image is divided into M local regions of length h and width w.
  • V represents the set of M local regions, and Ni represents the regions adjacent to local region i (8 adjacent regions in this system).
  • ψu(y_i) is called the data term and ψp(y_i, y_j) the smoothing term; they are defined as follows in this system.
  • The first step is ZCA whitening, which weakens the correlation between depth and luminance value in order to reduce the effect of depth on the luminance values.
  • The depth is the distance from the upper edge of the image to the pixel. Because the ultrasonic wave attenuates as it travels deeper into the tissue from the skin and the reflected wave weakens, the echo level (brightness on the ultrasonic image) decreases with depth; the whitening mitigates this phenomenon.
  • the brightness value in an arbitrary range is enhanced using a piecewise linear function.
  • ZCA whitening is a process that brings the correlation between variables close to zero in order to remove the bias among a plurality of strongly correlated variables.
  • For each pixel i (i = 1, ..., L), the depth d_i and the luminance value v_i are paired into a vector t_i = (d_i, v_i)^T, where T represents transposition, and ZCA whitening is applied to these vectors.
  • First, principal component analysis is applied to the L vectors [t_1, ..., t_L] to calculate a matrix U whose columns are the eigenvectors and a diagonal matrix Λ = diag(λ_1, λ_2) (where λ_1 > λ_2) whose diagonal elements are the eigenvalues.
  • The ZCA whitening transformation matrix W = U Λ^(-1/2) U^T is then calculated, and the whitened vector is obtained as t̄_i = W t_i.
  • The second component v̄_i of t̄_i is used as the luminance value after applying ZCA whitening.
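The transformation can be written compactly in NumPy. This is a standard ZCA implementation consistent with the description above (eigendecomposition U, Λ of the covariance, transform W = U Λ^(-1/2) U^T); the small epsilon for numerical stability is an addition.

```python
import numpy as np

def zca_whiten(T, eps=1e-5):
    """ZCA whitening of the row vectors t_i = (d_i, v_i); returns the whitened
    vectors, whose second column is the luminance after whitening."""
    Tc = T - T.mean(axis=0)            # center the variables
    cov = Tc.T @ Tc / len(Tc)          # 2 x 2 covariance matrix
    lam, U = np.linalg.eigh(cov)       # eigenvalues and eigenvectors
    W = U @ np.diag(1.0 / np.sqrt(lam + eps)) @ U.T   # ZCA transform matrix
    return Tc @ W.T                    # whitened (depth, luminance) pairs
```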
  • The luminance value z_i after the linear conversion is calculated by converting the whitened luminance value with the following equation.
  • Luminance histogram acquisition (S222b): in order to capture the brightness within the mammary gland tissue, a histogram of luminance values is used as the feature vector. The breast ultrasound image is divided into M rectangular regions (patch images), and a histogram of luminance values is calculated from each patch image. Since the image has been converted to G gradations, a G-dimensional feature vector is obtained from each of the M patch images.
  • A conditional random field (CRF), which can estimate the tissue (label) depicted in each patch image, is used.
  • The mammary gland likelihood is calculated for each region.
  • CRF is defined by the following probability model.
  • E(X, Y, w) is called the energy function.
  • V represents the set of patch images, and Ni represents the n neighbors of patch image i (the number of neighbors n can be set arbitrarily; usually 8 neighbors are used).
  • ψu is called the data term and ψp the pair-wise term (smoothing term); ψu is defined as follows.
  • w = [w_u, w_p] are the learning parameters of the CRF.
  • The estimate ŵ obtained by solving the following equation is used.
  • The equation can be solved by an optimization method such as stochastic gradient descent, the momentum method, Adam, AdaGrad, or AdaDelta.
  • A larger value means a higher degree of mammary-gland-likeness.
  • Regions with a value of 0.5 or more are determined to be mammary gland tissue.
  • Since a shadow generated by the influence of speckle noise has no volume, it is not drawn at the same position in multiple consecutive frames and is detected only sporadically as a tumor candidate region; such detections should be regarded as overdetection.
  • the tumor candidate region that occurs at the same position in consecutive frames is treated as the final tumor region.
  • To remove such overdetections, the position information of the tumor candidate regions in a plurality of continuous frames (reference frames) captured before the frame being observed (the frame of interest) is used.
  • Tumor candidate regions that appear in only a few of the consecutive frames are removed, and only the candidate regions that are detected at multiple positions in the spatial and temporal directions are finally kept as representative tumor regions.
  • The number of reference frames used for overdetection suppression can be set arbitrarily; the more reference frames are used, the greater the suppression effect. For example, if only two frames including the frame of interest are used, the ultrasound images drawn in those frames are similar, so overdetection is likely to occur at the same position and may not be removed.
  • If the number of reference frames used for overdetection suppression is more than two (for example, five frames including the frame of interest), the shape (pattern) of the drawn tissue varies from frame to frame, so the probability that overdetection occurs at the same position in all frames is reduced, and overdetections can be removed appropriately.
  • Increasing the number of reference frames increases the effect of overdetection removal, but carries the risk of erroneously removing candidate regions in which a true tumor is detected. For example, when five reference frames are used for overdetection suppression and the tumor is drawn in only two of them, the tumor candidate regions detected in those two frames may be removed.
  • The first contrivance against this is that, in “detection of tumor candidate regions by sliding window” (S211a), the search windows are moved so as to overlap each other, so that as many tumor candidate regions as possible are detected at the same position.
  • Mean shift clustering is then performed based on the obtained center coordinates and frame numbers of the tumor candidate regions, grouping the tumor candidate regions.
  • the number of elements of each group is calculated, and all tumor candidate regions belonging to groups whose number of elements is equal to or less than a preset threshold value are removed.
  • the remaining tumor candidate area is finally determined as a tumor area.
  • Let T be the currently displayed frame number and s the number of frames of interest.
  • The center coordinates (Xc, Yc) and frame numbers (Tc) of the tumor candidate regions c in the s frames from frame T-s+1 to frame T of the moving image are acquired (FIG. 8).
  • the center coordinates are calculated as shown in FIG. 9 from the upper left coordinates (x0, y0) and lower right coordinates (x1, y1) of the tumor candidate region and the frame number T.
  • Mean shift clustering is executed with the center coordinates and frame numbers of all tumor candidate regions as input vectors, and K group numbers {g_1, ..., g_K} are assigned to the input vectors.
  • the number K of groups is automatically determined by the mean shift clustering algorithm.
  • A clustering method that can automatically adjust the number of clusters, such as the x-means method or the Infinite Gaussian Mixture Model (IGMM), can also be applied.
  • For the K groups {g_1, ..., g_K} calculated above, the number of elements in each group is acquired. If the number of elements is equal to or less than a preset threshold Th_n, all abnormal regions belonging to that group are deleted. In the present invention, the value 5 is set as the threshold Th_n.
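A sketch of this grouping with scikit-learn's MeanShift, assuming `centers` is an array whose rows are (Xc, Yc, Tc); when bandwidth is None, scikit-learn estimates it automatically, which stands in for whatever bandwidth the actual implementation uses.

```python
import numpy as np
from sklearn.cluster import MeanShift

def suppress_transients(centers, th_n=5, bandwidth=None):
    """Group candidate regions by mean shift over (Xc, Yc, Tc) and delete every
    candidate belonging to a group with th_n or fewer members (S23)."""
    X = np.asarray(centers, dtype=float)
    labels = MeanShift(bandwidth=bandwidth).fit(X).labels_
    counts = np.bincount(labels)           # size of each group
    return X[counts[labels] > th_n]        # keep only persistent groups
```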
  • The network structure based on the deep learning method had five layers (the numbers of units were 2500, 900, 625, 225, and 2, in order from the input layer); the search window was 50 pixels high and 50 pixels wide, and its movement width was 25 pixels in the y direction and 25 pixels in the x direction.
  • The multi-resolution image had three layers (1x, 0.75x, 0.5x). To verify the effect of priority learning, the overdetection suppression processes of steps S22 and S23 were not introduced in the examination phase; only the tumor candidate regions of step S21 were used.
  • FIG. 10 shows the average number of overdetections per frame and the detection rate. Introducing priority learning reduces the number of overdetections while further increasing the detection rate. From these results, priority learning is considered effective as a technique for improving detection accuracy.
  • The network structure based on the deep learning method had four layers (the numbers of units were 625, 500, 500, and 2, in order from the input layer); the search window was 50 pixels high and 50 pixels wide, and its movement width was 25 pixels in the y direction and 25 pixels in the x direction.
  • The 50×50 image in the search window was reduced to 25×25 by the Bicubic method before being input to the deep learning network.
  • the multi-resolution image has 3 layers (1x, 0.75x, 0.5x).
  • To verify the effect of the automatic extraction of mammary gland tissue, priority learning in the learning phase and the overdetection suppression process of step S23 were not introduced; only the detection of tumor candidate regions in step S21 and the suppression of overdetection outside the mammary gland tissue in step S22 were used.
  • As the automatic extraction method for the mammary gland tissue, the method based on Otsu's binarization and graph cut in step S221 was adopted.
  • FIG. 11 shows a comparison of the numbers of overdetections in the experimental results. The automatic extraction of mammary gland tissue is effective in reducing overdetections in tissue other than the mammary gland (non-mammary-gland regions) and suppressing overdetection.
  • The network structure based on the deep learning method had four layers (the numbers of units were 625, 500, 500, and 2, in order from the input layer); the search window was 50 pixels high and 50 pixels wide, and its movement width was 25 pixels in the y direction and 25 pixels in the x direction.
  • The 50×50 image in the search window was reduced to 25×25 by the Bicubic method before being input to the deep learning network.
  • the multi-resolution image has 3 layers (1x, 0.75x, 0.5x).
  • In this experiment, priority learning in the learning phase and the suppression of overdetection outside the mammary gland tissue in step S22 were not introduced; only the detection of tumor candidate regions in step S21 and the overdetection suppression using the frame continuity of step S23 were used.
  • FIG. 12 shows a comparison of the average number of overdetections per frame. Applying the overdetection suppression process that uses the continuity of frames reduces overdetection.
  • Reference numerals: 1 abnormality judgment region (lesion candidate region); 2 abnormality judgment region not included in the observation-site region; 3 region of the observation site; 4 region that is not the observation site; 5 abnormality judgment region that is inside the observation-site region but is not drawn continuously across still-image frames; 6 patch image; 7 learning model DB; 8 observation site and its surroundings (diagnostic part).

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)

Abstract

The present invention concerns an ultrasonic examination system in which the detection accuracy of automatic lesion detection is increased, based on moving images comprising a plurality of temporally continuous frame sequences output by ultrasonic examination equipment as an ultrasonic probe is moved. In a learning phase, a model is created by inputting previously cut-out images in which a tumor appears, together with other images, and classifying the images into tumor and non-tumor using a deep learning method based on these images (patch images). In an examination phase, regions serving as tumor candidates are detected (S21) by comparing the image of each frame of the moving images (S20) with the model obtained in the learning phase. Mammary gland tissue is then automatically extracted, and tumor candidate regions in regions that are not mammary gland are removed (S22). Furthermore, sporadically generated tumor candidate regions are removed using frame continuity (S23), and the finally remaining tumor candidate regions are output as the detection result (S24).
PCT/JP2018/009336 2017-03-30 2018-03-09 Ultrasonic image diagnosis support method and system WO2018180386A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019509166A JP6710373B2 (ja) 2017-03-30 2018-03-09 Ultrasonic image diagnosis support method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017068394 2017-03-30
JP2017-068394 2017-03-30

Publications (1)

Publication Number Publication Date
WO2018180386A1 (fr)

Family

ID=63677201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/009336 WO2018180386A1 (fr) Ultrasonic image diagnosis support method and system

Country Status (2)

Country Link
JP (1) JP6710373B2 (fr)
WO (1) WO2018180386A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784692A (zh) * 2018-12-29 2019-05-21 重庆大学 Deep-learning-based fast security-constrained economic dispatch method
WO2020175356A1 (fr) * 2019-02-27 2020-09-03 学校法人慶應義塾 Information medium, image diagnosis assistance device, learning device, and trained model generation method
JP2021033826A (ja) * 2019-08-28 2021-03-01 龍一 中原 Medical image processing device, medical image processing method, and medical image processing program
CN112638279A (zh) * 2018-10-22 2021-04-09 百合医疗科技株式会社 Ultrasonic diagnosis system
WO2021145584A1 (fr) * 2020-01-16 2021-07-22 성균관대학교산학협력단 Apparatus for correcting the position of an ultrasound scanner for artificial-intelligence ultrasound self-diagnosis using augmented-reality glasses, and remote medical diagnosis method using the same
KR102304609B1 (ko) * 2021-01-20 2021-09-24 주식회사 딥바이오 Tissue specimen image refinement method, and computing system for performing the same
WO2021206170A1 (fr) * 2020-04-10 2021-10-14 公益財団法人がん研究会 Diagnostic imaging device, diagnostic imaging method, diagnostic imaging program, and trained model
WO2022259299A1 (fr) * 2021-06-07 2022-12-15 日本電信電話株式会社 Object detection device and method
JP2022553979A (ja) * 2020-02-10 2022-12-27 騰訊科技(深圳)有限公司 Medical image processing method, image processing method, medical image processing device, image processing device, computer device, and program
WO2023113414A1 (fr) * 2021-12-13 2023-06-22 주식회사 딥바이오 Method for training an artificial neural network that provides a determination result for a pathological specimen, and computing system for performing the same
CN116485791A (zh) * 2023-06-16 2023-07-25 华侨大学 Absorbance-based automatic detection method and system for dual-view breast tumor lesion regions

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102446638B1 (ko) * 2021-04-28 2022-09-26 주식회사 딥바이오 Learning method for training an artificial neural network to discriminate breast cancer lesion regions, and computing system for performing the same

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015154918A (ja) * 2014-02-19 2015-08-27 三星電子株式会社 (Samsung Electronics Co., Ltd.) Lesion detection device and method
WO2016088758A1 (fr) * 2014-12-01 2016-06-09 国立研究開発法人産業技術総合研究所 Ultrasonic examination system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015154918A (ja) * 2014-02-19 2015-08-27 三星電子株式会社 (Samsung Electronics Co., Ltd.) Lesion detection device and method
WO2016088758A1 (fr) * 2014-12-01 2016-06-09 国立研究開発法人産業技術総合研究所 Ultrasonic examination system and method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112638279A (zh) * 2018-10-22 2021-04-09 百合医疗科技株式会社 Ultrasonic diagnosis system
CN109784692B (zh) * 2018-12-29 2020-11-24 重庆大学 Deep-learning-based fast security-constrained economic dispatch method
CN109784692A (zh) * 2018-12-29 2019-05-21 重庆大学 Deep-learning-based fast security-constrained economic dispatch method
JP7185242B2 (ja) 2019-02-27 2022-12-07 株式会社フィックスターズ Program and image diagnosis assistance device
WO2020175356A1 (fr) * 2019-02-27 2020-09-03 学校法人慶應義塾 Information medium, image diagnosis assistance device, learning device, and trained model generation method
JPWO2020175356A1 (fr) * 2019-02-27 2020-09-03
JP2021033826A (ja) * 2019-08-28 2021-03-01 龍一 中原 Medical image processing device, medical image processing method, and medical image processing program
JP7418730B2 (ja) 2024-01-22 龍一 中原 Medical image processing device, medical image processing method, and medical image processing program
WO2021145584A1 (fr) * 2020-01-16 2021-07-22 성균관대학교산학협력단 Apparatus for correcting the position of an ultrasound scanner for artificial-intelligence ultrasound self-diagnosis using augmented-reality glasses, and remote medical diagnosis method using the same
JP2022553979A (ja) * 2020-02-10 2022-12-27 騰訊科技(深圳)有限公司 Medical image processing method, image processing method, medical image processing device, image processing device, computer device, and program
WO2021206170A1 (fr) * 2020-04-10 2021-10-14 公益財団法人がん研究会 Diagnostic imaging device, diagnostic imaging method, diagnostic imaging program, and trained model
WO2022158843A1 (fr) * 2021-01-20 2022-07-28 주식회사 딥바이오 Tissue specimen image refinement method, and computing system for implementing the same
KR102304609B1 (ko) * 2021-01-20 2021-09-24 주식회사 딥바이오 Tissue specimen image refinement method, and computing system for performing the same
WO2022259299A1 (fr) * 2021-06-07 2022-12-15 日本電信電話株式会社 Object detection device and method
WO2023113414A1 (fr) * 2021-12-13 2023-06-22 주식회사 딥바이오 Method for training an artificial neural network that provides a determination result for a pathological specimen, and computing system for performing the same
CN116485791A (zh) * 2023-06-16 2023-07-25 华侨大学 Absorbance-based automatic detection method and system for dual-view breast tumor lesion regions
CN116485791B (zh) * 2023-06-16 2023-09-29 华侨大学 Absorbance-based automatic detection method and system for dual-view breast tumor lesion regions

Also Published As

Publication number Publication date
JPWO2018180386A1 (ja) 2019-11-07
JP6710373B2 (ja) 2020-06-17

Similar Documents

Publication Publication Date Title
JP6710373B2 (ja) Ultrasonic image diagnosis support method and system
Shaziya et al. Automatic lung segmentation on thoracic CT scans using U-net convolutional network
JP4739355B2 (ja) Fast object detection method by statistical template matching
Zhu et al. Detection of the optic disc in images of the retina using the Hough transform
CN106999161B (zh) Ultrasonic examination system
Hossain Microcalcification segmentation using modified U-net segmentation network from mammogram images
JP2006346465A (ja) Method, apparatus, and storage medium for detecting heart, rib cage, and diaphragm boundaries
David et al. Robust classification of brain tumor in MRI images using salient structure descriptor and RBF kernel-SVM
Farag et al. Automatic detection and recognition of lung abnormalities in helical CT images using deformable templates
Strisciuglio et al. Multiscale blood vessel delineation using B-COSFIRE filters
CN111784701B Ultrasound image segmentation method and system combining boundary feature enhancement and multi-scale information
Pham et al. A comparison of texture models for automatic liver segmentation
Li et al. Sublingual vein extraction algorithm based on hyperspectral tongue imaging technology
Ratheesh et al. Advanced algorithm for polyp detection using depth segmentation in colon endoscopy
Azam et al. Segmentation of breast microcalcification using hybrid method of Canny algorithm with Otsu thresholding and 2D Wavelet transform
CN112488996A Weakly supervised automatic annotation method and system for non-homogeneous three-dimensional esophageal cancer spectral CT
Saha et al. A review on various image segmentation techniques for brain tumor detection
Gong et al. An automatic pulmonary nodules detection method using 3d adaptive template matching
CN111311586A Multi-index dynamic integration algorithm and system based on nonlinear health analysis system data
Selvy et al. A proficient clustering technique to detect CSF level in MRI brain images using PSO algorithm
Abid Fourati et al. Trabecular bone image segmentation using wavelet and marker‐controlled watershed transformation
JP2006175036A (ja) Rib shape estimation device, rib shape estimation method, and program therefor
JP2005160916A (ja) Calcified shadow determination method, calcified shadow determination device, and program
KR102393390B1 (ko) Target data prediction method using correlation information based on different medical images
CN116777893B Segmentation and recognition method for nodules based on transverse and longitudinal cross-section features of breast ultrasound

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18774821

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019509166

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18774821

Country of ref document: EP

Kind code of ref document: A1