WO2023204457A1 - Method for providing information on an M-mode ultrasound image, and device for providing information on an M-mode ultrasound image using same

Method for providing information on an M-mode ultrasound image, and device for providing information on an M-mode ultrasound image using same

Info

Publication number
WO2023204457A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode ultrasound
ultrasound image
regions
determining
providing information
Prior art date
Application number
PCT/KR2023/003736
Other languages
English (en)
Korean (ko)
Inventor
정성희
심학준
정다운
최안네스
Original Assignee
주식회사 온택트헬스
Priority date
Filing date
Publication date
Priority claimed from KR1020220050236A external-priority patent/KR20230150623A/ko
Application filed by 주식회사 온택트헬스 filed Critical 주식회사 온택트헬스
Publication of WO2023204457A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation

Definitions

  • the present invention relates to a method for providing information on an M-mode ultrasound image and a device for providing information on an M-mode ultrasound image using the same.
  • Echocardiography is performed by projecting ultrasound waves onto the three-dimensional heart in multiple planes to obtain images of the heart and measure hemodynamic variables.
  • the medical staff places the ultrasound probe at a location where ultrasound images are easy to obtain, such as between the ribs, to acquire multiplanar images through the anatomical structures around the heart, and finds the appropriate tomographic plane through rotation and tilt to record the images.
  • the M-mode ultrasound image is an ultrasound image that sets a line of interest on a part of the object to be observed and displays the internal structure of the tissue included in the line over time.
  • M-mode ultrasound imaging of the heart can be mainly used to measure the thickness of tissues including the left or right ventricle of the heart.
  • Representative measurements include the LVIDd (left ventricle internal dimension in diastole) and the LVIDs (left ventricle internal dimension in systole).
  • M-mode ultrasound images of objects that move periodically, such as the heart, have periodicity, and it is possible to derive fine measurements with clinical significance, such as tissue thickness or blood vessel diameter.
  • measurement values may have large differences depending on the skill level of the medical staff, so there is a continuous need for the development of a new information provision system that can derive highly accurate measurement values from M-mode images.
  • the inventors of the present invention attempted to develop an information provision system based on an artificial neural network learned to segment cardiac tomography regions for M-mode ultrasound images.
  • the inventors of the present invention recognized that, by applying an artificial neural network, high-accuracy discrimination of tomographic regions that are difficult to distinguish with the naked eye (e.g., the right ventricular anterior wall, the left atrial posterior wall, etc.) was possible regardless of the type of M-mode image (e.g., LA-Ao or LV).
  • the inventors of the present invention developed an information provision system based on an artificial neural network.
  • the inventors of the present invention designed the system to automatically determine measurements such as the aorta diameter or the left atrium diameter (LA diameter) based on the regions segmented by the artificial neural network.
  • the inventors of the present invention designed the measurement value to be determined in different ways depending on whether ECG (electrocardiogram) data is available for determining the measurement value.
  • the information provision system was built to automatically determine each measurement value by determining end-diastole and end-systole within the segmented image when ECG data exists.
  • the information provision system was built to automatically determine each measurement value using the three methods below when ECG data does not exist.
  • the inventors of the present invention were able to expect that it would be possible to secure M-mode based measurements even in secondary and tertiary hospitals where it is difficult to secure ECG data.
  • the inventors of the present invention were able to expect that it would be possible to provide highly reliable analysis results for M-mode ultrasound images regardless of the skill level of the medical staff.
  • the problem to be solved by the present invention is to provide a method of providing information that segments a plurality of cardiac tomography regions from a received M-mode ultrasound image using an artificial neural network-based segmentation model and determines measurement values from the M-mode ultrasound image, and a device using the same.
  • the method is a method of providing information on an M-mode ultrasound image implemented by a processor, and includes the steps of: receiving an M-mode ultrasound image of an object; segmenting each of a plurality of cardiac tomography regions in the M-mode ultrasound image using a segmentation model trained to take the M-mode ultrasound image as input and divide it into a plurality of cardiac tomographic regions; and determining measurements for the plurality of segmented cardiac tomography regions.
  • the method may further include, after the segmenting step, receiving an electrocardiogram (ECG) for the subject, and determining end-diastole and end-systole based on the ECG.
  • the step of determining the measurement value may include determining the measurement value based on end-diastole and end-systole.
  • the method may further include determining periodicity for a plurality of cardiac tomographic regions after the segmenting step.
  • the step of determining the measurement value may include determining the measurement value based on periodicity.
  • the step of determining periodicity may include calculating the degree of auto-correlation for the plurality of cardiac tomography regions, determining a peak based on the degree of auto-correlation, and determining the periodicity based on the peak.
  • the step of determining periodicity may include calculating the degree of cross-correlation for adjacent regions of the plurality of cardiac tomographic regions, determining a peak based on the degree of cross-correlation, and determining the periodicity based on the peak.
  • the method may further include, after the segmentation step, determining a plurality of local maxima and a plurality of local minima for the plurality of cardiac tomographic regions.
  • the step of determining the measured value may include determining the measured value based on the local maxima and local minima.
  • the method may include, after the segmentation step, calculating a gradient for a plurality of cardiac tomographic regions and determining a measurement value based on the gradient.
  • the method may further include the step of selectively receiving the ECG for the subject after the segmenting step.
  • when the ECG is received, the method may further include, after the receiving step, determining end-diastole and end-systole based on the ECG. Furthermore, determining the measurement may include determining the measurement based on end-diastole and end-systole.
  • when the ECG is not received, the method may further include, after the segmenting step, determining the periodicity for the plurality of cardiac tomographic regions, and determining the measurements may include determining a measurement based on the periodicity.
  • when the ECG is not received, the method may further include, after the segmenting step, determining a plurality of local maxima and a plurality of local minima for the plurality of cardiac tomographic regions, and the step of determining the measurement value may include determining the measurement value based on the plurality of local maxima and the plurality of local minima.
  • when the ECG is not received, the method may further include, after the segmenting step, calculating gradients for the plurality of cardiac tomographic regions, and determining the measurements may include determining a measurement value based on the gradient.
  • the method may further include determining an entropy value, and verifying the measured value based on the entropy value.
  • the plurality of cardiac tomography regions are at least two regions selected from among the right ventricle anterior wall, the right ventricle (RV), the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium (LA), and the posterior wall of the left atrium, and the measured value may be the aorta diameter or the left atrium diameter (LA diameter).
  • alternatively, the plurality of cardiac tomography regions are at least two regions selected from among the RV anterior wall, the right ventricle, the interventricular septum (IVS), the left ventricle (LV), and the LV posterior wall, and the measured value may be at least one of the interventricular septal thickness, the left ventricular internal diameter (LVID), and the left ventricular posterior wall thickness.
  • the interventricular septal thickness includes the interventricular septal thickness in diastole and in systole, the left ventricular internal diameter includes the LVID in diastole and in systole, and the left ventricular posterior wall thickness may include the left ventricular posterior wall thickness in diastole and in systole.
  • the method may further include a post-processing step of removing segmented regions other than the M-mode region or filling holes in the segmented regions to obtain a plurality of post-processed cardiac tomographic regions.
  • the step of determining the measurement value may include determining the measurement value based on a plurality of post-processed cardiac tomographic regions.
  • the device includes a communication unit configured to receive an M-mode ultrasound image of an object, and a processor functionally connected to the communication unit.
  • the processor is configured to segment each of a plurality of cardiac tomography regions in the M-mode ultrasound image using a segmentation model trained to take the M-mode ultrasound image as input and divide it into a plurality of cardiac tomography regions, and to determine measurements for the plurality of segmented cardiac tomography regions.
  • the communication unit may be further configured to receive an electrocardiogram (ECG) for an entity.
  • the processor may be further configured to determine end-diastole and end-systole based on the ECG, and determine a measurement value based on end-diastole and end-systole.
  • the processor may be further configured to determine periodicity for a plurality of cardiac tomography regions and determine measurements based on the periodicity.
  • the processor may be further configured to calculate the degree of auto-correlation for the plurality of cardiac tomographic regions, determine a peak based on the degree of auto-correlation, and determine the periodicity based on the peak.
  • the processor may be further configured to calculate the degree of cross-correlation for adjacent regions of the plurality of cardiac tomographic regions, determine a peak based on the degree of cross-correlation, and determine the periodicity based on the peak.
  • the processor may be further configured to determine a plurality of local maxima and a plurality of local minima for the plurality of cardiac tomography regions, and to determine measurements based on the plurality of local maxima and the plurality of local minima.
  • the processor may be further configured to calculate gradients for a plurality of cardiac tomography regions and determine measurements based on the gradients.
  • the communication unit may be further configured to selectively receive an ECG for an entity.
  • when the ECG is received, the processor may be further configured to determine end-diastole and end-systole based on the ECG, and to determine the measurement based on the end-diastole and end-systole.
  • when an ECG is not received, the processor may be further configured to determine periodicity for the plurality of cardiac tomography regions and determine measurements based on the periodicity.
  • when an ECG is not received, the processor may be further configured to determine a plurality of local maxima and a plurality of local minima for the plurality of cardiac tomography regions, and to determine a measurement based on the plurality of local maxima and the plurality of local minima.
  • when an ECG is not received, the processor may be further configured to calculate gradients for the plurality of cardiac tomography regions and determine measurements based on the gradients.
  • the processor may be further configured to determine an entropy value and verify the measurement based on the entropy value.
  • the processor may be further configured to remove a segmented region other than the M-mode region or fill a hole in the segmented region to obtain a plurality of post-processed cardiac tomographic regions, and to determine the measurement based on the plurality of post-processed cardiac tomographic regions.
  • the present invention can provide highly reliable echocardiographic diagnostic results by providing a system for providing information on M-mode ultrasound images based on an artificial neural network configured to segment cardiac tomography regions in M-mode ultrasound images.
  • the present invention can provide echocardiographic diagnostic results more quickly and accurately by providing an information provision system configured to automatically determine measurements based on the segmented cardiac tomography regions.
  • the present invention can provide highly reliable analysis results for M-mode ultrasound images regardless of the skill level of the medical staff, and can contribute to establishing more accurate decision-making and treatment plans in the image analysis stage.
  • the present invention provides an information provision system with different methods for determining measurement values depending on whether ECG is present, so that it is possible to secure M-mode based measurements even in secondary and tertiary hospitals where it is difficult to secure ECG data.
  • Figure 1 illustrates a system for providing information on M-mode ultrasound images using a device for providing information on M-mode ultrasound images according to an embodiment of the present invention.
  • Figure 2a is a block diagram showing the configuration of a medical staff device according to an embodiment of the present invention.
  • Figure 2b is a block diagram showing the configuration of an information providing server according to an embodiment of the present invention.
  • Figure 3 illustrates the procedure of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • Figure 4 exemplarily illustrates the procedure of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • Figures 5a and 5b exemplarily illustrate the procedure of a method for providing information on an M-mode ultrasound image according to another embodiment of the present invention.
  • Figure 6 exemplarily shows the procedure of a method for providing information on an M-mode ultrasound image according to another embodiment of the present invention.
  • Figure 7 exemplarily illustrates a post-correction procedure in a method for providing information on an M-mode ultrasound image according to various embodiments of the present invention.
  • Figures 8a and 8b exemplarily show training data of a segmentation model used in various embodiments of the present invention.
  • Figures 8c and 8d exemplarily show measurement values determined in a method for providing information on an M-mode ultrasound image according to various embodiments of the present invention.
  • Figures 9a to 9f show evaluation results for automatic measurement according to the information provision method of various embodiments.
  • Figure 10 is a schematic diagram of a system for providing information on M-mode ultrasound images using an apparatus for providing information on M-mode ultrasound images according to an embodiment of the present invention.
  • Figure 11 is a block diagram showing the configuration of a medical staff device according to an embodiment of the present invention.
  • Figure 12 is a block diagram showing the configuration of an information providing server according to an embodiment of the present invention.
  • Figure 13 is a flowchart of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • Figures 14 to 19 are exemplary diagrams of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • the term “subject” may refer to any object about which information on an M-mode ultrasound image is desired. Meanwhile, the subject disclosed in this specification may be any mammal other than a human, but is not limited thereto.
  • the M-mode ultrasound image may refer to an ultrasound image that sets a line of interest on the part of the object to be observed and displays the internal structure of the tissue along that line (in particular, the cardiac tomographic structure) over time.
  • the M-mode ultrasound image may be an M-mode ultrasound image of the heart region of an object, but is not limited thereto.
  • the M-mode ultrasound image may be an M-mode image for left atrium-aorta (LA-Ao) analysis or left ventricle (LV) analysis.
  • the term “plurality of cardiac tomography regions” may refer to a plurality of regions of cardiac structures that can be identified in M-mode images.
  • the plurality of cardiac tomographic regions may be at least two regions selected from among the right ventricle anterior wall, the right ventricle (RV), the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium (LA), and the posterior wall of the left atrium.
  • alternatively, the plurality of cardiac tomography regions may be at least two regions selected from among the RV anterior wall, the right ventricle, the interventricular septum (IVS), the left ventricle (LV), and the LV posterior wall.
  • segmentation model may be a model configured to output a cardiac tomography region using an M-mode ultrasound image as input.
  • the segmentation model may be a model configured to segment a plurality of cardiac tomographic regions using an M-mode ultrasound image (particularly, an M-mode region) as input.
  • the segmentation model may be a model configured to stochastically segment and output the anterior wall of the right ventricle, right ventricle, anterior aortic wall, aorta, posterior aortic wall, left atrium, and left atrium posterior wall using the M-mode ultrasound image as input.
  • the segmentation model may be a model configured to probabilistically segment and output the right ventricular anterior wall, right ventricle, interventricular septum, left ventricle, and left ventricular posterior wall using the M-mode ultrasound image as input.
  • the segmentation model may be based on at least one deep neural network (DNN) algorithm selected from DenseNet-121, U-Net, VGGNet, DenseNet, FCN (Fully Convolutional Network) with an encoder-decoder structure, SegNet, DeconvNet, DeepLab V3+, SqueezeNet, AlexNet, ResNet-18, MobileNet-v2, GoogLeNet, ResNet-v2, ResNet-50, RetinaNet, ResNet-101, and Inception-v3. Furthermore, the segmentation model may be an ensemble model based on at least two of the above-described algorithms.
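  • As a concrete illustration of such a model, the following is a minimal encoder-decoder sketch, written here in PyTorch. It is not the patent's disclosed implementation: the layer widths, the class count (the seven LA-Ao regions plus background), and the input size are assumptions for illustration only.

```python
# Minimal encoder-decoder segmentation sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class MModeSegNet(nn.Module):
    def __init__(self, n_classes: int = 8):  # 7 regions + background (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # downsample by 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2),  # upsample back to input size
            nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),              # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale M-mode strip; returns per-pixel logits
        return self.decoder(self.encoder(x))

# The probabilistic output described above is a softmax over the class axis:
logits = MModeSegNet()(torch.randn(1, 1, 256, 512))
probs = torch.softmax(logits, dim=1)  # (1, 8, 256, 512) region probabilities
```

  • In practice, any of the architectures named above (e.g., U-Net or DeepLab V3+) could fill this role; only the grayscale M-mode input and the per-pixel class-probability output matter for the rest of the pipeline.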
  • the term “measurement value” may mean a value such as thickness or diameter that can be measured from a plurality of cardiac tomography regions.
  • the measurement may be the aorta diameter or the left atrium diameter (LA diameter).
  • alternatively, the measured value may be at least one of the interventricular septal thickness, the left ventricular internal diameter (LVID), and the left ventricular posterior wall thickness.
  • the interventricular septal thickness may include the interventricular septal thickness in diastole and in systole, the left ventricular internal diameter may include the LVID in diastole and in systole, and the left ventricular posterior wall thickness may include the left ventricular posterior wall thickness in diastole and in systole.
  • the measurement value may be determined from an electrocardiogram (ECG) and a segmentation result of a plurality of cardiac tomography regions for an M-mode ultrasound image.
  • the measurements may be determined based on end-diastole and end-systole for the cardiac tomography area determined from the ECG.
  • the measurement value can be determined only as a result of segmentation of a plurality of cardiac tomographic regions on an M-mode ultrasound image.
  • the measurement may be determined based on the periodicity of the segmented cardiac tomography region.
  • the periodicity may be determined based on the degree of auto-correlation for the plurality of cardiac tomography regions and the peak of the auto-correlation.
  • the present invention is not limited thereto, and the periodicity may be determined based on the degree of cross-correlation of adjacent regions of a plurality of cardiac tomographic regions.
  • the measurement value may be determined based on a plurality of local maxima and a plurality of local minima of the divided cardiac tomography region.
  • the measurement value may be determined based on the slope value of the divided cardiac tomography region.
  • Hereinafter, a system for providing information on M-mode ultrasound images and a device for providing information on M-mode ultrasound images according to an embodiment of the present invention will be described.
  • FIG. 1 illustrates a system for providing information on M-mode ultrasound images using a device for providing information on M-mode ultrasound images according to an embodiment of the present invention.
  • FIG. 2A exemplarily shows the configuration of a medical staff device that receives information about an M-mode ultrasound image according to an embodiment of the present invention.
  • FIG. 2B exemplarily shows the configuration of a device for providing information on M-mode ultrasound images according to an embodiment of the present invention.
  • the information providing system 1000 may be a system configured to provide information related to an M-mode ultrasound image based on the M-mode ultrasound image of an object.
  • the information providing system 1000 may be composed of a medical staff device 100 that receives information related to an M-mode ultrasound image, an ultrasound image diagnosis device 200 that provides the M-mode ultrasound image, and an information providing server 300 that generates information about the M-mode ultrasound image based on the received M-mode ultrasound image.
  • the medical staff device 100 is an electronic device that provides a user interface for displaying information related to an M-mode ultrasound image, and may include at least one of a smartphone, a tablet PC (personal computer), a laptop, and/or a PC.
  • the medical staff device 100 may receive a prediction result associated with an M-mode ultrasound image of an object from the information provision server 300 and display the received result through a display unit to be described later.
  • the information provision server 300 may include a general-purpose computer, laptop, and/or data server that performs various operations to determine information associated with the M-mode ultrasound image provided from the ultrasound imaging device 200, such as an ultrasound diagnosis device, and further with an ECG provided from an ECG measurement device (not shown). At this time, the information providing server 300 may be a device for accessing a web server that provides web pages or a mobile web server that provides a mobile web site, but is not limited thereto.
  • the information provision server 300 receives an M-mode ultrasound image from the ultrasound imaging device 200 and divides the cardiac tomography region within the received M-mode ultrasound image. At this time, the information provision server 300 may segment the cardiac tomography region from the M-mode ultrasound image using a prediction model.
  • the information provision server 300 may determine measurements such as wall thickness, tube diameter, etc. based on the segmented cardiac tomography area and/or received ECG data.
  • the information provision server 300 may provide the determined measurement value and further segmentation results to the medical staff device 100.
  • the information provided from the information providing server 300 may be provided as a web page through a web browser installed on the medical staff device 100, or may be provided in the form of an application or program. In various embodiments, such data may be provided as part of a platform in a client-server environment.
  • medical staff device 100 may include a memory interface 110, one or more processors 120, and a peripheral interface 130.
  • the various components within medical device 100 may be connected by one or more communication buses or signal lines.
  • the memory interface 110 is connected to the memory 150 and can transmit various data to the processor 120.
  • the memory 150 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, network storage, cloud, and blockchain data.
  • the memory 150 may store an operating system 151, a communication module 152, a graphical user interface module (GUI) 153, a sensor processing module 154, a telephone module 155, and an application module 156.
  • the operating system 151 may include instructions for processing basic system services and instructions for performing hardware tasks.
  • Communication module 152 may communicate with at least one of one or more other devices, computers, and servers.
  • a graphical user interface module (GUI) 153 can handle graphical user interfaces.
  • Sensor processing module 154 may process sensor-related functions (e.g., processing voice input received using one or more microphones 192).
  • Telephone module 155 can handle telephone-related functions.
  • Application module 156 may perform various functions of a user application, such as electronic messaging, web browsing, media processing, navigation, imaging, and other processing functions.
  • the medical staff device 100 may store one or more software applications 156-1 and 156-2 (e.g., information provision applications) associated with one type of service in the memory 150.
  • the memory 150 may store a digital assistant client module 157 (hereinafter referred to as the DA client module), and thereby store various user data 158 and instructions for performing the client-side functions of the digital assistant.
  • the DA client module 157 can obtain the user's voice input, text input, touch input, and/or gesture input through various user interfaces (e.g., the I/O subsystem 140) provided in the medical staff device 100.
  • the DA client module 157 can output data in audiovisual and tactile forms.
  • the DA client module 157 may output data consisting of a combination of at least two or more of voice, sound, notification, text message, menu, graphics, video, animation, and vibration.
  • the DA client module 157 can communicate with a digital assistant server (not shown) using the communication subsystem 180.
  • the DA client module 157 may collect additional information about the surrounding environment of the medical staff device 100 from various sensors, subsystems, and peripheral devices to construct a context associated with user input.
  • the DA client module 157 may provide context information along with user input to the digital assistant server to infer the user's intent.
  • context information that may accompany the user input may include sensor information, for example, lighting, ambient noise, ambient temperature, images of the surrounding environment, video, etc.
  • the contextual information may include the physical state of the medical staff device 100 (e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion pattern, cellular signal strength, etc.).
  • the context information may include information related to the software state of the medical staff device 100 (e.g., processes running on the medical staff device 100, installed programs, past and current network activity, background services, error logs, resource usage, etc.).
  • in various embodiments, instructions may be added to or deleted from the memory 150, and furthermore, the medical staff device 100 may include additional components other than those shown in Figure 2a or exclude some components.
  • the processor 120 can control the overall operation of the medical staff device 100, and can execute various commands to implement an interface that provides information related to the M-mode ultrasound image by running an application or program stored in the memory 150.
  • the processor 120 may correspond to a computing device such as a CPU (Central Processing Unit) or AP (Application Processor). Additionally, the processor 120 may be implemented in the form of an integrated chip (IC) such as a system on chip (SoC) in which various computing devices such as a neural processing unit (NPU) are integrated.
  • the peripheral interface 130 is connected to various sensors, subsystems, and peripheral devices, and can provide data so that the medical staff device 100 can perform various functions.
  • any function performed by the medical staff device 100 may be understood as being performed by the processor 120.
  • the peripheral interface 130 may receive data from the motion sensor 160, the light sensor 161, and the proximity sensor 162, through which the medical staff device 100 can perform orientation, light, and proximity detection functions.
  • the peripheral interface 130 may receive data from other sensors 163 (positioning system (GPS) receiver, temperature sensor, biometric sensor), through which the medical staff device 100 can perform functions related to the other sensors 163.
  • the medical staff device 100 may include a camera subsystem 170 connected to the peripheral interface 130 and an optical sensor 171 connected thereto, through which the medical staff device 100 can perform various shooting functions such as taking pictures and recording video clips.
  • medical staff device 100 may include a communication subsystem 180 coupled with peripheral interface 130 .
  • the communication subsystem 180 is comprised of one or more wired/wireless networks and may include various communication ports, radio frequency transceivers, and optical transceivers.
  • the medical staff device 100 includes an audio subsystem 190 coupled with the peripheral interface 130, which includes one or more speakers 191 and one or more microphones 192; through these, the medical staff device 100 can perform voice-activated functions, such as voice recognition, voice replication, digital recording, and telephony functions.
  • medical staff device 100 may include an I/O subsystem 140 coupled with a peripheral interface 130 .
  • the I/O subsystem 140 may control the touch screen 143 included in the medical staff device 100 through the touch screen controller 141.
  • the touch screen controller 141 can detect the user's contact and movement, or the cessation of contact and movement, using any one of a plurality of touch sensing technologies such as capacitive, resistive, infrared, and surface acoustic wave technologies, and proximity sensor arrays.
  • I/O subsystem 140 may control other input/control devices 144 included in medical staff device 100 through other input controller(s) 142.
  • other input controller(s) 142 may control one or more buttons, rocker switches, thumb-wheels, infrared ports, USB ports, and pointer devices such as a stylus.
  • the information providing server 300 may include a communication interface 310, a memory 320, an I/O interface 330, and a processor 340, and each component may communicate with the others through one or more communication buses or signal lines.
  • the communication interface 310 is connected to the medical staff device 100, the ultrasound imaging device 200, and an ECG measurement device (not shown) through a wired/wireless communication network to exchange data.
  • the communication interface 310 may receive an M-mode ultrasound image from the ultrasound imaging device 200 and an ECG from an ECG measurement device (not shown), and may transmit information associated with the measurement values determined therefrom to the medical staff device 100.
  • the communication interface 310 that enables transmission and reception of such data includes a wired communication port 311 and a wireless circuit 312, where the wired communication port 311 may include one or more wired interfaces, for example, Ethernet, Universal Serial Bus (USB), and FireWire.
  • the wireless circuit 312 can transmit and receive data with an external device through RF signals or optical signals.
  • wireless communications may use at least one of a plurality of communication standards, protocols and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol.
  • the memory 320 can store various data used in the information provision server 300.
  • the memory 320 may store an M-mode ultrasound image or a segmentation model learned to segment a cardiac tomography region within an M-mode ultrasound image.
  • memory 320 may include volatile or non-volatile recording media capable of storing various data, instructions, and information.
  • the memory 320 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, network storage, cloud, and blockchain data.
  • the memory 320 may store the configuration of at least one of an operating system 321, a communication module 322, a user interface module 323, and one or more applications 324.
  • the operating system 321 (e.g., an embedded operating system such as LINUX, UNIX, MAC OS, WINDOWS, or VxWorks) controls and manages general system operations (e.g., memory management, storage device control, power management, etc.).
  • the communication module 322 may support communication with other devices through the communication interface 310.
  • the communication module 322 may include various software components for processing data received by the wired communication port 311 or the wireless circuit 312 of the communication interface 310.
  • the user interface module 323 may receive a user's request or input from a keyboard, touch screen, microphone, etc. through the I/O interface 330 and provide a user interface on the display.
  • the applications 324 may include programs or modules configured to be executed by the one or more processors 340.
  • an application for providing information related to M-mode ultrasound images may be implemented on a server farm.
  • the I/O interface 330 may connect at least one of input/output devices (not shown) of the information providing server 300, such as a display, keyboard, touch screen, and microphone, to the user interface module 323.
  • the I/O interface 330 may receive user input (eg, voice input, keyboard input, touch input, etc.) together with the user interface module 323 and process commands according to the received input.
  • the processor 340 is connected to the communication interface 310, the memory 320, and the I/O interface 330 to control the overall operation of the information providing server 300, and can execute various commands to provide information by running an application or program stored in the memory 320.
  • the processor 340 may correspond to a computing device such as a CPU (Central Processing Unit) or AP (Application Processor). Additionally, the processor 340 may be implemented in the form of an integrated chip (IC) such as a system on chip (SoC) in which various computing devices are integrated. Alternatively, the processor 340 may include a module for calculating an artificial neural network model, such as a Neural Processing Unit (NPU).
  • processor 340 may be configured to segment and provide cardiac tomography regions within an M-mode ultrasound image using prediction models.
  • processor 340 may be configured to provide information about measurements obtainable from M-mode ultrasound images.
  • Figure 3 illustrates the procedure of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • Figure 4 exemplarily illustrates the procedure of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
  • Figures 5A and 5B exemplarily illustrate procedures of a method for providing information on an M-mode ultrasound image according to another embodiment of the present invention.
  • Figure 6 exemplarily illustrates the procedure of a method for providing information on an M-mode ultrasound image according to another embodiment of the present invention.
  • the information provision procedure is as follows. First, an M-mode ultrasound image of the object is received (S310). Next, the plurality of cardiac tomographic regions are divided by the segmentation model (S320). Next, measurements are determined based on the segmented cardiac tomography region (S330).
  • in the receiving step (S310), an M-mode ultrasound image of the target area, that is, the heart area, may be received.
  • an M-mode ultrasound image in which the M-mode region is cropped may be received. That is, before the step (S310) in which the M-mode ultrasound image is received, cropping of the essential area for segmentation of the M-mode ultrasound image may be performed.
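  • As a trivial sketch of this cropping step (assuming NumPy; where the bounding box comes from, e.g., DICOM metadata or a detector, is not specified in the text, so it is passed in as a hypothetical parameter):

```python
# Hypothetical M-mode crop: keep only the strip the segmentation model needs.
import numpy as np

def crop_m_mode(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Return the M-mode strip given box = (top, bottom, left, right) in pixels."""
    top, bottom, left, right = box
    return frame[top:bottom, left:right]

m_mode = crop_m_mode(np.zeros((768, 1024)), (300, 700, 100, 900))
```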
  • a step (S320) in which the cardiac tomographic region is divided is performed.
  • in step S320, at least two regions selected from among the right ventricle anterior wall, the right ventricle (RV), the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium (LA), and the posterior wall of the left atrium are segmented in the M-mode ultrasound image by the segmentation model.
  • alternatively, at least two regions selected from among the RV anterior wall, the right ventricle, the interventricular septum (IVS), the left ventricle (LV), and the LV posterior wall are segmented by the segmentation model.
  • post-processing may be further performed after the step (S320) of dividing the cardiac tomographic region.
  • post-processing may be further performed, such as removing a divided area other than the M-mode area or filling a hole in the divided area.
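  • A minimal sketch of this post-processing, assuming SciPy's ndimage; the minimum-area threshold for discarding spurious fragments is an illustrative assumption:

```python
# Fill holes inside a segmented region and drop small components outside it.
import numpy as np
from scipy import ndimage

def postprocess_region(mask: np.ndarray, min_area: int = 500) -> np.ndarray:
    """mask: boolean map of one cardiac tomographic region."""
    filled = ndimage.binary_fill_holes(mask)   # fill interior holes
    labels, n = ndimage.label(filled)          # connected components
    keep = np.zeros_like(filled)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_area:        # discard tiny fragments
            keep |= component
    return keep
```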
  • next, an electrocardiogram (ECG) for the object is received, and end-diastole and end-systole can be determined based on the ECG.
  • in step S330, where the measurement value is determined, the measurement value is determined based on end-diastole and end-systole.
  • in step S320 of dividing the cardiac tomography region, when the M-mode ultrasound image 412 is input to the segmentation model 420, a plurality of cardiac tomography regions 422 are output.
  • next, an electrocardiogram (ECG) for the subject is received, and end-diastole (EDd) and end-systole (EDs) are determined for the plurality of cardiac tomography regions 422 based on the ECG.
  • then, measurements (e.g., aorta diameter, left atrial diameter (LA diameter), interventricular septal thickness, left ventricular internal diameter (LVID), or left ventricular posterior wall thickness) are automatically determined based on end-diastole and end-systole.
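  • A sketch of this ECG-guided measurement, assuming NumPy/SciPy. Reading end-diastole at the R wave and end-systole at the minimum cavity dimension within the beat is a common convention used here for illustration, not necessarily the patent's exact rule:

```python
# Measure LVIDd/LVIDs from a segmented LV cavity mask using ECG R-peaks.
import numpy as np
from scipy.signal import find_peaks

def thickness_over_time(mask: np.ndarray) -> np.ndarray:
    # The M-mode x-axis is time, so the per-column pixel count of a region's
    # mask is that region's thickness (or dimension) signal.
    return mask.sum(axis=0).astype(float)

def measure_with_ecg(lv_mask: np.ndarray, ecg: np.ndarray):
    r_peaks, _ = find_peaks(ecg, distance=40)  # end-diastole ~ R wave (assumed)
    lvid = thickness_over_time(lv_mask)        # LV internal dimension signal
    ed = int(r_peaks[0])
    nxt = int(r_peaks[1]) if len(r_peaks) > 1 else len(lvid)
    es = ed + int(np.argmin(lvid[ed:nxt]))     # narrowest cavity in the beat
    return lvid[ed], lvid[es]                  # LVIDd, LVIDs (in pixels)
```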
  • the determination of the measurement value is not limited to this and may be determined only by the segmented cardiac tomography area.
  • the periodicity of the plurality of cardiac tomography regions is determined.
  • the periodicity may be determined by calculating the degree of auto-correlation for the plurality of cardiac tomographic regions, determining a peak based on the degree of auto-correlation, and determining the periodicity based on the peak.
  • alternatively, the periodicity may be determined by calculating the degree of cross-correlation for adjacent regions of the plurality of cardiac tomographic regions, determining a peak based on the degree of cross-correlation, and determining the periodicity based on the peak.
  • then, in step S330, the measurement value is determined based on the determined periodicity of the cardiac tomographic region.
  • in step S320 of dividing the cardiac tomography region, when the M-mode ultrasound image 412 is input to the segmentation model 420, a plurality of cardiac tomography regions 422 are output.
  • the autocorrelation of each of the plurality of cardiac tomographic regions 422 is calculated and then the peak is detected.
  • based on the detected peak, the periodicity 436 can be determined. That is, the measurement value 442 can be determined based on the periodicity 436 without determining end-diastole and end-systole based on an ECG.
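  • A minimal sketch of this autocorrelation-based period estimation, assuming NumPy/SciPy; the same peak-picking applies to the cross-correlation variant computed between adjacent regions:

```python
# Estimate the cardiac period from the autocorrelation of a thickness signal.
import numpy as np
from scipy.signal import find_peaks

def estimate_period(signal: np.ndarray) -> int:
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    peaks, _ = find_peaks(ac)        # peaks appear at multiples of the period
    return int(peaks[0]) if len(peaks) else len(x)     # first peak ~ one cycle
```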
  • in various embodiments, the measurements may be determined by determining a plurality of local maxima and a plurality of local minima 437 for a segmented region, or by calculating a slope 438 for a segmented region.
  • the determination of the measured value may involve at least one of determining the periodicity, determining the maximum and minimum values, and calculating the slope.
  • for example, specific points with a slope close to 0 are determined, and the determined points may then be clustered.
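  • A sketch of this extrema/gradient path, assuming NumPy/SciPy; the slope tolerance and the gap-based clustering rule are illustrative assumptions, not disclosed values:

```python
# Local extrema plus clustering of near-flat points (slope close to 0).
import numpy as np
from scipy.signal import find_peaks

def flat_point_clusters(signal: np.ndarray, tol: float = 0.1, gap: int = 5):
    maxima, _ = find_peaks(signal)     # local maxima of the thickness signal
    minima, _ = find_peaks(-signal)    # local minima
    flat = np.where(np.abs(np.gradient(signal)) < tol)[0]
    clusters, current = [], [int(flat[0])] if len(flat) else []
    for t in flat[1:]:
        if t - current[-1] <= gap:     # same flat run: extend the cluster
            current.append(int(t))
        else:
            clusters.append(current)
            current = [int(t)]
    if current:
        clusters.append(current)
    return maxima, minima, clusters
```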
  • from the clustered points, measured values (e.g., the aorta diameter, the left atrial diameter (LA diameter), or the interventricular septal thickness) can then be determined.
  • the present invention provides an information provision system with a different method of determining measurement values depending on whether ECG is present, so that it is possible to secure M-mode based measurements even in secondary and tertiary hospitals where it is difficult to secure ECG data.
  • in various embodiments, an entropy value is determined, and the measurement value can be verified based on the entropy value; depending on the verification result, the measurement may be excluded.
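  • A minimal sketch of such an entropy check, assuming NumPy and a softmax probability map from the segmentation model; the exclusion threshold is an illustrative assumption:

```python
# Verify a measurement by the mean per-pixel entropy of the class probabilities.
import numpy as np

def mean_entropy(probs: np.ndarray) -> float:
    # probs: (n_classes, H, W) softmax output of the segmentation model
    eps = 1e-12
    pixel_entropy = -(probs * np.log(probs + eps)).sum(axis=0)
    return float(pixel_entropy.mean())

def verify(measurement: float, probs: np.ndarray, threshold: float = 0.5):
    # High entropy = uncertain segmentation: exclude the measurement.
    return measurement if mean_entropy(probs) <= threshold else None
```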
  • according to the information provision method of various embodiments of the present invention, medical staff can easily determine measurements from acquired M-mode ultrasound images, quickly proceed with the ultrasound analysis step, and establish more accurate decisions and treatment plans.
  • the step of selectively receiving ECG data (S630) is performed; when the ECG is received, end-diastole and end-systole are determined based on the ECG (S6302), and a measurement value is determined based on these (S640). Meanwhile, when the ECG is not received, a step (S6304) in which the periodicity of the plurality of cardiac tomography regions is determined is performed, and a measurement value is determined based on the periodicity (S640).
  • Figures 8a and 8b exemplarily show training data of a segmentation model used in various embodiments of the present invention.
  • Figures 8c and 8d exemplarily show measurement values determined in a method for providing information on an M-mode ultrasound image according to various embodiments of the present invention.
  • the segmentation model can use M-mode ultrasound images for learning, each of which is labeled with seven regions: the anterior wall of the right ventricle, the right ventricle, the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium, and the posterior wall of the left atrium.
  • the segmentation model may be a model learned to segment the region for left atrium-aorta (LA-Ao) analysis.
  • alternatively, the segmentation model can be trained using M-mode ultrasound images in which five regions, the right ventricle anterior wall, right ventricle, interventricular septum, left ventricle, and left ventricle posterior wall, are respectively labeled.
  • the segmentation model may be a model learned to segment the region for left ventricle (LV) analysis.
  • the M-mode images for training are not limited to these, and the labeled regions may differ depending on the regions to be segmented and the analysis goal.
  • the measured values may be the left atrial thickness corresponding to the systolic phase and the aortic thickness corresponding to end-diastole.
  • alternatively, the measurements may be the interventricular septum (IVS) thickness, the LV internal diameter (LVID), and the LV posterior wall (LVPW) thickness, each measured at end-systole and end-diastole.
  • the measurement values are not limited to those described above and can be set in more diverse ways depending on the analysis goal.
  • Figure 9a shows, for the left atrium-aorta analysis, the correlation between measurements manually determined by an expert and measurements automatically determined according to the information provision method of various embodiments of the present invention.
  • the correlation coefficient for the left atrial thickness measurement is 0.91, and the correlation coefficient for the aortic thickness measurement is 0.97, showing a high correlation with the expert's measurements.
  • in the left ventricle analysis, the correlation coefficients for the end-diastolic interventricular septal thickness, left ventricular internal diameter, and left ventricular posterior wall thickness are 0.93, 0.95, and 0.81, respectively; for the end-systolic interventricular septal thickness the correlation coefficient is 0.80, for the end-systolic left ventricular internal diameter 0.97, and for the end-systolic left ventricular posterior wall thickness 0.85.
  • these results mean that the measurements automatically determined according to the information provision method of various embodiments of the present invention have a high correlation with expert measurements, and further, that their clinical reliability is high.
  • the present invention can provide highly reliable analysis results for M-mode ultrasound images regardless of the skill level of the medical staff, and can contribute to establishing more accurate decision-making and treatment plans in the image analysis stage.
  • the present invention provides an information provision system with different methods for determining measurement values depending on whether ECG is present, so that it is possible to secure M-mode based measurements even in secondary and tertiary hospitals where it is difficult to secure ECG data.
  • Figure 10 is a schematic diagram of a system for providing information on M-mode ultrasound images using an apparatus for providing information on M-mode ultrasound images according to an embodiment of the present invention.
  • the system 1000 for providing information on M-mode ultrasound images includes a medical staff device 100, an ultrasound image diagnosis device 200, and an information providing server 300 for M-mode ultrasound images (hereinafter referred to as the information providing server 300).
  • the system 1000 for providing information about the M-mode ultrasound image may provide information related to the M-mode ultrasound image based on the M-mode ultrasound image of the object.
  • the information providing system 1000 for M-mode ultrasound images may be composed of a medical staff device 100 that receives information related to M-mode ultrasound images, an ultrasound image diagnosis device 200 that provides M-mode ultrasound images, and an information providing server 300 that generates information about the M-mode ultrasound image based on the received M-mode ultrasound image.
  • the medical staff device 100 may be a device that provides a user interface for displaying information related to M-mode ultrasound images.
  • the medical staff device 100 may receive a prediction result associated with the M-mode ultrasound image of the object from the M-mode ultrasound image information providing server 300 and display the received result.
  • the medical staff device 100 is an electronic device capable of capturing and outputting images, and may include a smartphone, tablet PC, PC, laptop, etc.
  • Figure 11 is a block diagram showing the configuration of a medical staff device according to an embodiment of the present invention.
  • the medical staff device 100 may include a memory interface 110, one or more processors 120, and a peripheral interface 130. Various components within the medical staff device 100 may be connected by one or more communication buses or signal lines.
  • the memory interface 110 is connected to the memory 150 and can transmit various data to the processor 120.
  • the memory 150 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, network storage, cloud, and a blockchain database.
  • the memory 150 may store an operating system 151, a communication module 152, a graphical user interface module (GUI) 153, a sensor processing module 154, a telephone module 155, and an application module 156.
  • the operating system 151 may include instructions for processing basic system services and instructions for performing hardware tasks.
  • the communication module 152 may communicate with at least one of one or more other devices, computers, and servers.
  • the graphical user interface module (GUI) 153 can process a graphical user interface.
  • Sensor processing module 154 may process sensor-related functions (eg, processing voice input received using one or more microphones 192).
  • the phone module 155 can process phone-related functions.
  • Application module 156 may perform various functions of a user application, such as electronic messaging, web browsing, media processing, navigation, imaging, and other processing functions.
  • the medical staff device 100 may store one or more software applications 156-1 and 156-2 associated with one type of service in the memory 150. At this time, the application 156-1 may provide information about the M-mode ultrasound image to the medical staff device 100.
  • the memory 150 may store a digital assistant client module 157 (hereinafter referred to as the DA client module), and thereby store various user data 158 (e.g., user-customized vocabulary data, preference data, and other data such as the user's electronic address book, to-do list, and shopping list) and instructions for performing the client-side functions of the digital assistant.
  • the DA client module 157 can obtain the user's voice input, text input, touch input, and/or gesture input through various user interfaces (e.g., the I/O subsystem 140) provided in the medical staff device 100.
  • the DA client module 157 can output data in audiovisual and tactile forms.
  • the DA client module 157 may output data consisting of a combination of at least two or more of voice, sound, notification, text message, menu, graphics, video, animation, and vibration.
  • the DA client module 157 can communicate with a digital assistant server (not shown) using the communication subsystem 180.
  • the DA client module 157 may collect additional information about the surrounding environment of the medical staff device 100 from various sensors, subsystems, and peripheral devices to construct a context associated with user input.
  • the DA client module 157 may infer the user's intention by providing context information along with user input to the digital assistant server.
  • context information that may accompany the user input may include sensor information, for example, lighting, ambient noise, ambient temperature, images of the surrounding environment, video, etc.
  • the situation information may include the physical state of the medical staff device 100 (e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion pattern, cellular signal strength, etc.).
  • the context information may include information related to the software status of the medical staff device 100 (e.g., processes running on the medical staff device 100, installed programs, past and present network activity, background services, error logs, resource usage, etc.).
  • in various embodiments, instructions may be added to or deleted from the memory 150, and furthermore, the medical staff device 100 may include additional components other than those shown in Figure 11 or exclude some components.
  • the processor 120 can control the overall operation of the medical staff device 100, and can execute various commands to implement an interface that provides information related to the M-mode ultrasound image by running an application or program stored in the memory 150.
  • the processor 120 may correspond to a computing device such as a Central Processing Unit (CPU) or an Application Processor (AP). Additionally, the processor 120 may be implemented in the form of an integrated chip (IC) such as a system on chip (SoC) in which various computing devices such as a neural processing unit (NPU) are integrated.
  • the peripheral interface 130 is connected to various sensors, subsystems, and peripheral devices, and can provide data so that the medical staff device 100 can perform various functions.
  • any function performed by the medical staff device 100 may be understood as being performed by the processor 120.
• the peripheral interface 130 may receive data from the motion sensor 160, the light sensor 161, and the proximity sensor 162, through which the medical staff device 100 can perform orientation, light, and proximity detection functions.
• the peripheral interface 130 may receive data from other sensors 163 (e.g., a positioning system (GPS) receiver, a temperature sensor, or a biometric sensor), through which the medical staff device 100 can perform functions related to those sensors.
• the medical staff device 100 may include a camera subsystem 170 connected to the peripheral interface 130 and an optical sensor 171 connected thereto, through which the medical staff device 100 can perform various capture functions such as photo taking and video clip recording.
  • medical staff device 100 may include a communication subsystem 180 coupled to a peripheral interface 130 .
• the communication subsystem 180 connects to one or more wired/wireless networks and may include various communication ports, radio frequency transceivers, and optical transceivers.
• the medical staff device 100 may include an audio subsystem 190 coupled to the peripheral interface 130; by including one or more speakers 191 and one or more microphones 192, it enables the medical staff device 100 to perform voice-activated functions such as voice recognition, voice replication, digital recording, and telephony.
  • medical staff device 100 may include an I/O subsystem 140 coupled with a peripheral interface 130.
  • the I/O subsystem 140 may control the touch screen 143 included in the medical staff device 100 through the touch screen controller 141.
• the touch screen controller 141 can detect the user's touch, the movement of the touch, and the cessation of the touch or movement, using any one of a plurality of touch sensing technologies such as capacitive, resistive, infrared, and surface acoustic wave technologies, or proximity sensor arrays.
  • the I/O subsystem 140 may control other input/control devices 144 included in the medical staff device 100 through other input controller(s) 142.
  • other input controller(s) 142 may control one or more buttons, rocker switches, thumb-wheels, infrared ports, USB ports, and pointer devices such as a stylus.
  • Figure 12 is a block diagram showing the configuration of an information providing server according to an embodiment of the present invention.
• the information providing server 300 for M-mode ultrasound images includes a communication interface 310, a memory 320, an I/O interface 330, and a processor 340, and each component may communicate with the others through one or more communication buses or signal lines.
  • the communication interface 310 can be connected to the medical staff device 100 and the ultrasound imaging device 200 through a wired/wireless communication network to exchange data.
  • the communication interface 310 may receive an M-mode ultrasound image from the ultrasound imaging device 200.
  • the communication interface 310 may transmit information related to the measurement value determined from the M-mode ultrasound image to the medical staff device 100.
• the communication interface 310 that enables transmission and reception of such data includes a wired communication port 311 and a wireless circuit 312, where the wired communication port 311 may include one or more wired interfaces, for example, Ethernet, Universal Serial Bus (USB), FireWire, etc.
  • the wireless circuit 312 can transmit and receive data with an external device through RF signals or optical signals.
  • wireless communications may use at least one of a plurality of communication standards, protocols and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol.
  • the memory 320 may store various data used by the server 300 for providing information about M-mode ultrasound images. For example, the memory 320 may store M-mode ultrasound images. At this time, the memory 320 may store the first model learned to segment the cardiac tomography region within the M-mode ultrasound image. Additionally, the memory 320 may store a second model learned to predict diastole and systole in an M-mode ultrasound image.
  • memory 320 may include volatile or non-volatile recording media capable of storing various data, instructions, and information.
• the memory 320 may include at least one type of storage medium among a flash memory type, hard disk type, multimedia card micro type, card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, network storage, cloud, or blockchain database.
  • memory 320 may store configuration of at least one of operating system 321, communication module 322, user interface module 323, and one or more applications 324.
• the operating system 321 (e.g., an embedded operating system such as LINUX, UNIX, MAC OS, WINDOWS, or VxWorks) may include various software components and drivers for controlling and managing common system tasks (e.g., memory management, storage device control, power management, etc.) and may support communication between various hardware, firmware, and software components.
• the communication module 322 may support communication with other devices through the communication interface 310.
• the communication module 322 may include various software components for processing data received through the wired communication port 311 or the wireless circuit 312 of the communication interface 310.
  • the user interface module 323 may receive a user's request or input from a keyboard, touch screen, microphone, etc. through the I/O interface 330 and provide a user interface on the display.
  • Applications 324 may include programs or modules configured to be executed by one or more processors 340 .
  • an application for providing information on M-mode ultrasound images may be implemented on a server farm.
• the I/O interface 330 can connect at least one of the input/output devices (not shown) of the information providing server 300 for M-mode ultrasound images, such as a display, keyboard, touch screen, or microphone, with the user interface module 323.
  • the I/O interface 330 may receive user input (eg, voice input, keyboard input, touch input, etc.) together with the user interface module 323 and process commands according to the received input.
• the processor 340 is connected to the communication interface 310, the memory 320, and the I/O interface 330 to control the overall operation of the information providing server 300 for M-mode ultrasound images, and can execute various commands for providing information on M-mode ultrasound images through applications or programs stored in the memory 320.
  • the processor 340 may correspond to a computing device such as a Central Processing Unit (CPU) or an Application Processor (AP). Additionally, the processor 340 may be implemented in the form of an integrated chip (IC) such as a system on chip (SoC) in which various computing devices are integrated. Alternatively, the processor 340 may include a module for calculating an artificial neural network model, such as a Neural Processing Unit (NPU).
• the processor 340 may receive an M-mode ultrasound image of an object from the ultrasound imaging device 200. At this time, the object may be the patient's heart.
  • the processor 340 may segment each of the plurality of cardiac tomographic regions within the M-mode ultrasound image. At this time, the processor 340 may use the first model learned to segment a plurality of cardiac tomography regions using the M-mode ultrasound image as input.
• the plurality of cardiac tomographic regions may be at least two regions selected from the anterior wall of the right ventricle (RV anterior wall), the right ventricle (RV), the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium (LA), and the posterior wall of the left atrium.
• alternatively, the plurality of cardiac tomographic regions may be at least two regions selected from the RV anterior wall, the right ventricle, the interventricular septum (IVS), the left ventricle (LV), and the LV posterior wall.
  • the processor 340 can predict diastole and systole in an M-mode ultrasound image.
  • the processor 340 may use the second model learned to predict the diastole and systole of the M-mode ultrasound image by using the M-mode ultrasound image as input.
  • the second model may be a model that predicts the end-diastole and end-systole of the M-mode ultrasound image.
  • the second model may be a model that predicts each period of diastole and systole of an M-mode ultrasound image.
• the processor 340 may provide measurements of the plurality of segmented cardiac tomographic regions to the medical staff device 100. At this time, the processor 340 may determine the measurements for the plurality of segmented cardiac tomographic regions based on the diastolic and systolic phases predicted by the second model. For example, the processor 340 may determine measurements for the plurality of cardiac tomographic regions based on the end-diastole and end-systole provided by the second model. As another example, the processor 340 may determine measurements for the plurality of cardiac tomographic regions at the transition points between the diastolic and systolic periods, based on the respective periods provided by the second model.
  • the configuration of the information providing server 300 for M-mode ultrasound images according to an embodiment of the present invention has been described.
  • a method of providing information on M-mode ultrasound images through the above-described M-mode ultrasound image information providing server 300 will be described.
• Figure 13 is a flowchart of a method for providing information on an M-mode ultrasound image according to an embodiment of the present invention.
• Figures 14 to 19 are exemplary diagrams of a method for providing information on an M-mode ultrasound image according to various embodiments of the present invention.
  • the M-mode ultrasound image information providing server 300 receives the M-mode ultrasound image of at least one object from the ultrasound image diagnosis device 200 (S410).
  • the information providing server 300 for M-mode ultrasound images may receive an M-mode ultrasound image targeting the patient's heart from the ultrasound image diagnosis device 200.
  • the server 300 providing information about the M-mode ultrasound image may receive the M-mode ultrasound image in which the M-mode region is cropped.
  • the server 300 providing information about the M-mode ultrasound image may perform cropping on the essential area for segmenting the M-mode ultrasound image.
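• As a purely illustrative sketch (the patent does not specify an implementation), cropping the essential M-mode area before segmentation could look like the following Python snippet; the box coordinates shown are hypothetical placeholders, not values from the patent:

```python
# Minimal sketch of cropping the M-mode strip before segmentation.
# The crop box is assumed to be known from the scanner layout or a
# simple detector; the example coordinates are illustrative only.
import numpy as np

def crop_m_mode(image: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """box = (top, bottom, left, right) in pixel coordinates."""
    top, bottom, left, right = box
    return image[top:bottom, left:right]

# e.g., m_mode_strip = crop_m_mode(frame, (120, 520, 60, 940))
```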
  • the information providing server 300 for the M-mode ultrasound image divides a plurality of cardiac tomographic regions within the M-mode ultrasound image (S420).
  • the server 300 for providing information on M-mode ultrasound images may use the first model learned to segment a plurality of cardiac tomographic regions using the M-mode ultrasound images as input.
• the plurality of cardiac tomographic regions may be at least two regions selected from the anterior wall of the right ventricle (RV anterior wall), the right ventricle (RV), the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium (LA), and the posterior wall of the left atrium.
• alternatively, the plurality of cardiac tomographic regions may be at least two regions selected from the RV anterior wall, the right ventricle, the interventricular septum (IVS), the left ventricle (LV), and the LV posterior wall. However, the regions are not limited to these, and more diverse regions can be segmented depending on the type of M-mode.
• in step S420, when the M-mode ultrasound image 510 is input, the information providing server 300 for M-mode ultrasound images may output the plurality of segmented cardiac tomographic regions 540.
  • a detailed description of the first model will be provided later with reference to FIG. 15.
  • the information providing server 300 for M-mode ultrasound images may perform post-processing on a plurality of divided cardiac tomography regions.
• the information providing server 300 for M-mode ultrasound images may remove segmented areas other than the M-mode area. Additionally, referring to FIG. 16B, holes in the M-mode segmentation area can be filled.
  • the method of performing post-processing is not limited to the above-mentioned method, and may be performed in various post-processing methods.
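• As an illustration only (not part of the patent), post-processing of the kind just described — keeping the main segmented component and filling holes — could be sketched as follows, assuming each region is given as a binary mask; the function name and the use of SciPy are assumptions:

```python
# Sketch of the described post-processing: remove stray segmented fragments
# outside the main M-mode region and fill holes inside it.
import numpy as np
from scipy import ndimage

def postprocess_region_mask(mask: np.ndarray) -> np.ndarray:
    """Keep the largest connected component of a binary mask and fill its holes."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask                      # nothing segmented, nothing to clean
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    largest = labeled == (int(np.argmax(sizes)) + 1)   # drop smaller fragments
    return ndimage.binary_fill_holes(largest)
```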
  • the information providing server 300 for the M-mode ultrasound image predicts the diastolic phase and systolic phase within the M-mode ultrasound image (S430).
  • the information providing server 300 for the M-mode ultrasound image may use the second model learned to predict the diastole and systole of the M-mode ultrasound image by using the M-mode ultrasound image as input.
  • the second model may be a model that predicts the end-diastole and end-systole of the M-mode ultrasound image.
  • the second model may be a model that predicts each period of diastole and systole of an M-mode ultrasound image.
  • the information providing server 300 for the M-mode ultrasound image can predict the diastole and systole phases within the M-mode ultrasound image.
• the information providing server 300 for M-mode ultrasound images may determine the end-diastole 710 and end-systole 720 of the M-mode ultrasound image by prediction.
  • the information providing server 300 for an M-mode ultrasound image may predict and determine the systolic period 740 and the diastolic period 750.
• the information providing server 300 for M-mode ultrasound images may exclude an area 730, in which only a portion of a systolic or diastolic phase is captured, from the training of the second model.
• when the information providing server 300 for M-mode ultrasound images inputs the M-mode ultrasound image 510 to the second model 530, the predicted diastolic and systolic phases 550 may be output.
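• For illustration, one plausible way (an assumption, not specified in the patent) to turn a per-column diastole/systole prediction such as 550 into end-diastole and end-systole points is to take the columns where the predicted phase changes:

```python
# Sketch: derive ED/ES points from a per-time-column phase prediction.
# The 0 = diastole / 1 = systole coding is an assumption for this example.
import numpy as np

def phase_transitions(phase: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """phase: 1D array over time columns, 0 = diastole, 1 = systole."""
    d = np.diff(phase.astype(int))
    ed_cols = np.flatnonzero(d == 1) + 1    # diastole -> systole: end-diastole
    es_cols = np.flatnonzero(d == -1) + 1   # systole -> diastole: end-systole
    return ed_cols, es_cols
```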
  • the first model and the second model may be composed of a sampling stage and an upsampling stage. At this time, the first model and the second model may share a sampling stage.
  • the first model and the second model may be configured in a multi-task learning method.
  • the first model and the second model may be configured in a U-Net structure.
  • the sampling stage may be composed of the processes of the first tool 610 and the second tool 620.
  • tool 1 610 may be composed of the following processes: 3x3 Convolution - Batch Normalization - Rectified Linear Unit.
• here, the 3x3 Convolution applies a filter of size 3x3, Batch Normalization prevents input values from becoming biased by adjusting their mean and standard deviation, and the Rectified Linear Unit zeroes out values below a reference value.
• tool 2 620 may be configured by executing 3x3 Convolution - Batch Normalization - Rectified Linear Unit twice, followed by a 2x2 Max Pool. At this time, the 2x2 Max Pool extracts the largest value within each 2x2 window.
  • the upsampling stage may be configured differently for the first model and the second model.
• the filter size, that is, the convolution kernel size, of the upsampling stage of the second model may be smaller than the filter size of the upsampling stage of the first model.
  • the upsampling stage of the first model may be composed of tool number 3 630, tool number 4 640, and tool number 5 650.
  • tool 3 630 may be configured to execute Upsampling, Concatenate, and 3x3 Convolution - Batch Normalization - Rectified Linear Unit twice.
  • tool 4 640 may be configured as 3x3 Convolution.
  • tool 5 650 may be configured as Concatenate.
• the upsampling stage of the second model may be composed of the 4th tool 640, the 5th tool 650, and the 6th tool 660.
• the 5th tool 650 is the same as in the upsampling stage of the first model, but here the 4th tool 640 may be configured as a 1x1 Convolution.
  • tool number 6 660 can be configured to execute Upsampling, Concatenate, and 1x1 Convolution - Batch Normalization - Rectified Linear Unit twice.
• the first model and the second model are not limited to the above-mentioned U-Net structure; the first model may be any model that takes an M-mode ultrasound image as input and outputs a plurality of cardiac tomographic regions, and the second model may be any model that takes an M-mode ultrasound image as input and predicts the diastolic and systolic phases within it.
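• As a hedged sketch only: the shared sampling stage with two upsampling branches described above (3x3 convolutions in the first model's branch, 1x1 convolutions in the second model's branch) could be implemented along the following lines in PyTorch; the two-level depth, channel widths, and class names are illustrative assumptions, not the patent's exact configuration:

```python
# Illustrative multi-task U-Net-style network: one shared sampling (encoder)
# stage, one 3x3-convolution upsampling branch for region segmentation, and
# one 1x1-convolution upsampling branch for diastole/systole prediction.
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, k):
    # kxk Convolution - Batch Normalization - Rectified Linear Unit
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DownBlock(nn.Module):
    # Tool 2: (3x3 Conv - BN - ReLU) x 2, then 2x2 Max Pool
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.convs = nn.Sequential(conv_bn_relu(in_ch, out_ch, 3),
                                   conv_bn_relu(out_ch, out_ch, 3))
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.convs(x)             # kept for later concatenation
        return self.pool(skip), skip

class UpBlock(nn.Module):
    # Tool 3 (k=3) or Tool 6 (k=1): Upsample - Concatenate - (Conv-BN-ReLU) x 2
    def __init__(self, in_ch, skip_ch, out_ch, k):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.convs = nn.Sequential(conv_bn_relu(in_ch + skip_ch, out_ch, k),
                                   conv_bn_relu(out_ch, out_ch, k))

    def forward(self, x, skip):
        x = self.up(x)
        return self.convs(torch.cat([x, skip], dim=1))

class MultiTaskMModeNet(nn.Module):
    def __init__(self, n_regions=7):
        super().__init__()
        # shared sampling stage
        self.down1 = DownBlock(1, 32)
        self.down2 = DownBlock(32, 64)
        self.bottleneck = conv_bn_relu(64, 128, 3)
        # first model's upsampling branch (3x3 convolutions) -> regions
        self.seg_up2 = UpBlock(128, 64, 64, k=3)
        self.seg_up1 = UpBlock(64, 32, 32, k=3)
        self.seg_head = nn.Conv2d(32, n_regions, 3, padding=1)
        # second model's upsampling branch (1x1 convolutions) -> phases
        self.ph_up2 = UpBlock(128, 64, 64, k=1)
        self.ph_up1 = UpBlock(64, 32, 32, k=1)
        self.ph_head = nn.Conv2d(32, 2, 1)   # diastole / systole maps

    def forward(self, x):                    # x: (B, 1, H, W), H and W % 4 == 0
        x, s1 = self.down1(x)
        x, s2 = self.down2(x)
        z = self.bottleneck(x)
        seg = self.seg_head(self.seg_up1(self.seg_up2(z, s2), s1))
        phase = self.ph_head(self.ph_up1(self.ph_up2(z, s2), s1))
        return seg, phase
```

• In such a multi-task setup, a combined loss (e.g., the sum of a segmentation loss on the first head and a phase-classification loss on the second) would train both branches jointly over the shared sampling stage.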
  • the information providing server 300 for M-mode ultrasound images provides measurements of a plurality of divided cardiac tomography regions to the medical staff device 100 (S440).
• the information providing server 300 for M-mode ultrasound images may determine measurement values for the plurality of segmented cardiac tomographic regions based on the diastolic and systolic phases predicted by the second model.
  • the information providing server 300 for M-mode ultrasound images may determine measurement values for a plurality of cardiac tomography regions based on the end-diastole and end-systole provided from the second model.
• the information providing server 300 for M-mode ultrasound images may determine measurements of the plurality of cardiac tomographic regions at the transition points between the diastolic and systolic periods, based on the respective periods provided by the second model.
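• As an illustrative sketch only (the patent does not specify an implementation), a thickness or internal-diameter measurement could then be read off the segmentation map at the predicted end-diastole/end-systole columns; the label-map layout, region ids, and mm-per-pixel calibration below are assumptions:

```python
# Sketch: measure the vertical extent of one segmented region at a given
# time column of the M-mode label map (depth x time), converted to mm.
import numpy as np

def thickness_at(labels: np.ndarray, col: int, region_id: int,
                 mm_per_px: float) -> float:
    rows = np.flatnonzero(labels[:, col] == region_id)
    if rows.size == 0:
        return 0.0                        # region absent at this column
    return float(rows.max() - rows.min() + 1) * mm_per_px

# e.g., LVID at a predicted end-diastole column vs. an end-systole column:
# lvidd = thickness_at(labels, ed_col, LV_REGION_ID, mm_per_px)
# lvids = thickness_at(labels, es_col, LV_REGION_ID, mm_per_px)
```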
• Figures 18A and 18B relate to the results for a normal patient in sinus rhythm.
• sinus rhythm is the cardiac rhythm created when myocardial depolarization begins at the sinoatrial node. It can be seen that the second embodiment of the method for providing information on M-mode ultrasound images according to an embodiment of the present invention accurately predicted end-diastole (ED) and end-systole (ES) for normal patients.
  • Figures 18C to 18E may relate to the results of patients with Atrial Fibrillation.
• atrial fibrillation is a type of arrhythmia in which coordinated contraction of the atria is lost and the atria contract irregularly.
• in FIGS. 18C to 18E, which show a patient with atrial fibrillation, the intervals between the predicted end-diastole (ED) and end-systole (ES) are longer and more irregular than the intervals between the predicted ED and ES of the normal patient in FIG. 18B. Even so, it can be confirmed that the second embodiment of the method for providing information on an M-mode ultrasound image accurately predicted end-diastole (ED) and end-systole (ES).
• FIGS. 19A and 19B exemplarily show training data of a segmentation model used in various embodiments of the present invention, and FIGS. 19C and 19D exemplarily show measurements determined by a method for providing information on an M-mode ultrasound image according to various embodiments of the present invention.
  • the segmentation model can use M-mode ultrasound images for learning, each of which is labeled with seven regions: the anterior wall of the right ventricle, the right ventricle, the anterior wall of the aorta, the aorta, the posterior wall of the aorta, the left atrium, and the posterior wall of the left atrium. That is, the segmentation model may be a model learned to segment the region for left atrium-aorta (LA-Ao) analysis.
  • LA-Ao left atrium-aorta
• the segmentation model can also use training M-mode ultrasound images in which the five regions of the right ventricle anterior wall, the right ventricle, the interventricular septum, the left ventricle, and the left ventricle posterior wall are each labeled. That is, the segmentation model may be a model trained to segment the regions for left ventricle (LV) analysis. Meanwhile, the training M-mode images are not limited to these, and the labeled regions may differ depending on the regions to be segmented and the analysis goal.
• the measurements may be the left atrial thickness corresponding to end-systole and the aortic thickness corresponding to end-diastole.
• the measurements may include the interventricular septum (IVS) thickness, the left ventricular internal diameter (LVID), and the left ventricular posterior wall (LVPW) thickness, each determined at end-systole and end-diastole.
  • the measurement values are not limited to those described above and can be set in more diverse ways depending on the analysis goal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
• Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The present invention relates to a method, implemented by a processor, for providing information on an M-mode ultrasound image, and to a device using the same, the method comprising the steps of: receiving an M-mode ultrasound image of a subject; segmenting a plurality of cardiac tomographic regions within the M-mode ultrasound image using a segmentation model trained to segment the M-mode ultrasound image into a plurality of cardiac tomographic regions; and determining measurements for the plurality of segmented cardiac tomographic regions. The present invention further relates to a method of providing a user interface for providing information on an M-mode ultrasound image, the method being implemented by a processor and comprising the steps of: receiving an M-mode ultrasound image of a subject; segmenting a plurality of cardiac tomographic regions within the M-mode ultrasound image using a first model trained to segment the M-mode ultrasound image into a plurality of cardiac tomographic regions, using the M-mode ultrasound image as input; determining a diastolic phase and a systolic phase within the M-mode ultrasound image using a second model trained to predict the diastolic and systolic phases of the M-mode ultrasound image, using the M-mode ultrasound image as input; and determining measurements for the plurality of cardiac tomographic regions segmented on the basis of the diastolic and systolic phases.
PCT/KR2023/003736 2022-04-22 2023-03-21 Procédé de fourniture d'informations sur une image ultrasonore en mode m, et dispositif de fourniture d'informations sur une image ultrasonore en mode m à l'aide de celui-ci WO2023204457A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020220050236A KR20230150623A (ko) 2022-04-22 2022-04-22 M-모드 초음파 영상에 대한 정보 제공 방법 및 이를 이용한 m-모드 초음파 영상에 대한 정보 제공용 디바이스
KR10-2022-0050236 2022-04-22
KR10-2023-0006913 2023-01-17
KR20230006913 2023-01-17

Publications (1)

Publication Number Publication Date
WO2023204457A1 true WO2023204457A1 (fr) 2023-10-26

Family

ID=88420273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/003736 WO2023204457A1 (fr) 2022-04-22 2023-03-21 Procédé de fourniture d'informations sur une image ultrasonore en mode m, et dispositif de fourniture d'informations sur une image ultrasonore en mode m à l'aide de celui-ci

Country Status (1)

Country Link
WO (1) WO2023204457A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014046122A (ja) * 2012-09-04 2014-03-17 Ge Medical Systems Global Technology Co Llc 医用画像表示装置及び医用画像診断システム
KR20140048740A (ko) * 2012-10-16 2014-04-24 삼성메디슨 주식회사 M-모드 초음파 이미지에서 기준 이미지를 제공하는 기준 이미지 제공 장치 및 방법
JP5773281B2 (ja) * 2010-07-14 2015-09-02 株式会社 東北テクノアーチ 血管疾患を判定するためのプログラム、媒体および装置
JP2018509229A (ja) * 2015-03-25 2018-04-05 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. セグメンテーション選択システム及びセグメンテーション選択方法
KR20190053807A (ko) * 2017-11-10 2019-05-20 지멘스 메디컬 솔루션즈 유에스에이, 인크. 초음파 이미징에서의 기계-보조 작업흐름


Similar Documents

Publication Publication Date Title
WO2015002409A1 (fr) Procédé de partage d'informations dans une imagerie ultrasonore
WO2017065475A1 (fr) Dispositif électronique et procédé de traitement de gestes associé
WO2020235966A1 (fr) Dispositif et procédé de traitement d'une image médicale à l'aide de métadonnées prédites
WO2016133349A1 (fr) Dispositif électronique et procédé de mesure d'informations biométriques
WO2015093724A1 (fr) Méthode et appareil permettant de fournir des données d'analyse de vaisseaux sanguins en utilisant une image médicale
WO2017026743A1 (fr) Procédé pour jouer d'un instrument musical virtuel et dispositif électronique pour prendre en charge ce procédé
WO2014142468A1 (fr) Procédé de fourniture d'une copie image et appareil à ultrasons associé
WO2019164275A1 (fr) Procédé et dispositif pour reconnaître la position d'un instrument chirurgical et caméra
WO2015076508A1 (fr) Procédé et appareil d'affichage d'image ultrasonore
WO2019083227A1 (fr) Procédé de traitement d'image médicale, et appareil de traitement d'image médicale mettant en œuvre le procédé
WO2012134106A2 (fr) Procédé et appareil pour le stockage et l'affichage d'informations d'image médicale
WO2015080522A1 (fr) Méthode et appareil ultrasonore pour le marquage de tumeur sur une image élastographique ultrasonore
EP3110333A2 (fr) Procédé et appareil d'imagerie diagnostique, et support d'enregistrement associé
WO2015088277A1 (fr) Procédé et appareil d'affichage d'une image ultrasonore
WO2020076133A1 (fr) Dispositif d'évaluation de validité pour la détection de région cancéreuse
WO2023182727A1 (fr) Procédé de vérification d'image, système de diagnostic l'exécutant, et support d'enregistrement lisible par ordinateur sur lequel le procédé est enregistré
WO2017010739A1 (fr) Dispositif électronique et son procédé d'entrée/de sortie
WO2023204457A1 (fr) Procédé de fourniture d'informations sur une image ultrasonore en mode m, et dispositif de fourniture d'informations sur une image ultrasonore en mode m à l'aide de celui-ci
WO2014200265A1 (fr) Procédé et appareil pour présenter des informations médicales
EP3073930A1 (fr) Méthode et appareil ultrasonore pour le marquage de tumeur sur une image élastographique ultrasonore
WO2016093453A1 (fr) Appareil de diagnostic à ultrasons et son procédé de fonctionnement
WO2021167318A1 (fr) Procédé de détection de position, appareil, dispositif électronique et support de stockage lisible par ordinateur
WO2020061887A1 (fr) Procédé et dispositif de mesure de la fréquence cardiaque et support de stockage lisible par ordinateur
WO2023080697A1 (fr) Procédé de division de signal cardiaque et dispositif de division de signal cardiaque utilisant ledit procédé de division de signal cardiaque
WO2016047867A1 (fr) Procédé de traitement d'image à ultrasons et appareil d'imagerie à ultrasons associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23792037

Country of ref document: EP

Kind code of ref document: A1