WO2024090746A1 - Ultrasound image processing method and device - Google Patents

Ultrasound image processing method and device

Info

Publication number
WO2024090746A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
objects
annotation
image
ultrasound image
Prior art date
Application number
PCT/KR2023/011445
Other languages
English (en)
Korean (ko)
Inventor
김덕석
최보경
Original Assignee
주식회사 엠티이지
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 엠티이지
Publication of WO2024090746A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/461: Displaying means of special interest
    • A61B 8/463: Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10132: Ultrasound image

Definitions

  • The technical field of the present disclosure relates to ultrasound image processing methods and devices for obtaining optimal images, that is, to providing useful information to users through image processing of ultrasound images.
  • Video media can record a scene as it is and convey content to viewers more effectively through audio-visual effects. Because of these advantages of video media, information delivery using video is carried out in various fields. For example, YouTube, a leader in the video market, provides users with the information they want more effectively through keyword search.
  • The problem to be solved by the present disclosure is to provide an ultrasound image processing method and device that improve user convenience by providing a way to easily utilize ultrasound images.
  • A user-optimized ultrasound image processing method according to a first aspect of the present disclosure may include: acquiring an ultrasound image including one or more objects of interest; obtaining an optimal image for the one or more objects of interest from the ultrasound image; obtaining an annotation used to access the optimal image; obtaining a time table representing a monitoring situation for the one or more objects of interest; and providing the annotation and the time table that operates in conjunction with the annotation.
  • annotation may include information about the start and end points of the optimal image.
  • the time table may include information about when the one or more objects of interest are recognized simultaneously.
  • the step of providing the time table may highlight and display on the time table a point in time when a preset number or more objects of interest among the one or more objects of interest are simultaneously recognized.
  • the step of displaying the one or more objects of interest in different ways may display the one or more objects of interest in different colors.
  • the step of acquiring the optimal image from the ultrasound image may be performed before the step of performing the segmentation.
  • the optimal image may be obtained from the ultrasound image based on a positional relationship between the one or more objects of interest.
  • the one or more objects of interest may include an artery, a nervous system, and a rib, and
  • the step of acquiring the optimal image from the ultrasound image may determine, as the optimal image, an image in which the one or more objects of interest are arranged in the order of the nervous system, the artery, and the rib.
  • the step of providing the time table may display, in an area adjacent to the annotation, an editing graphic for editing the start and end points of the optimal image indicated by the annotation and a removal graphic for removing the annotation.
  • An ultrasound image processing device according to a second aspect of the present disclosure may include: a receiver that acquires an ultrasound image including one or more objects of interest; and a processor that obtains an optimal image for the one or more objects of interest from the ultrasound image, obtains an annotation used to access the optimal image, obtains a time table indicating a monitoring situation for the one or more objects of interest, and provides the annotation and the time table that operates in conjunction with the annotation.
  • annotation may include information about the start and end points of the optimal image.
  • the time table may include information about when the one or more objects of interest are recognized simultaneously.
  • the processor may perform segmentation on the one or more objects of interest included in the ultrasound image, and the device may further include a display that displays the one or more objects of interest on which the segmentation has been performed in different ways.
  • the processor may obtain the optimal image from the ultrasound image before performing the segmentation.
  • the optimal image may be obtained from the ultrasound image based on a positional relationship between the one or more objects of interest.
  • the display may further display, in an area adjacent to the annotation, an editing graphic for editing the start and end points of the optimal image indicated by the annotation and a removal graphic for removing the annotation.
  • the third aspect of the present disclosure may provide a non-transitory computer-readable recording medium on which a program for implementing the method of the first aspect is recorded.
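  • As a non-authoritative illustration of how the claimed steps could fit together, the following Python sketch chains them on an already-acquired list of frames; the frame layout, the object names, and the rule used to pick the optimal section are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """Access point to the optimal image: its start and end times in seconds."""
    start_s: float
    end_s: float
    label: str = "optimal image"

def process_ultrasound(frames):
    """Chain the claimed steps on an already-acquired list of frames.

    Each frame is assumed to be a dict with a timestamp ('t') and the names of
    the objects of interest detected in it ('objects').
    """
    # Optimal image: here, simply the section in which all three objects of interest appear.
    optimal = [f for f in frames if {"nervous system", "artery", "rib"} <= set(f["objects"])]
    # Annotation: stores where the optimal section starts and ends so it can be accessed directly.
    annotation = Annotation(optimal[0]["t"], optimal[-1]["t"]) if optimal else None
    # Time table: records, per object of interest, the times at which it was recognized.
    time_table = {}
    for f in frames:
        for obj in f["objects"]:
            time_table.setdefault(obj, []).append(f["t"])
    # The annotation and the time table are provided together (here: returned to a caller/UI).
    return annotation, time_table

frames = [
    {"t": 0, "objects": ["artery"]},
    {"t": 1, "objects": ["artery", "nervous system", "rib"]},
    {"t": 2, "objects": ["artery", "nervous system", "rib"]},
    {"t": 3, "objects": ["rib"]},
]
print(process_ultrasound(frames))
```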
  • FIG. 1 is a diagram illustrating an example of a device or server implemented on a system according to an embodiment.
  • Figure 2 is a block diagram schematically showing the configuration of a device according to an embodiment.
  • Figure 3 is a flowchart showing each step in which a device operates according to an embodiment.
  • FIG. 4 is a diagram illustrating an example in which a device acquires an optimal image according to an embodiment.
  • FIG. 5 is a diagram illustrating an example in which a device acquires an annotation according to an embodiment.
  • FIG. 6 is a diagram illustrating an example in which a device acquires a time table according to an embodiment.
  • FIG. 7 is a diagram illustrating an example in which a device according to an embodiment provides optimal images, time tables, and annotations in conjunction with each other.
  • Spatially relative terms such as “below”, “beneath”, “lower”, “above”, and “upper” can be used to easily describe the relationship between one component and other components as shown in the drawings. Spatially relative terms should be understood to include directions of components during use or operation in addition to the directions shown in the drawings. For example, if a component shown in a drawing is flipped over, a component described as “below” or “beneath” another component would then be placed “above” the other component. Accordingly, the illustrative term “below” can encompass both downward and upward directions. Components can also be oriented in other directions, so spatially relative terms can be interpreted according to orientation.
  • FIG. 1 is a diagram illustrating an example in which a device 100 or a server according to an embodiment is implemented on a system.
  • the medical information system may include a device 100, an external server 130, a storage medium 140, a communication device 150, a virtual server 160, a user terminal 170, and a network.
  • the medical information system may further include a blockchain server (not shown) that operates in conjunction with the network.
  • those skilled in the art may understand that some of the components shown in FIG. 1 may be omitted.
  • the device 100 may obtain information related to medical procedures such as surgery from various sources.
  • the device 100 may obtain information (e.g., video) related to medical procedures such as surgery from an information acquisition device (not shown).
  • Information acquisition devices may include, but are not limited to, imaging devices, recording devices, and biological signal acquisition devices.
  • the device 100 may obtain information (e.g., video) related to medical procedures such as surgery from the network.
  • Biological signals may include signals obtained from living organisms, such as body temperature signals, pulse signals, respiration signals, blood pressure signals, electromyography signals, and brain wave signals, without limitation.
  • An imaging device, which is an example of an information acquisition device (not shown), may include a first imaging device (e.g., CCTV) that captures the entire operating room and a second imaging device (e.g., an endoscope) that focuses on the surgical site, but is not limited thereto.
  • the device 100 may acquire images (videos, still images, etc.) related to medical procedures such as surgery from an information acquisition device (not shown) or a network.
  • Video can be understood as a concept that includes both moving images and still images.
  • the device 100 may perform image processing on the acquired image.
  • Image processing according to an embodiment may include naming, encoding, storage, transmission, editing, and metadata creation for each image, but is not limited thereto.
  • the device 100 may transmit medical treatment-related information obtained from an information acquisition device (not shown) or a network as is or as updated information to the network.
  • Transmission information transmitted by the device 100 to the network may be transmitted to external devices 130, 140, 150, 160, and 170 through the network.
  • the device 100 may transmit the updated video to the external server 130, storage medium 140, communication device 150, virtual server 160, user terminal 170, etc. through the network.
  • Device 100 may receive various information (e.g., feedback information, an update request, etc.) from the external devices 130, 140, 150, 160, and 170.
  • the communication device 150 may refer, without limitation, to a device used for communication (e.g., a gateway), and the communication device 150 may communicate with a device that is not directly connected to the network, such as the user terminal 180.
  • the device 100 may include an input unit, an output unit, a processor, a memory, etc., and may also include a display device (not shown). For example, through the display device, the user can check the communication status, memory usage status, power status (e.g., battery charge level or external power supply), thumbnail images of stored videos, the currently operating mode, and the like.
  • display devices may include a liquid crystal display, a thin-film-transistor liquid crystal display, an organic light-emitting diode display, a flexible display, a 3D display, an electrophoretic display, etc.
  • the display device may include two or more displays depending on the implementation type. Additionally, when the touchpad of the display has a layered structure and is configured as a touch screen, the display can be used as an input device in addition to an output device.
  • devices on the network may communicate with each other through wired or wireless communication.
  • a network may be implemented as a type of server and may include a Wi-Fi chip, Bluetooth chip, wireless communication chip, NFC chip, etc.
  • the device 100 can communicate with various external devices using a Wi-Fi chip, Bluetooth chip, wireless communication chip, NFC chip, etc.
  • Wi-Fi chips and Bluetooth chips can communicate using Wi-Fi and Bluetooth methods, respectively.
  • In the case of the Wi-Fi or Bluetooth method, various connection information such as an SSID and a session key is first exchanged, and after a communication connection is established using this information, various kinds of information can be transmitted and received.
  • a wireless communication chip can perform communication according to various communication standards such as IEEE, Zigbee, 3G (3rd Generation), 3GPP (3rd Generation Partnership Project), and LTE (Long Term Evolution).
  • the NFC chip can operate in the NFC (Near Field Communication) method using the 13.56 MHz band among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
  • the input unit may refer to a means through which a user inputs data to control the device 100.
  • the input unit may include a keypad, a dome switch, a touch pad (contact capacitive type, pressure resistive type, infrared sensing type, surface ultrasonic conduction type, integral strain gauge type, piezo effect type, etc.), a jog wheel, a jog switch, etc., but is not limited thereto.
  • the output unit may output an audio signal, a video signal, or a vibration signal, and the output unit may include a display device, a sound output device, and a vibration motor.
  • the user terminal 170 may include, but is not limited to, various wired and wireless communication devices such as a smartphone, SmartPad, and tablet PC.
  • the device 100 may update medical-procedure-related information (e.g., video) obtained from an information acquisition device (not shown). For example, the device 100 may perform naming, encoding, storage, transmission, editing, metadata creation, etc. on images acquired from an information acquisition device (not shown). As an example, the device 100 may name an image file using metadata (e.g., creation time) of the acquired image. As another example, the device 100 may classify images related to medical procedures obtained from an information acquisition device (not shown). The device 100 can use a trained AI model to classify images related to medical procedures based on various criteria such as type of surgery, operator, and location of surgery.
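  • As a rough, hypothetical sketch of such classification, a small trained model could map simple per-video features to a procedure type; the features, labels, and model choice below are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical sketch: classifying procedure videos by surgery type from simple
# per-video feature vectors, with a small scikit-learn model standing in for the
# trained AI mentioned above.
from sklearn.tree import DecisionTreeClassifier

# Placeholder features per video (duration in minutes, number of detected
# instruments, mean brightness); real features would be derived from the video.
train_features = [[35, 4, 0.42], [120, 9, 0.55], [40, 3, 0.40], [110, 8, 0.60]]
train_labels = ["ultrasound-guided block", "laparoscopy",
                "ultrasound-guided block", "laparoscopy"]

clf = DecisionTreeClassifier().fit(train_features, train_labels)

new_video_features = [[45, 5, 0.45]]
print(clf.predict(new_video_features))  # likely ['ultrasound-guided block'] on this toy data
```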
  • the device 100 in FIG. 1 may be implemented as a server, and the scope of physical devices in which the device 100 can be implemented is not interpreted as being limited.
  • 'provision' can be interpreted to include not only the transmission of information but also any action that provides an object such as information or a screen, for example the action of displaying a screen, and is not limited to a specific implementation action.
  • a selection input may be obtained as an example of a user input on a screen provided by the device 100 according to an embodiment.
  • Selection input according to one embodiment may be obtained through various methods, such as touch input, click input using a mouse, etc., and typing input using a keyboard.
  • FIG. 2 is a block diagram schematically showing the configuration of the device 100 according to an embodiment.
  • the device 100 may include a receiving unit 210, a processor 220, an output unit 230, and a memory 240.
  • the device 100 may be implemented with more components than those shown in FIG. 2 , or the device 100 may be implemented with fewer components than the components shown in FIG. 2 .
  • the device 100 may further include a communication unit (not shown) or an input unit (not shown) in addition to the receiver 210, the processor 220, the output unit 230, and the memory 240. Additionally, an example of the output unit 230 may be a display (not shown).
  • the receiver 210 may obtain a selection input for a portion of the screen where a plurality of objects are provided in the video. Selective input may be provided in various ways, such as touch input, click input using a mouse, etc., and typing input using a keyboard.
  • the receiver 210 may acquire an ultrasound image including one or more objects of interest.
  • the ultrasound image may be acquired from an ultrasound image acquisition device (not shown) used to acquire the ultrasound image.
  • the receiver 210 may receive an ultrasound image wired or wirelessly using an ultrasound image acquisition device.
  • the receiver 210 may include an ultrasound image acquisition device.
  • the processor 220 acquires an optimal image for one or more objects of interest from an ultrasound image.
  • the processor 220 may learn samples for the optimal image and obtain the optimal image from the ultrasound image based on the learning result.
  • the processor 220 may obtain an optimal image from an ultrasound image based on the positional relationship between one or more objects of interest.
  • the processor 220 acquires an annotation used to access the optimal image.
  • An annotation according to an embodiment can provide a method of accessing an optimal image from an ultrasound image.
  • an annotation according to one example may provide a link to easily access a time section of 30 to 35 seconds, which is the time section corresponding to the optimal image in a 1-minute ultrasound image.
  • the processor 220 obtains a time table representing a monitoring situation for one or more objects of interest.
  • the time table contains information about when the object of interest is determined to be in the ultrasound image. Accordingly, the time table according to one embodiment may include information about when one or more objects of interest are recognized simultaneously.
  • the processor 220 provides annotations and a time table that operates in conjunction with the annotations.
  • the annotation, time table, and ultrasound image may operate in conjunction with each other.
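  • One possible (assumed) way to map the FIG. 2 component split onto code is the following skeleton, in which the receiver, processor, output unit, and memory each carry one responsibility; the class, method names, and placeholder criteria are hypothetical.

```python
class UltrasoundDevice:
    """Hypothetical skeleton mapping the FIG. 2 components to responsibilities (a sketch only)."""

    def __init__(self):
        self.memory = {}                     # memory 240: holds frames, annotation, time table

    def receiver(self, frames):
        """Receiver 210: obtain the ultrasound image (wired, wireless, or from a built-in probe)."""
        self.memory["frames"] = frames

    def processor(self):
        """Processor 220: derive the optimal image, its annotation, and the time table."""
        frames = self.memory["frames"]
        optimal = [f for f in frames if len(f["objects"]) >= 3]   # placeholder criterion
        self.memory["annotation"] = (optimal[0]["t"], optimal[-1]["t"]) if optimal else None
        self.memory["time_table"] = {
            obj: [f["t"] for f in frames if obj in f["objects"]]
            for frame in frames for obj in frame["objects"]
        }

    def output_unit(self):
        """Output unit 230 (e.g., a display): provide the annotation and the linked time table."""
        return self.memory.get("annotation"), self.memory.get("time_table")

device = UltrasoundDevice()
device.receiver([{"t": 0, "objects": ["artery"]},
                 {"t": 1, "objects": ["artery", "nervous system", "rib"]}])
device.processor()
print(device.output_unit())  # ((1, 1), {'artery': [0, 1], 'nervous system': [1], 'rib': [1]})
```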
  • FIG. 3 is a flowchart illustrating each step in which the device 100 operates according to an embodiment.
  • the device 100 acquires an ultrasound image including one or more objects of interest.
  • the ultrasound image may be acquired from an ultrasound image acquisition device (not shown) used to acquire the ultrasound image.
  • the device 100 may receive an ultrasound image wired or wirelessly from an ultrasound image acquisition device.
  • the device 100 may include an ultrasound image acquisition device.
  • An ultrasound image according to one embodiment may be provided as a file containing an image that is updated over time.
  • the ultrasound image may include an ultrasound image acquired as the depth is updated for the same viewpoint.
  • the ultrasound image may include an ultrasound image acquired as the aiming viewpoint is updated.
  • the device 100 acquires an optimal image for one or more objects of interest from an ultrasound image.
  • the device 100 can obtain an optimal image from an ultrasound image in various ways. For example, the device 100 may learn samples for the optimal image and obtain the optimal image from the ultrasound image based on the learning result. As another example, the device 100 may obtain an optimal image from an ultrasound image based on the positional relationship between one or more objects of interest.
  • a method for the device 100 according to an embodiment to obtain an optimal image from an ultrasound image based on the positional relationship between one or more objects of interest will be described.
  • One or more objects of interest may include arteries, nervous system, ribs, etc.
  • the device 100 may determine some of the ultrasound images among the plurality of ultrasound images as the optimal image based on the positional relationships of the nervous system, arteries, ribs, etc. For example, the device 100 may determine an ultrasound image arranged in the order of the nervous system, arteries, and ribs as the optimal image.
  • For example, an image in which the objects of interest are listed in that order along the direction from one side of the ultrasound image to the other, with the ribs arranged so as to support the arteries and the nervous system, can be determined to be the optimal image.
  • A more specific example of the optimal image will be described later with reference to FIG. 4.
  • the device 100 acquires an annotation used to access the optimal image.
  • An annotation according to an embodiment can provide a method of accessing an optimal image from an ultrasound image.
  • an annotation according to one example may provide a link to easily access a time section of 30 to 35 seconds, which is the time section corresponding to the optimal image in a 1-minute ultrasound image.
  • the device 100 may display an ultrasound image in a time section corresponding to the optimal image.
  • an annotation may include information about the start and end points of the optimal image.
  • the device 100 may display an ultrasound image corresponding to the start point of the optimal image.
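  • A minimal sketch of such an annotation, assuming it simply stores the start and end points and that a hypothetical player can seek to the start point, is shown below.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """Annotation used to access the optimal image: its start and end points."""
    start_s: float
    end_s: float
    label: str = "optimal image"

def jump_to_annotation(player, annotation: Annotation) -> None:
    """Move a (hypothetical) player to the start point of the optimal image."""
    player.seek(annotation.start_s)

class FakePlayer:
    def seek(self, t): print(f"showing ultrasound frame at {t} s")

# e.g., an annotation linking to the 30-35 s section of a 1-minute ultrasound video
jump_to_annotation(FakePlayer(), Annotation(start_s=30.0, end_s=35.0))
```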
  • the device 100 acquires a time table indicating a monitoring situation for one or more objects of interest.
  • the device 100 may provide a time table containing information about when an object of interest is monitored.
  • the time table contains information about when the object of interest is determined to be in the ultrasound image. Accordingly, the time table according to one embodiment may include information about when one or more objects of interest are recognized simultaneously.
  • the device 100 may highlight and display on the time table a point in time when a preset number of objects of interest or more among one or more objects of interest are recognized simultaneously.
  • the device 100 provides an annotation and a time table that operates in conjunction with the annotation.
  • the device 100 may provide annotations, time tables, and ultrasound images.
  • Ultrasound images may include optimal images.
  • the annotation, time table, and ultrasound image may operate in conjunction with each other. An example in which an annotation, a time table, and an ultrasound image are provided in conjunction with each other according to an embodiment will be described later with reference to FIG. 7 .
  • FIG. 4 is a diagram illustrating an example in which the device 100 acquires an optimal image according to an embodiment.
  • the device 100 according to an embodiment can obtain an optimal image in various ways.
  • the device 100 may acquire an optimal image by performing learning.
  • the device 100 may determine the optimal image among a plurality of ultrasound images by learning about a sample optimal image used as an example of the optimal image.
  • Various learning algorithms can be used for learning.
  • the location of the object of interest can be used as a main variable.
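  • One plausible reading of this learning-based selection, sketched below under stated assumptions, is a per-frame scorer in which fixed weights stand in for the trained model and the count and location of the objects of interest enter as features; the feature set and weight values are assumptions.

```python
from typing import Dict, List

def frame_score(frame: Dict, weights: Dict[str, float]) -> float:
    """Score a frame from simple features; the weights stand in for a trained model."""
    features = {
        "n_objects": len(frame["objects"]),                   # how many objects of interest are visible
        "centered": 1.0 if frame.get("centered") else 0.0,    # object-of-interest location as a main variable
    }
    return sum(weights[k] * v for k, v in features.items())

def pick_optimal_frame(frames: List[Dict], weights: Dict[str, float]) -> Dict:
    """Return the highest-scoring frame as the 'optimal image' candidate."""
    return max(frames, key=lambda f: frame_score(f, weights))

frames = [
    {"t": 0, "objects": ["artery"], "centered": False},
    {"t": 1, "objects": ["artery", "nervous system", "rib"], "centered": True},
]
weights = {"n_objects": 1.0, "centered": 0.5}    # assumed values; a real model would learn these
print(pick_optimal_frame(frames, weights)["t"])  # -> 1
```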
  • As another example, the device 100 may obtain the optimal image from the ultrasound image based on the positional relationship between one or more objects of interest.
  • the device 100 may determine an ultrasound image in which the nervous system 410, the artery 420, and the ribs 430 are arranged in that order as the optimal image. Specifically, if the nervous system 410, the artery 420, and the ribs 430 appear in that order along the direction from one side (e.g., the upper side) of the ultrasound image to the other side (e.g., the lower side), with the ribs 430 positioned so as to support the artery 420 and the nervous system 410, the image can be determined to be the optimal image.
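  • A minimal sketch of that positional-relationship check is shown below, assuming each detected object of interest is reduced to a centroid whose y-coordinate increases from the upper side of the image to the lower side; this representation is an assumption, not the disclosed method.

```python
from typing import Dict

def is_optimal_arrangement(centroids: Dict[str, float]) -> bool:
    """Check the positional relationship described above: going from the upper side of the
    image to the lower side, nervous system (410), then artery (420), then ribs (430).
    Centroid y-coordinates are assumed to grow downward."""
    try:
        return centroids["nervous system"] < centroids["artery"] < centroids["rib"]
    except KeyError:
        return False  # an object of interest is missing from this frame

# Example: a frame whose detections follow the expected top-to-bottom order.
print(is_optimal_arrangement({"nervous system": 0.2, "artery": 0.5, "rib": 0.8}))  # True
```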
  • the device 100 may perform segmentation on one or more objects of interest included in an ultrasound image. For example, the device 100 may distinguish the nervous system 410, arteries 420, and ribs 430 from each other by performing segmentation.
  • the device 100 may display a plurality of objects of interest in different ways.
  • the device 100 may display one or more objects of interest on which segmentation has been performed in different ways.
  • the plurality of objects of interest may be distinguished from each other, and the device 100 may display the plurality of distinct objects of interest in different ways.
  • the device 100 may display the nervous system 410, arteries 420, and ribs 430 in different colors, but is not limited thereto.
  • the device 100 may distinguish and display a plurality of objects of interest in a different manner not shown in FIG. 4 .
  • the device 100 may display the outlines of the nervous system 410, arteries 420, and ribs 430 by overlapping them on the ultrasound image.
  • the device 100 may exclude the ultrasound image and display an image that provides only the object of interest.
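  • As a sketch of displaying segmented objects of interest in different colors, per-object masks could be blended onto the frame as below; the chosen colors and the NumPy mask representation are assumptions.

```python
import numpy as np

# Assumed colors for each segmented object of interest (RGB, 0-255).
COLORS = {"nervous system": (255, 215, 0), "artery": (220, 20, 60), "rib": (70, 130, 180)}

def overlay_objects(gray: np.ndarray, masks: dict, alpha: float = 0.5) -> np.ndarray:
    """Overlay each object's segmentation mask on the ultrasound frame in a different color."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    for name, mask in masks.items():
        color = np.array(COLORS[name], dtype=np.float32)
        rgb[mask] = (1 - alpha) * rgb[mask] + alpha * color   # blend only where the mask is True
    return rgb.astype(np.uint8)

# Tiny synthetic example: a 4x4 frame with a 2x2 "artery" mask.
frame = np.full((4, 4), 128, dtype=np.uint8)
artery_mask = np.zeros((4, 4), dtype=bool)
artery_mask[1:3, 1:3] = True
print(overlay_objects(frame, {"artery": artery_mask}).shape)   # (4, 4, 3)
```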
  • the device 100 may acquire an optimal image from an ultrasound image before performing segmentation.
  • the device 100 can acquire the optimal image with higher accuracy.
  • errors may occur during the segmentation process, and if these errors accumulate, inaccuracy may increase in the process of acquiring the optimal image.
  • the device 100 first acquires the optimal image before performing segmentation, and once the optimal image is acquired, higher accuracy can be secured by performing segmentation on the obtained optimal image. If segmentation is performed first, significant resource consumption occurs in the process of performing segmentation on all ultrasound images. However, if segmentation is performed only on the optimal images once acquired, resource consumption can be reduced.
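  • The resource argument above can be made concrete with a toy count of segmentation calls, as in the following sketch; the 100-frame video, the placeholder selection rule, and the stand-in segmentation function are assumptions.

```python
def segment(frame):
    """Stand-in for an expensive per-frame segmentation call."""
    segment.calls += 1
    return {"masks": "..."}          # placeholder result
segment.calls = 0

def select_optimal(frames):
    """Cheap selection step (placeholder): keep frames with all objects of interest present."""
    return [f for f in frames if {"nervous system", "artery", "rib"} <= set(f["objects"])]

frames = [{"t": t, "objects": ["artery"] if t % 10 else ["nervous system", "artery", "rib"]}
          for t in range(100)]

# Order described above: select the optimal images first, then segment only those.
for f in select_optimal(frames):
    segment(f)
print(segment.calls)   # 10 segmentation calls instead of 100
```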
  • FIG. 5 is a diagram illustrating an example in which the device 100 acquires an annotation according to an embodiment.
  • An annotation according to an embodiment can provide a method of accessing an optimal image from an ultrasound image.
  • the first annotation 510 may provide a link to easily access a time section of 16 to 17 seconds, which is the time section corresponding to the optimal image in the ultrasound image.
  • the device 100 may provide a plurality of annotations.
  • the device 100 may provide a first annotation 510, a second annotation 520, and a third annotation 530.
  • the device 100 may display an ultrasound image in a time section corresponding to the annotation to which the selection input was applied. For example, when a selection input is applied to the first annotation 510, the device 100 may display an ultrasound image corresponding to a time point of 16 seconds.
  • the ultrasound image in the time section corresponding to the annotation may be the optimal image.
  • the device 100 may generate and provide an annotation for the time section containing the optimal image for the entire time section of the ultrasound image.
  • For a method of obtaining an optimal image from an ultrasound image, refer to the details described above with reference to FIG. 4.
  • An annotation according to one embodiment may be created based on user input. For example, a new annotation may be created for a time section determined based on user input (for example, 34 to 35 seconds). For example, based on the text entered into the input area 541 and the save request obtained through the save button 542, the device 100 may obtain a new annotation for the time section determined based on the user input (e.g., 34 to 35 seconds).
  • the device 100 may display an editing graphic 521 for editing the start and end points of the optimal image indicated by an annotation (e.g., the second annotation 520) in an area adjacent to the annotation. For example, when a user input is applied to the editing graphic 521, the device 100 may provide a screen for editing the start point and end point of the second annotation 520, and may update the start and end points of the second annotation 520 based on the user input obtained through the provided editing screen.
  • the device 100 may display a removal graphic 522 for removing an annotation (e.g., the second annotation 520) in an area adjacent to the annotation. For example, when a user input is applied to the removal graphic 522, the device 100 may delete the second annotation 520.
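  • A minimal sketch of how the save button (542), editing graphic (521), and removal graphic (522) could manipulate a list of annotations is shown below; the handler names and the in-memory list are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    start_s: float
    end_s: float
    label: str = ""

annotations: List[Annotation] = [Annotation(16, 17, "first"), Annotation(30, 35, "second")]

def on_save(start_s: float, end_s: float, text: str) -> None:
    """Hypothetical handler for the save button (542): create a new annotation."""
    annotations.append(Annotation(start_s, end_s, text))

def on_edit(index: int, new_start: float, new_end: float) -> None:
    """Hypothetical handler for the editing graphic (521): update start/end points."""
    annotations[index].start_s, annotations[index].end_s = new_start, new_end

def on_remove(index: int) -> None:
    """Hypothetical handler for the removal graphic (522): delete the annotation."""
    del annotations[index]

on_save(34, 35, "new annotation")   # create, e.g., the 34-35 s annotation
on_edit(1, 29, 36)                  # adjust the second annotation's boundaries
on_remove(0)                        # remove the first annotation
print(annotations)
```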
  • FIG. 6 is a diagram illustrating an example in which the device 100 obtains a time table according to an embodiment.
  • the device 100 may provide a time table 600 including information on when an object of interest is monitored.
  • the time table 600 includes information about the point in time at which the object of interest is determined to be in the ultrasound image.
  • the first row 610 may indicate the point in time at which the first object of interest is determined to be in the ultrasound image
  • the second row 620 may indicate the point in time at which the second object of interest is determined to be in the ultrasound image.
  • the third row 630 may indicate a point in time at which it is determined that the third object of interest is in the ultrasound image.
  • the time table 600 may include information about when one or more objects of interest are recognized simultaneously.
  • the device 100 may highlight and display on the time table a point in time when a preset number or more of the objects of interest among the one or more objects of interest are simultaneously recognized. For example, the device 100 may highlight and display the first time interval 641, the second time interval 642, and the third time interval 643, in which the first object of interest, the second object of interest, and the third object of interest are simultaneously monitored.
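  • A sketch of deriving the highlighted points from the time table rows is shown below; the per-object time lists and the preset threshold are assumptions.

```python
from typing import Dict, List

def simultaneous_times(time_table: Dict[str, List[int]], min_objects: int) -> List[int]:
    """Return the time points at which at least `min_objects` objects of interest
    are recognized at the same time; these are the points to highlight."""
    all_times = sorted({t for times in time_table.values() for t in times})
    return [t for t in all_times
            if sum(t in times for times in time_table.values()) >= min_objects]

# Rows of the time table (FIG. 6): when each object is judged to be in the image.
time_table = {
    "first object": [1, 2, 3, 7, 8],     # row 610
    "second object": [2, 3, 4, 7, 8],    # row 620
    "third object": [3, 7, 8, 9],        # row 630
}
print(simultaneous_times(time_table, min_objects=3))   # [3, 7, 8] -> intervals to highlight
```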
  • FIG. 7 is a diagram illustrating an example in which the device 100 according to an embodiment provides optimal images, time tables, and annotations in conjunction with each other.
  • the device 100 may display an ultrasound image including an optimal image in a first area 710, display one or more annotations in a second area 720, and display a time table 600 in a third area 730.
  • the positions of the first area 710, the second area 720, and the third area 730 may be provided in a manner different from that shown in FIG. 7.
  • information provided in the first area 710, the second area 720, and the third area 730 may be interconnected. For example, 34 to 35 seconds, which is the time interval for the new annotation displayed in the second area, may be displayed in the first area 710.
  • the device 100 may provide an optimal image, a time table 600, and one or more annotations in conjunction with each other. For example, when a user applies a selection input to an annotation in the second area 720, the corresponding ultrasound image may be displayed in the first area 710, and the play bar at the corresponding position may be displayed in the third area 730.
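  • A minimal sketch of that linkage, assuming the selection of an annotation simply updates a shared current time that drives the first area 710 and the play bar in the third area 730, is shown below.

```python
class LinkedView:
    """Hypothetical sketch of the linkage between the three areas of FIG. 7."""

    def __init__(self, annotations):
        self.annotations = annotations     # shown in the second area 720
        self.current_time = 0.0            # drives the first area 710 and the play bar

    def on_annotation_selected(self, index: int) -> None:
        """Selecting an annotation shows the matching frame and moves the play bar."""
        start = self.annotations[index][0]
        self.current_time = start
        print(f"area 710: show ultrasound frame at {start} s")
        print(f"area 730: move the play bar on the time table to {start} s")

view = LinkedView(annotations=[(16.0, 17.0), (34.0, 35.0)])
view.on_annotation_selected(1)   # e.g., jump to the 34-35 s annotation
```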
  • Various embodiments of the present disclosure may be implemented as software including one or more instructions stored in a storage medium (e.g., a memory) that can be read by a machine (e.g., a display device or a computer).
  • For example, the processor of the device (e.g., the processor 220) may call at least one of the one or more stored instructions from the storage medium and execute it.
  • the one or more instructions may include code generated by a compiler or code that can be executed by an interpreter.
  • a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
  • 'non-transitory' only means that the storage medium is a tangible device and does not contain signals (e.g., electromagnetic waves); this term does not distinguish between cases where data is stored semi-permanently in the storage medium and cases where data is stored temporarily.
  • methods according to various embodiments disclosed in the present disclosure may be included and provided in a computer program product.
  • Computer program products are commodities and can be traded between sellers and buyers.
  • the computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or may be distributed (e.g., downloaded or uploaded) online through an application store (e.g., Play Store™) or directly between two user devices (e.g., smartphones).
  • a portion of the computer program product may be at least temporarily stored or temporarily created in a machine-readable storage medium, such as the memory of a manufacturer's server, an application store's server, or a relay server.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Epidemiology (AREA)
  • General Physics & Mathematics (AREA)
  • Primary Health Care (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The present disclosure relates to an ultrasound image processing method, and to a device and a recording medium for the method, the method comprising the steps of: acquiring an ultrasound image including one or more objects of interest; obtaining, from the ultrasound image, an optimal image for the one or more objects of interest; obtaining an annotation used to access the optimal image; obtaining a time table representing a monitoring situation for the one or more objects of interest; and providing the annotation and the time table that operates in conjunction with the annotation.
PCT/KR2023/011445 2022-10-28 2023-08-03 Procédé et dispositif de traitement d'image ultrasonore WO2024090746A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0141473 2022-10-28
KR1020220141473A KR20240065446A (ko) 2022-10-28 2022-10-28 초음파 영상 처리 방법 및 디바이스

Publications (1)

Publication Number Publication Date
WO2024090746A1 true WO2024090746A1 (fr) 2024-05-02

Family

ID=90831233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/011445 WO2024090746A1 (fr) 2022-10-28 2023-08-03 Procédé et dispositif de traitement d'image ultrasonore

Country Status (2)

Country Link
KR (1) KR20240065446A (fr)
WO (1) WO2024090746A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060003050A (ko) * 2003-04-25 2006-01-09 올림푸스 가부시키가이샤 화상 표시 장치, 화상 표시 방법 및 화상 표시 프로그램
US20150057544A1 (en) * 2013-08-21 2015-02-26 Konica Minolta, Inc. Ultrasound diagnostic apparatus, ultrasound image processing method, and non-transitory computer readable recording medium
KR20180126167A (ko) * 2017-05-17 2018-11-27 엘지전자 주식회사 이동 단말기
KR20190124002A (ko) * 2018-04-25 2019-11-04 삼성전자주식회사 의료 영상 장치 및 의료 영상을 처리하는 방법
KR20220031599A (ko) * 2015-02-10 2022-03-11 한화테크윈 주식회사 요약 영상 브라우징 시스템 및 방법

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102258800B1 (ko) 2014-05-15 2021-05-31 삼성메디슨 주식회사 초음파 진단장치 및 그에 따른 초음파 진단 방법


Also Published As

Publication number Publication date
KR20240065446A (ko) 2024-05-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23882853

Country of ref document: EP

Kind code of ref document: A1