CN111383236A - Method, apparatus and computer-readable storage medium for labeling regions of interest - Google Patents


Info

Publication number
CN111383236A
CN111383236A (application CN202010329836.1A)
Authority
CN
China
Prior art keywords
frame image
curve
frame
image
translation amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010329836.1A
Other languages
Chinese (zh)
Other versions
CN111383236B (en)
Inventor
何昆仑
郭华源
刘敏超
杨菲菲
邓玉娇
李宗任
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese PLA General Hospital
Original Assignee
Chinese PLA General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese PLA General Hospital filed Critical Chinese PLA General Hospital
Priority to CN202010329836.1A priority Critical patent/CN111383236B/en
Publication of CN111383236A publication Critical patent/CN111383236A/en
Application granted granted Critical
Publication of CN111383236B publication Critical patent/CN111383236B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The invention discloses a method for labeling a region of interest in an ultrasound multi-frame image, comprising: obtaining an ultrasound multi-frame image, wherein the ultrasound multi-frame image comprises a first frame image, a second frame image and a third frame image; manually generating a first labeling curve for the region of interest in the first frame image; generating a second labeling curve in the second frame image based on the translation amount of the second frame image relative to the first frame image and the first labeling curve; and, in the third frame image, if the translation amount of the third frame image relative to the second frame image is greater than a predetermined threshold, generating a third labeling curve based on the translation amount of the third frame image relative to the second frame image, the translation amount of the third frame image relative to the first frame image, and the second labeling curve. The invention also provides an apparatus and a computer-readable storage medium.

Description

Method, apparatus and computer-readable storage medium for labeling regions of interest
Technical Field
The invention relates to the field of ultrasound image processing, and in particular to a method, an apparatus and a computer-readable storage medium for labeling a region of interest (ROI) in an ultrasound multi-frame image.
Background
In recent years, with the rapid development of artificial intelligence, big data and cloud computing, medical-image artificial intelligence systems have been rapidly popularized and applied. However, the mainstream supervised (and semi-supervised) learning mechanisms require a sufficient number of labeled images as training samples for deep learning. Ultrasound examination is convenient and fast, has strong imaging capability and a wide range of clinical applications, and is therefore an important technology for frontline triage and detection of acute and critical trauma. Developing a cardiac ultrasound image artificial intelligence system makes full use of precious treatment time, can further improve the treatment capacity for cardiovascular trauma, and can reduce casualty rates.
Disclosure of Invention
Compared with CT and MRI images, ultrasound images have lower resolution, more noise interference and less distinct lesion outlines, so accurately labeling a region of interest in an ultrasound image is more difficult. Existing medical-image labeling tools have many shortcomings in function design, interface layout and user interaction when processing ultrasound images. For example, organ tissues and lesion outlines in ultrasound examination images often appear as irregular shapes, so currently common image-labeling software cannot achieve high labeling precision.
In clinical applications, an ultrasound examination usually lasts a long time, and the image data are often stored in a DICOM file in multi-frame format. In adjacent frames, the examined region is relatively fixed, so the image data contain considerable redundant information. Fully mining the redundancy between consecutive frames to improve the ROI labeling workflow and reduce the workload of manually delineating ROI regions in (cardiac) ultrasound images is therefore a feasible way to improve labeling automation and accuracy.
In view of the above deficiencies in the prior art, it is desirable to provide a method, an apparatus and a computer-readable storage medium for labeling a region of interest in an ultrasound multi-frame image that address at least one of the above technical problems.
In a first aspect of the present invention, there is provided a method for marking a region of interest in an ultrasound multi-frame image, comprising:
obtaining an ultrasonic multi-frame image, wherein the ultrasonic multi-frame image comprises a first frame image, a second frame image and a third frame image;
manually generating a first labeling curve for a region of interest in the first frame image;
generating a second labeling curve in the second frame image based on the translation amount of the second frame image relative to the first frame image and the first labeling curve,
in the third frame image,
if the translation amount of the third frame image relative to the second frame image is less than or equal to a predetermined threshold, generating a third labeling curve based on the translation amount of the third frame image relative to the second frame image and the second labeling curve,
if the translation amount of the third frame image relative to the second frame image is greater than the predetermined threshold, generating a third labeling curve based on the translation amount of the third frame image relative to the second frame image, the translation amount of the third frame image relative to the first frame image, and the second labeling curve.
Preferably, the step of manually generating the first labeling curve comprises: in the first frame image, manually selecting a plurality of labeling points for the region of interest, and generating the first labeling curve by calculation from the plurality of labeling points.
Preferably, the first labeling curve comprises a closed curve, and the step of generating the first labeling curve by calculation from the plurality of labeling points comprises generating a cubic non-uniform B-spline closed curve by calculation from the plurality of labeling points.
Preferably, the step of obtaining the translation amount of the second frame image relative to the first frame image comprises performing a first screening of matching points between the second frame image and the first frame image using the first labeling curve.
Preferably, the step of obtaining the translation amount of the second frame image relative to the first frame image further comprises performing a second screening of the first-screened matching points using a slope-voting method.
Preferably, the ultrasound multi-frame image further includes a fourth frame image,
the method further comprises the following steps:
in the fourth frame image,
if the translation amount of the fourth frame image relative to the third frame image is less than or equal to the predetermined threshold, generating a fourth labeling curve based on the translation amount of the fourth frame image relative to the third frame image and the third labeling curve,
if the translation amount of the fourth frame image relative to the third frame image is greater than the predetermined threshold, generating a fourth labeling curve based on the translation amount of the fourth frame image relative to the third frame image, the translation amount of the fourth frame image relative to the second frame image, and the third labeling curve.
Preferably, according to the acquisition time of each frame image, the first frame image is earlier than the second frame image, the second frame image is earlier than the third frame image, and the third frame image is earlier than the fourth frame image.
Preferably, according to the acquisition time of each frame image,
the first frame image and the second frame image are two adjacent frame images, and/or
The second frame image and the third frame image are two adjacent frame images, and/or
The third frame image and the fourth frame image are two adjacent frame images.
In a second aspect of the invention, there is provided an apparatus for marking a region of interest in an ultrasound multi-frame image, comprising a processor and a memory, the memory storing instructions executable by the processor to cause the processor to perform the above method.
In a third aspect of the invention, a computer-readable storage medium is provided, storing a computer program for implementing the above method.
The beneficial technical effects of the invention include at least one of the following: for the requirement of delineating ROI regions in ultrasound multi-frame images (for example, of the heart), the redundancy between consecutive frames of image data is fully exploited; the user only needs to manually label the ROI boundary in one frame, and the ROI labels of the other frames are generated by an adaptive computation. This replaces the original frame-by-frame manual labeling of ultrasound multi-frame images with a new mode of "manually label the first frame, automatically label the subsequent frames". The invention yields accurate labeling results with little noise interference and little manual operation.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a simplified flow diagram of an annotation process according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of a labeling method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and that the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the invention unless it is specifically stated otherwise. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
As shown in fig. 1, a method for adaptively labeling a region of interest in an ultrasound multi-frame image is provided in a modular manner. The method comprises: first, reading an ultrasound file to obtain the multi-frame image; manually drawing a labeling curve around (or partially around) the region of interest on the first (or a selected) frame of the ultrasound image, i.e. the first labeling curve; then adaptively labeling the second frame image, i.e. performing image registration and feature-point screening between the second and first frame images, computing the translation amount of the second frame relative to the first (horizontal translation Th and vertical translation Tv), and computing the second labeling curve on the second frame from the coordinate values of the first curve and the translations Th and Tv; from the third frame image onward, if the translation relative to the previous frame is large, the labeling curve of the current frame is computed by considering not only the translation relative to the previous frame but also the translation relative to the frame before that; and so on, labeling the subsequent frames one by one from the already-labeled frames; finally, outputting the labeling results for storage or further use. The invention offers a high degree of automation and high labeling precision, and can effectively improve the efficiency and consistency of ultrasound image labeling.
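The frame-by-frame flow described above can be sketched in a few lines of Python. This is a hedged illustration, not the patent's implementation: the function and parameter names (`label_frames`, `translate_curve`, `threshold`) are invented for the sketch, the per-frame translations are assumed to be precomputed (modules S105-S106), and the blend weight is fixed rather than optimized as in module S108.

```python
def translate_curve(curve, th, tv):
    """Shift every (x, y) point of a labeling curve by (th, tv)."""
    return [(x + th, y + tv) for (x, y) in curve]

def label_frames(translations, first_curve, threshold=5.0):
    """translations[i] describes frame i+2 as (th1, tv1, th2, tv2):
    translation vs. the previous frame and vs. the frame before that
    (th2/tv2 are None for the second frame, which has no frame i-2)."""
    curves = [first_curve]
    for th1, tv1, th2, tv2 in translations:
        if th2 is None or abs(th1) + abs(tv1) <= threshold:
            # small motion (or second frame): two-frame registration,
            # simply shift the previous frame's labeling curve
            curves.append(translate_curve(curves[-1], th1, tv1))
        else:
            # large motion: blend translations vs. frames i-1 and i-2;
            # the patent optimizes this weight by grid search (module S108)
            lam = 0.5
            th = lam * th1 + (1 - lam) * th2
            tv = lam * tv1 + (1 - lam) * tv2
            curves.append(translate_curve(curves[-1], th, tv))
    return curves
```

A toy run with three frames: the second frame shifts by (1, 2) and is below the threshold; the third shifts by (8, 0) relative to the second, so its curve is placed using the blended translation.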
The technical framework flow of an exemplary embodiment of the present invention, which illustrates details of the present invention with respect to cardiac ultrasound image labeling, is described in detail below with reference to fig. 2.
As shown in fig. 2, the framework mainly comprises the following modules or steps: image-file reading and frame-by-frame display; manual ROI labeling of the first frame image (including selecting ROI boundary points and interpolating them to generate a B-spline closed curve); adaptive ROI labeling of the subsequent frames (including registering adjacent frames, screening matched feature points, computing the translation amount of the current frame, and computing the B-spline closed curve of the ROI in the current frame); and output of the current frame's labeling result.
First, the image-file reading and frame-by-frame display module (module S101) reads a cardiac ultrasound multi-frame image file and then displays the 1st, 2nd, 3rd, … frame images in order of frame acquisition time; after the previous frame is displayed, the next frame is displayed according to a trigger condition.
Manual ROI labeling of the first frame image mainly comprises: a module for selecting ROI boundary points (module S102); a module that interpolates boundary points P_{j-1} and P_j (j > 0) to compute and generate the cubic non-uniform B-spline curve segment P_{j-1}P_j (module S103); and a module that interpolates boundary points P_{m-1} and P_0 to compute and generate the cubic non-uniform B-spline closed curve Q_1 (module S104).
Adaptive ROI labeling of the subsequent frames mainly comprises: a module for registering two adjacent frame images and screening matched feature points (module S105); a module for computing the translation amount of the current frame relative to the previous frame (module S106); a module for judging the translation amount and, when necessary, computing the translation amount of the current frame relative to the frame before the previous one (module S107); a module for computing the optimal translation amount (module S108); and a module for computing the labeling curve of the current frame (module S109).
The labeling-result output module (module S110) sequentially outputs and stores the coordinate values of the current frame's ROI labeling curve.
The actual working content of each module is detailed below by way of example.
Module S101: first, a cardiac ultrasound multi-frame image file (e.g. in DICOM format) is read, and the i-th frame image I_i (i = 1, 2, …, n) is displayed in sequence. The cardiac ultrasound multi-frame image file may be acquired by medical personnel using an ultrasound device.
Module S102: for the first frame image, the user is allowed to manually select boundary points P_j (j = 0, 1, …, m) for the ROI region, i.e. the labeling points. In other embodiments of the invention, the first frame image need not be the earliest frame in acquisition order; it may be any one of the ultrasound multi-frame images, for example the frame in which the boundary of the region of interest is clearest.
Module S103: the boundary points P_{j-1} and P_j (j > 0) selected by the user on the first frame image are used as data points for cubic non-uniform B-spline open-curve interpolation, yielding the relevant control-point vector and knot vector; the curve segment P_{j-1}P_j is then drawn and generated according to the de Boor algorithm.
Module S104: after the user finishes selecting points, the boundary points P_{m-1} and P_0 on the first frame image are used as data points for cubic non-uniform B-spline closed-curve interpolation, yielding the relevant control-point vector and knot vector; the control-point vector and knot vector are adjusted according to the generation rule for periodic B-spline curves, and the closed curve Q_1 is finally drawn and generated according to the de Boor algorithm.
It should be noted that fitting the labeling points with a cubic non-uniform B-spline closed curve here conforms better to the boundary of the region of interest (e.g. an organ or lesion), yielding a more accurate labeling curve. In other embodiments of the invention, the labeling points may be fitted with other curves, such as linear, quadratic or higher-order curves, and the labeling curve may be an open curve or a closed curve. In other embodiments, the labeling curve may also be drawn manually and directly on the first frame image.
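As a rough illustration of generating a closed curve from labeling points, the sketch below evaluates a *uniform* cubic B-spline that cycles through the points. This is a simplification of the patent's cubic *non-uniform interpolating* construction: it approximates rather than interpolates the points, and the function name and sampling parameter are invented for the sketch.

```python
def closed_cubic_bspline(points, samples_per_seg=8):
    """Sample a closed uniform cubic B-spline over the given control points.
    Each consecutive window of 4 points (cyclic) defines one curve segment,
    evaluated with the standard uniform cubic B-spline basis functions."""
    n = len(points)
    out = []
    for j in range(n):
        p0, p1, p2, p3 = (points[(j + k) % n] for k in range(4))
        for s in range(samples_per_seg):
            t = s / samples_per_seg
            # uniform cubic B-spline basis (matrix form of de Boor's recursion)
            b0 = (1 - t) ** 3 / 6.0
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
            b3 = t ** 3 / 6.0
            x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
            y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
            out.append((x, y))
    return out
```

Because the control-point window wraps around cyclically, the sampled curve closes on itself with no extra bookkeeping; a production implementation (e.g. SciPy's `splprep` with `per=1`) would instead interpolate the points with a non-uniform knot vector, as the patent describes.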
Module S105: for the second and subsequent frame images, a feature-matching algorithm such as SIFT (scale-invariant feature transform) is used to compute the set of matching points S(r_{(i-1,k)}, r_{(i,k)}) (k = 0, 1, …, l) between the previous frame image I_{i-1} and the current frame image I_i, where r_{(i-1,k)} denotes the point of the k-th matching pair (r_{(i-1,k)}, r_{(i,k)}) lying on the previous frame image I_{i-1}, and r_{(i,k)} denotes the point of that pair lying on the current frame image I_i. Next, all matching points on image I_{i-1} are examined, and the matching pairs S_D(r_{(i-1,k)}, r_{(i,k)}) whose point r_{(i-1,k)} lies outside the ROI region enclosed by the closed curve Q_{i-1} are eliminated, finally yielding the valid matching points S_Q(r_{(i-1,k)}, r_{(i,k)}) between I_{i-1} and I_i; obviously S_Q(r_{(i-1,k)}, r_{(i,k)}) = S(r_{(i-1,k)}, r_{(i,k)}) − S_D(r_{(i-1,k)}, r_{(i,k)}). In other embodiments of the invention, if the labeling curve is an open curve, the screening is performed, for example, by rejecting matching points on one side of the open curve.
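The ROI screening step of module S105 can be illustrated as follows. The feature matching itself would come from a library matcher such as OpenCV's SIFT; the sketch below assumes the matches are already available and shows only the point-in-region test against the previous frame's closed curve, approximated as a polygon. The function names are invented for the sketch.

```python
def point_in_closed_curve(pt, polygon):
    """Ray-casting test: is pt inside the polygon approximating curve Q_{i-1}?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for a in range(n):
        x1, y1 = polygon[a]
        x2, y2 = polygon[(a + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def screen_matches(matches, roi_polygon):
    """Keep only matching pairs whose point on frame I_{i-1} lies inside the
    ROI region -- the patent's S_Q = S - S_D screening of module S105."""
    return [(p_prev, p_cur) for (p_prev, p_cur) in matches
            if point_in_closed_curve(p_prev, roi_polygon)]
```

For example, with a square ROI, a match anchored inside the square survives the screening while one anchored outside is discarded.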
It should be noted that, because acquiring a cardiac or similar ultrasound image requires moving the ultrasound probe over the surface of the skin, the movement usually causes an angular offset of the probe; the above screening step can effectively reduce the error introduced by this angular offset and can also reduce the influence of ultrasound image noise.
In other embodiments of the invention, the second frame image may be earlier or later than the first frame image in acquisition order; the second frame image may be adjacent to the first frame image, in expectation of maximal redundant data between the two frames, or it may be non-adjacent, enabling a more flexible and accurate choice of the image sequence.
Module S106: the previous frame image I_{i-1} and the current frame image I_i are placed horizontally adjacent, and each pair of valid matching points S_Q(r_{(i-1,k)}, r_{(i,k)}) is connected by a straight line L(r_{(i-1,k)}, r_{(i,k)}) whose slope is computed. The slopes of the connecting lines of all valid matching pairs are then tallied, and the slope value with the most votes is selected as the dominant slope. Finally, for all valid matching pairs whose connecting-line slope equals (or approximately equals) the dominant slope, the average offsets of their coordinate values in the horizontal and vertical directions are computed and taken as the horizontal translation T_{h,i-1} and the vertical translation T_{v,i-1} between images I_{i-1} and I_i.
It should be noted that the slope-voting method performs a second screening of the already-screened matching points, which helps reduce the influence of ultrasound image noise.
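The slope-voting idea of module S106 can be sketched as below. This is an assumed reading of the patent's description, not its implementation: slopes of the side-by-side connecting lines are binned and counted, and only matches in the winning bin contribute to the averaged translation. The function name, binning scheme and bin width are invented for the sketch.

```python
from collections import Counter

def estimate_translation(matches, image_width, slope_bin=0.05):
    """Slope-voting translation estimate (module S106 sketch).
    matches: valid pairs ((x_prev, y_prev), (x_cur, y_cur)), each point in its
    own image's coordinates; the two frames are imagined side by side, so the
    current frame's x is offset by image_width when the line is drawn."""
    votes = Counter()
    keys = []
    for (xp, yp), (xc, yc) in matches:
        dx = (xc + image_width) - xp       # horizontal run across both images
        slope = (yc - yp) / dx
        key = round(slope / slope_bin)     # bin nearby slopes together
        votes[key] += 1
        keys.append(key)
    dominant = votes.most_common(1)[0][0]  # slope bin with the most votes
    keep = [m for m, k in zip(matches, keys) if k == dominant]
    # average per-axis offsets of the surviving matches
    th = sum(xc - xp for (xp, _), (xc, _) in keep) / len(keep)
    tv = sum(yc - yp for (_, yp), (_, yc) in keep) / len(keep)
    return th, tv
```

With three consistent matches translated by (2, 1) and one outlier, the outlier's connecting line falls in a different slope bin and is excluded from the average.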
Module S107: the translation amount is then judged. If it is greater than a predetermined threshold and the current frame is the third frame image or a later one (i > 2), the translation is computed by registering three consecutive frames: in addition to the translations T_{h,i-1} and T_{v,i-1} of the current frame image I_i relative to the previous frame image I_{i-1}, the horizontal and vertical translations T_{h,i-2} and T_{v,i-2} of I_i relative to the frame before the previous one, I_{i-2}, are also computed.
It should be noted that in other embodiments of the invention, the translation used for the judgment may be the horizontal translation, the vertical translation, or the absolute translation (the square root of the sum of their squares). Judging the translation amount can effectively detect defects introduced while acquiring the multi-frame images, so that subsequent operations can correct them.
Likewise, in other embodiments of the invention, the third frame image may be earlier or later than the first and/or second frame images in acquisition order; the third frame image may or may not be adjacent to the second frame image.
Of course, in module S107, if the translation computed in module S106 is less than or equal to the predetermined threshold, or the current frame is the second frame image, the process jumps directly to module S109.
Module S108: after obtaining the translations T_{h,i-1}, T_{v,i-1} and T_{h,i-2}, T_{v,i-2}, a weighting coefficient λ (0 ≤ λ ≤ 1) is introduced to obtain the weighted translations in the horizontal and vertical directions:

T_h′ = λ·T_{h,i-1} + (1 − λ)·T_{h,i-2} and T_v′ = λ·T_{v,i-1} + (1 − λ)·T_{v,i-2}.

The optimal value λ* of the weighting coefficient, and the corresponding optimal weighted translations T_h* and T_v*, are then found by evaluating an objective function (formula 1-1) over λ with a step value of, for example, 0.1. [The image carrying formula 1-1 did not survive extraction; from the surrounding definitions, it scores candidate translations against the ROI pixel values of adjacent frames.] Here P_i(x_k, y_l) denotes the pixel value of the pixel in row x_k and column y_l of the i-th frame image, and P_{i,ROI} denotes the set of pixels of the ROI region of the i-th frame image.
Module S109: let the point q(q_x, q_y) be any point on the closed curve Q_{i-1} of frame image I_{i-1}, and let q′(q′_x, q′_y) be its corresponding point on the closed curve Q_i of frame image I_i. Then the coordinates of q′ satisfy

q′_x = q_x + T_{h,i-1} and q′_y = q_y + T_{v,i-1} (two-frame registration)

or

q′_x = q_x + T_h* and q′_y = q_y + T_v* (three-frame registration).

The closed curve Q_i is the labeling curve of the ROI of frame image I_i.
Module S110: the coordinate values of all boundary points of the closed curve Q_i are output and stored in sequence.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
The computer-readable program instructions described herein may be downloaded to various computing/processing devices from a computer-readable storage medium.
The computer program instructions for carrying out operations of the present invention may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for labeling a region of interest in an ultrasound multiframe image, comprising:
obtaining an ultrasonic multi-frame image, wherein the ultrasonic multi-frame image comprises a first frame image, a second frame image and a third frame image;
manually generating a first labeling curve for a region of interest in the first frame image;
generating a second labeling curve in the second frame image based on the translation amount of the second frame image relative to the first frame image and the first labeling curve,
in the third frame image,
if the translation amount of the third frame image relative to the second frame image is less than or equal to a predetermined threshold, generating a third labeling curve based on the translation amount of the third frame image relative to the second frame image and the second labeling curve,
if the translation amount of the third frame image relative to the second frame image is greater than the predetermined threshold, generating a third labeling curve based on the translation amount of the third frame image relative to the second frame image, the translation amount of the third frame image relative to the first frame image, and the second labeling curve.
2. The method of claim 1, wherein the step of manually generating the first labeling curve comprises: in the first frame image, manually selecting a plurality of labeling points for the region of interest, and generating the first labeling curve by calculation from the plurality of labeling points.
3. The method of claim 2, wherein the first labeling curve comprises a closed curve, and the step of generating the first labeling curve by calculation from the plurality of labeling points comprises generating a cubic non-uniform B-spline closed curve by calculation from the plurality of labeling points.
4. The method of any of claims 1 to 3, wherein obtaining the translation amount of the second frame image relative to the first frame image comprises a first screening of matching points between the second frame image and the first frame image using the first annotation curve.
5. The method of claim 4, wherein obtaining the translation amount of the second frame image relative to the first frame image further comprises a second screening of the first-screened matching points using slope voting.
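Claims 4 and 5 describe a two-stage screening of feature matches: first keep only matches near the annotation curve, then apply "slope voting", i.e. keep matches whose displacement direction agrees with the majority. One plausible form of such a vote (the angular binning scheme and bin count are assumptions, not taken from the patent):

```python
import numpy as np

def slope_vote(src_pts, dst_pts, n_bins=36):
    """Keep matched point pairs whose displacement direction wins a majority vote.

    src_pts, dst_pts: (N, 2) matched point coordinates in the two frames.
    Returns the filtered (src, dst) arrays.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    d = dst - src
    # Quantise each match's displacement direction into angular bins.
    ang = np.arctan2(d[:, 1], d[:, 0])                       # range (-pi, pi]
    bins = np.floor((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    # The most populated bin wins; everything else is treated as an outlier.
    winner = np.bincount(bins, minlength=n_bins).argmax()
    keep = bins == winner
    return src[keep], dst[keep]
```

The surviving matches then share a consistent direction, so averaging their displacements yields a robust per-frame translation estimate.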
6. The method of any of claims 1-3, wherein the ultrasound multi-frame image further comprises a fourth frame image,
the method further comprises the following steps:
in the fourth frame image,
if the translation amount of the fourth frame image relative to the third frame image is less than or equal to the predetermined threshold, generating a fourth annotation curve based on the translation amount of the fourth frame image relative to the third frame image and the third annotation curve, and
if the translation amount of the fourth frame image relative to the third frame image is greater than the predetermined threshold, generating the fourth annotation curve based on the translation amount of the fourth frame image relative to the third frame image, the translation amount of the fourth frame image relative to the second frame image, and the third annotation curve.
7. The method of claim 6, wherein the first frame image is earlier than the second frame image, the second frame image is earlier than the third frame image, and the third frame image is earlier than the fourth frame image according to an acquisition time of each frame image.
8. The method of claim 6, wherein, based on the acquisition time of each frame of image,
the first frame image and the second frame image are two adjacent frame images,
and/or
The second frame image and the third frame image are two adjacent frame images,
and/or
The third frame image and the fourth frame image are two adjacent frame images.
9. An apparatus for labeling a region of interest in an ultrasound multi-frame image, comprising a processor and a memory, characterized in that the memory stores instructions executable by the processor to cause the processor to perform the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored, characterized in that the computer program is adapted to implement the method according to any of claims 1 to 8.
CN202010329836.1A 2020-04-24 2020-04-24 Method, apparatus and computer-readable storage medium for labeling regions of interest Active CN111383236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010329836.1A CN111383236B (en) 2020-04-24 2020-04-24 Method, apparatus and computer-readable storage medium for labeling regions of interest


Publications (2)

Publication Number Publication Date
CN111383236A true CN111383236A (en) 2020-07-07
CN111383236B CN111383236B (en) 2021-04-02

Family

ID=71219095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010329836.1A Active CN111383236B (en) 2020-04-24 2020-04-24 Method, apparatus and computer-readable storage medium for labeling regions of interest

Country Status (1)

Country Link
CN (1) CN111383236B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040151355A1 (en) * 2003-01-31 2004-08-05 Riken Method of extraction of region of interest, image processing apparatus, and computer product
US20090010511A1 (en) * 2003-07-25 2009-01-08 Gardner Edward A Region of interest methods and systems for ultrasound imaging
CN105160294A (en) * 2015-07-09 2015-12-16 山东大学 Automatic real-time MCE sequence image myocardial tissue region-of-interest tracking method
CN106923864A (en) * 2015-11-03 2017-07-07 东芝医疗系统株式会社 Diagnostic ultrasound equipment, image processing apparatus and image processing program
CN108053424A (en) * 2017-12-15 2018-05-18 深圳云天励飞技术有限公司 Method for tracking target, device, electronic equipment and storage medium
CN108665456A (en) * 2018-05-15 2018-10-16 广州尚医网信息技术有限公司 The method and system that breast ultrasound focal area based on artificial intelligence marks in real time
CN109685060A (en) * 2018-11-09 2019-04-26 科大讯飞股份有限公司 Image processing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Zhaoxue et al.: "Research on a motion tracking algorithm for regions of interest in CT liver perfusion image sequences", Computer Applications and Software (《计算机应用与软件》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114117666A (en) * 2021-11-16 2022-03-01 吉林大学 Method for modeling blades of hydraulic torque converter
CN114117666B (en) * 2021-11-16 2024-04-19 吉林大学 Method for modeling blade of hydraulic torque converter
CN115861603A (en) * 2022-12-29 2023-03-28 宁波星巡智能科技有限公司 Interest region locking method, device, equipment and storage medium
CN115861603B (en) * 2022-12-29 2023-09-26 宁波星巡智能科技有限公司 Method, device, equipment and medium for locking region of interest in infant care scene

Also Published As

Publication number Publication date
CN111383236B (en) 2021-04-02

Similar Documents

Publication Publication Date Title
USRE35798E (en) Three-dimensional image processing apparatus
JP5643304B2 (en) Computer-aided lung nodule detection system and method and chest image segmentation system and method in chest tomosynthesis imaging
US9547894B2 (en) Apparatus for, and method of, processing volumetric medical image data
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
KR20210002606A (en) Medical image processing method and apparatus, electronic device and storage medium
US11450003B2 (en) Medical imaging apparatus, image processing apparatus, and image processing method
US9519993B2 (en) Medical image processing apparatus
CN112885453A (en) Method and system for identifying pathological changes in subsequent medical images
CN111383236B (en) Method, apparatus and computer-readable storage medium for labeling regions of interest
CN109754396A (en) Method for registering, device, computer equipment and the storage medium of image
CN105303550A (en) Image processing apparatus and image processing method
CN115830016B (en) Medical image registration model training method and equipment
US20190392552A1 (en) Spine image registration method
CN113610752A (en) Mammary gland image registration method, computer device and storage medium
JP5121399B2 (en) Image display device
CN111223158B (en) Artifact correction method for heart coronary image and readable storage medium
Eulzer et al. Temporal views of flattened mitral valve geometries
CN109919953B (en) Method, system and apparatus for carotid intima-media thickness measurement
US9147250B2 (en) System and method for automatic magnetic resonance volume composition and normalization
CN116524158A (en) Interventional navigation method, device, equipment and medium based on image registration
CN114596275A (en) Pulmonary vessel segmentation method, device, storage medium and electronic equipment
CN114565623A (en) Pulmonary vessel segmentation method, device, storage medium and electronic equipment
JP4571378B2 (en) Image processing method, apparatus, and program
Lehmann et al. Integrating viability information into a cardiac model for interventional guidance
CN111000580A (en) Intervertebral disc scanning method and device, console equipment and CT system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant