CN113744288B - Method, apparatus, and medium for generating annotated sample images - Google Patents


Info

Publication number
CN113744288B
CN113744288B (application CN202111296863.4A)
Authority
CN
China
Prior art keywords
closed-loop region
slice image
three-dimensional model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111296863.4A
Other languages
Chinese (zh)
Other versions
CN113744288A
Inventor
张世坤
赵国涛
Current Assignee
Beijing Ouying Information Technology Co ltd
Original Assignee
Beijing Ouying Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ouying Information Technology Co., Ltd.
Priority to CN202111296863.4A
Publication of CN113744288A
Application granted
Publication of CN113744288B

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
        • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 30/00 ICT specially adapted for the handling or processing of medical images
                    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

Embodiments of the present disclosure relate to a method, apparatus, and medium for generating an annotated sample image. According to the method, a three-dimensional model created for a predetermined object is normalized so that its size is in a predetermined proportion to the size of a reference three-dimensional model; the normalized three-dimensional model is cut to obtain a plurality of slice images; a plurality of closed-loop region portions are extracted from each slice image; density data associated with each closed-loop region portion is determined for the slice image based on the reference three-dimensional model, so that annotation data for annotating the closed-loop region portion can be determined from the density data; and each closed-loop region portion included in the slice image is filled with the corresponding annotation data to obtain an annotated sample image. In this way, the efficiency of generating annotated sample images can be improved, and the number of annotated sample images can be increased.

Description

Method, apparatus, and medium for generating annotated sample images
Technical Field
Embodiments of the present disclosure relate generally to the field of image processing, and in particular, to methods, apparatuses, and media for generating annotated sample images.
Background
With the development of artificial intelligence technology, it is often necessary to annotate various portions of an image (especially a medical image in the medical field, such as a CT picture), possibly in three dimensions, with labels or colors in order to distinguish those portions. Such annotated images are often used as sample images to train various network models, for example to help relevant staff (e.g., medical staff) complete their work more efficiently.
Currently, images are usually annotated manually; however, manual annotation is laborious and time-consuming. This is especially true for medical images, where many regions must be marked on a single picture, so annotating even one picture takes a lot of time. When a network model (e.g., a deep learning model) needs to be trained to a certain accuracy on a large number of sample images, this inefficient annotation method further reduces the training efficiency of the network model and thereby greatly increases the required cost.
Therefore, there is a need for a technique for automatically annotating images (particularly medical images) that improves the efficiency of generating annotated sample images and allows a large number of annotated sample images to be generated from fewer original images (e.g., original CT images), thereby reducing the associated annotation cost.
Disclosure of Invention
In view of the above problems, the present disclosure provides a method, apparatus, and medium for generating an annotated sample image, so that not only the efficiency of generating an annotated sample image can be improved, but also the number of annotated sample images can be increased, thereby greatly reducing the associated annotation cost.
According to a first aspect of the present disclosure, there is provided a method for generating an annotated sample image, comprising: standardizing a three-dimensional model created for a predetermined object such that a size of the standardized three-dimensional model is in a predetermined proportion to a size of a reference three-dimensional model; cutting the three-dimensional model subjected to the standardization processing to obtain a plurality of slice images; extracting a plurality of closed-loop region portions in the slice image; determining density data associated with each closed-loop region portion for the slice image based on the reference three-dimensional model to determine annotation data for annotating the closed-loop region portions based on the density data; and filling corresponding labeling data in each closed-loop area part included in the slice image to obtain a labeled sample image.
According to a second aspect of the present disclosure, there is provided a computing device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the disclosure.
In a third aspect of the present disclosure, a non-transitory computer readable storage medium is provided having stored thereon computer instructions for causing a computer to perform the method of the first aspect of the present disclosure.
In some embodiments, determining, for the slice image, density data associated with each closed-loop region portion based on the reference three-dimensional model comprises: determining a first number of closed loop region portions comprised by the slice image; determining a cutting position of the slice image; determining a second number of closed-loop region portions comprised by reference slice images of the reference three-dimensional model at the cutting position, wherein the closed-loop region portions comprised by each reference slice image of the reference three-dimensional model are predetermined and density data of the closed-loop region portions comprised by each reference slice image are also predetermined; determining whether the first number is equal to the second number; and in response to determining that the first number is equal to the second number, assigning density data associated with the closed-loop region portions included in the reference slice image to the respective closed-loop region portions included in the slice image based on a relative positional relationship between the closed-loop region portions included in the reference slice image and a relative positional relationship between the closed-loop region portions included in the slice image.
In some embodiments, determining, for the slice image, density data associated with each closed-loop region portion based on the reference three-dimensional model further comprises: in response to determining that the first number is not equal to the second number, determining a third number of closed-loop region portions comprised by a next reference slice image of the reference three-dimensional model at a next cut position; determining whether the first number is equal to the third number; in response to determining that the first number is equal to the third number, assigning density data associated with closed-loop region portions included in the next reference slice image to the respective closed-loop region portions included in the slice image based on a relative positional relationship between the closed-loop region portions included in the next reference slice image and a relative positional relationship between the closed-loop region portions included in the slice image; and in response to determining that the first number is not equal to the third number, marking the slice image as anomalous.
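The count-matching fallback described in the two paragraphs above can be sketched as follows. This is an illustrative Python sketch only; the function name, the centroid-based pairing of regions by relative position, and the data layout are assumptions rather than part of the disclosure:

```python
def match_density(slice_regions, ref_slices, cut_index):
    """Assign density data to a slice's closed-loop regions from a reference slice.

    slice_regions: list of (cx, cy) centroids, one per closed-loop region.
    ref_slices: per cut position, a list of (cx, cy, density) reference regions.
    Tries the reference slice at cut_index; if the region counts differ, tries
    the next cut position; if that also differs, returns None (slice anomalous).
    """
    for idx in (cut_index, cut_index + 1):  # current cut, then the next one
        if idx >= len(ref_slices):
            break
        ref = ref_slices[idx]
        if len(ref) == len(slice_regions):  # first number == second/third number
            # Pair regions by relative position (sorted by centroid coordinates)
            ordered = sorted(range(len(slice_regions)),
                             key=lambda i: slice_regions[i])
            ref_sorted = sorted(ref, key=lambda r: (r[0], r[1]))
            out = [None] * len(slice_regions)
            for rank, i in enumerate(ordered):
                out[i] = ref_sorted[rank][2]
            return out
    return None  # no count match: caller marks the slice as anomalous
```

A usage example: with two regions whose centroids mirror the reference slice's layout, each region receives the density of the reference region occupying the same relative position.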
In some embodiments, the method further comprises: establishing a three-dimensional coordinate system so that an original point of the three-dimensional coordinate system is located at a central point of the bottom of the three-dimensional model, wherein a positive direction of a Y axis of the three-dimensional coordinate system indicates a direction of the three-dimensional model from the bottom to the top, and a positive direction of a Z axis of the three-dimensional coordinate system indicates a direction of the three-dimensional model from the back to the front.
In some embodiments, cutting the normalized three-dimensional model to obtain a plurality of slice images comprises: the normalized three-dimensional model is cut in a negative direction of the Y-axis from the top of the three-dimensional model until the bottom of the three-dimensional model is reached such that each slice image is parallel to a plane defined by the X-axis and Z-axis.
In some embodiments, the method further comprises filling in a portion of each slice image outside the determined portion of the closed-loop region with null values.
In some embodiments, the method also includes generating, based on each annotated sample image, a plurality of new annotated sample images, each new annotated sample image generated by: generating a sliding window of a predetermined size; traversing the annotated sample image using the sliding window, wherein each sliding step of the sliding window randomly arranges the pixels of the annotated sample image that are within the sliding window, thereby generating the new annotated sample image.
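The sliding-window augmentation above can be sketched as follows. This is a hypothetical Python illustration; the window size, the non-overlapping stride, and the list-of-lists image representation are assumptions not fixed by the disclosure:

```python
import random

def augment(image, win=2, seed=0):
    """Generate a new annotated sample image by sliding a win x win window
    over the image and randomly permuting the pixels inside the window at
    each step. The overall multiset of pixel values is preserved."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(0, h - win + 1, win):        # non-overlapping sliding steps
        for x in range(0, w - win + 1, win):
            block = [out[y + dy][x + dx] for dy in range(win) for dx in range(win)]
            rng.shuffle(block)                  # randomly arrange window pixels
            for k, v in enumerate(block):
                out[y + k // win][x + k % win] = v
    return out
```

Varying the random seed yields many distinct augmented images from one annotated sample image, which is how the number of training samples can be multiplied.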
In some embodiments, the method further comprises: determining a minimum bounding rectangle of all closed loop region portions of each slice image; and cropping the slice image based on the minimum bounding rectangle.
In some embodiments, filling the corresponding annotation data in each closed-loop region portion included in the slice image includes: filling the closed-loop region portions of the slice image with their corresponding annotation data in order of increasing area.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements.
FIG. 1 shows a schematic diagram of a system 100 for implementing a method for generating an annotated sample image according to an embodiment of the invention.
Fig. 2 shows a flow diagram of a method 200 for generating an annotated sample image according to an embodiment of the disclosure.
Fig. 3 shows a flow diagram of a method 300 for determining density data associated with each closed-loop region portion for a slice image in accordance with an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an example of an annotated slice image 400, in accordance with an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device 500 according to an embodiment of the disclosure.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In the following description, for the purposes of illustrating various inventive embodiments, certain specific details are set forth in order to provide a thorough understanding of the various inventive embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details. In other instances, well-known devices, structures and techniques associated with this application may not be shown or described in detail to avoid unnecessarily obscuring the description of the embodiments.
Throughout the specification and claims, the word "comprise" and variations thereof, such as "comprises" and "comprising," are to be understood as an open, inclusive meaning, i.e., as being interpreted to mean "including, but not limited to," unless the context requires otherwise.
Reference throughout this specification to "one embodiment" or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the terms first, second, third, fourth, etc. used in the description and in the claims, are used for distinguishing between various objects for clarity of description only and do not limit the size, other order, etc. of the objects described therein.
As mentioned above, currently, the images are usually labeled manually, however, the process of manual labeling is laborious and troublesome, and especially when the network model needs to be trained based on a large number of sample images, such an inefficient labeling manner affects the training efficiency of the network model.
To address at least in part one or more of the above issues and other potential issues, an example embodiment of the present disclosure proposes a method for generating an annotated sample image, comprising: standardizing a three-dimensional model created for a predetermined object such that a size of the standardized three-dimensional model is in a predetermined proportion to a size of a reference three-dimensional model; cutting the three-dimensional model subjected to the standardization processing to obtain a plurality of slice images of the three-dimensional model; extracting a plurality of closed-loop region portions in the slice image; determining density data associated with each closed-loop region portion for the slice image based on the reference three-dimensional model to determine annotation data for annotating the closed-loop region portions based on the density data; and filling corresponding labeling data in each closed-loop area part included in the slice image to obtain a labeled sample image. In this way, not only can the efficiency of generating the labeled sample images be improved, but also the number of labeled sample images can be increased, thereby greatly reducing the associated labeling cost.
Hereinafter, specific examples of the present scheme will be described in more detail with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of a system 100 for implementing a method for generating an annotated sample image according to an embodiment of the invention. As shown in FIG. 1, the system 100 includes a computing device 110, a network 120, and a server 130. The computing device 110 and the server 130 may exchange data via the network 120 (e.g., the Internet). In the present disclosure, the server 130 may provide the computing device 110 with various information related to a three-dimensional model of a predetermined object (e.g., a human body or a part of a human body), such as the medical image pictures used to build the three-dimensional model of the predetermined object. In some examples, the server 130 may be a medical service system such as a hospital information system (HIS). The computing device 110 may communicate with the server 130, for example to send information to and/or receive information from the server 130, and performs corresponding operations based on data from the server 130. The computing device 110 may include at least one processor 112 and at least one memory 114 coupled to the at least one processor 112; the memory 114 stores instructions 116 executable by the at least one processor 112 which, when executed by the at least one processor 112, perform at least part of the methods 200 and 300 described below. Note that the computing device 110 may be part of the server 130 or may be separate from the server 130. The specific structure of the computing device 110 is described below in connection with FIG. 5.
Fig. 2 shows a flow diagram of a method 200 for generating an annotated sample image according to an embodiment of the disclosure. The method 200 may be performed by the computing device 110 as shown in FIG. 1, or may be performed at the electronic device 500 shown in FIG. 5. It should be understood that method 200 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the present disclosure is not limited in this respect.
In step 202, the computing device 110 normalizes the three-dimensional model created for the predetermined object such that the size of the normalized three-dimensional model is in a predetermined proportion to the size of the reference three-dimensional model.
In the present disclosure, the predetermined object is the object of interest for which sample images need to be generated. For example, in the medical field, the predetermined object may be a medical object such as an entire human body or a certain part of the human body, depending on the actual role of the network model to be trained. The three-dimensional model created for the predetermined object may be created using any existing or future three-dimensional modeling method; for example, it may be created directly with professional three-dimensional construction software such as 3D Max, Maya, or CAD, or built from an image-segmentation neural network model, or the like.
In addition, the predetermined ratio mentioned in step 202 can be selected according to the needs of the application, including but not limited to, for example, 1, 0.8, 0.6, and so on. The reference three-dimensional model may include a plurality of pre-segmented reference slice images, each of which has its closed-loop region portions of interest labeled in advance to indicate what each portion represents. For example, where the predetermined object is the whole human body, each closed-loop region portion of interest may be associated with a tissue of the human body, such as skin, muscle, an organ, a blood vessel, or bone, and these tissues are accordingly labeled on the reference slice images.
The normalization mentioned in step 202 may be implemented with at least one of the following geometric transformations: scaling, warping, local lengthening, local shortening, and the like. Its purpose is to bring the size of the normalized three-dimensional model into the predetermined proportion to the size of the reference three-dimensional model, so that a slice image cut from the three-dimensional model of the predetermined object roughly corresponds to the reference slice image obtained from the reference three-dimensional model at the same or a similar cutting position. Each closed-loop region portion of the slice image can then be determined by means of the reference slice image, which helps improve the accuracy of image labeling. In the present disclosure, these geometric transformations may be implemented using any mesh morphing algorithm.
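As a toy illustration of the scaling part of this normalization (uniform vertex scaling only; warping and local lengthening or shortening would require an actual mesh-deformation library, and all names below are assumptions, not the patent's implementation):

```python
def normalize_vertices(verts, ref_size, ratio=1.0):
    """Uniformly scale mesh vertices so that the model's bounding-box height
    (extent along Y) equals ratio * ref_size, the predetermined proportion of
    the reference model's size."""
    ys = [v[1] for v in verts]
    height = max(ys) - min(ys)
    s = (ratio * ref_size) / height           # uniform scale factor
    return [(x * s, y * s, z * s) for (x, y, z) in verts]
```

For instance, a model of height 2 scaled against a reference of height 4 with ratio 1 doubles every coordinate.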
At step 204, the three-dimensional model after normalization is cut to obtain a plurality of slice images.
In some embodiments, the cutting of the three-dimensional model may be accomplished using any model cutting algorithm, and the thickness of each slice image may be consistent with, for example, the thickness of a Computed Tomography (CT) scan image, e.g., 0.625mm, 1.25mm, 5mm, etc., and the aforementioned thickness of the reference slice image is also consistent with the thickness of the CT scan image. The distance between adjacent slice images and adjacent reference slice images (also referred to as the cutting distance) may be selected according to the actual application, and may be, for example, 1mm, so that there may be an approximate correspondence between the slice images and the plurality of reference slice images of the reference three-dimensional model, thereby facilitating coarse positioning of the respective closed-loop regions on the slice images by means of the reference slice images of the reference three-dimensional model.
In the present disclosure, to facilitate cutting of the normalized three-dimensional model, a three-dimensional coordinate system is further established such that its origin is located at the center point of the bottom of the three-dimensional model, the positive direction of its Y axis indicates the direction from the bottom to the top of the model, and the positive direction of its Z axis indicates the direction from the back to the front of the model. For example, where the predetermined object is an entire human body, the origin of the three-dimensional coordinate system may be set at the center point between the two feet, the positive direction of the Y axis may be set to indicate the direction from the feet to the head, and the positive direction of the Z axis may be set to indicate the direction in which the toes point.
Thus, cutting the normalized three-dimensional model may include: the normalized three-dimensional model is cut from the top of the three-dimensional model in the negative direction of the Y-axis of the above three-dimensional coordinate system until the bottom of the three-dimensional model is reached such that each slice image is parallel to the plane defined by the X-axis and the Z-axis. As mentioned earlier, the thickness of each slice image may be kept consistent with the thickness of the CT scan image, e.g. 0.625mm, 1.25mm, 5mm, etc., and the thickness of the aforementioned reference slice image is also kept consistent with the thickness of the CT scan image. It should be noted that, in order to ensure a certain correspondence between the reference slice image and the slice image of the reference three-dimensional model, the reference slice image of the reference three-dimensional model should be cut in a similar manner based on a similar three-dimensional coordinate system.
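The cutting sweep described above can be sketched as follows (an illustrative Python sketch; the function names, the vertex-filtering view of a "slice", and the default thickness are assumptions for the example only):

```python
def cut_positions(top_y, bottom_y=0.0, step=1.0):
    """Y coordinates at which the model is sliced, moving from the top of the
    model in the negative Y direction down to the bottom, with cutting
    distance `step` (e.g., 1 mm). Each slice is parallel to the X-Z plane."""
    pos = []
    y = top_y
    while y >= bottom_y:
        pos.append(round(y, 6))
        y -= step
    return pos

def slice_vertices(verts, y, thickness=1.25):
    """Vertices whose Y coordinate falls within one slice of the given
    thickness (e.g., 0.625 mm, 1.25 mm, or 5 mm, matching a CT slice)."""
    return [v for v in verts if y - thickness / 2 <= v[1] <= y + thickness / 2]
```

A real model-cutting algorithm would intersect the mesh with each cutting plane and rasterize the cross-section; this sketch only shows the geometry of the sweep.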
In step 206, a plurality of closed-loop region portions in the slice image are extracted.
In some embodiments, where the three-dimensional model is a three-dimensional model built for a human body, each closed-loop region portion may be associated with a tissue of the human body, which may include skin, muscle, organs, or bones, and the like.
In some embodiments, all closed-loop region portions in the slice image may be extracted using a binary-image connected-component labeling algorithm. Connected-component labeling finds and marks adjacent pixels with the same pixel value in a bitmap image consisting only of background points and target points. The aim is to find all target objects in the image and mark all pixels belonging to the same target object with a unique label value, yielding the closed-loop region portion of each target object. In some embodiments of the present disclosure, the algorithm may proceed as follows. The current slice image is first binarized, producing a binary image whose luminance values take only the two states 0 and 255. The binary image is then traversed pixel by pixel, left to right and top to bottom. When a pixel with luminance 255 is encountered that is not adjacent to any already-scanned pixel, it is marked with a new label (e.g., label L). If the pixel is adjacent to exactly one scanned (i.e., already labeled) pixel, it receives that pixel's label; if it is adjacent to several scanned pixels, it receives the label of one of them and an equivalence between the labels is recorded. Next, the luminance states of the 8 pixels neighboring that pixel are examined, and any neighbor with luminance 255 that is not yet labeled is also marked with label L. This continues, following neighbors marked L, until all pixels of the target object have been found, which yields the closed-loop region portion of that object.
The traversal then resumes from the first pixel with luminance 255 mentioned above, and the process is repeated until the pixels of all target objects in the binary image have been found; at that point, all closed-loop region portions included in the slice image have been extracted. In the present disclosure, the search for a closed-loop region portion may also proceed from a left-side pixel of the binary image along two branch lines, an upper and a lower branch, marking along both branches until they meet at a common coordinate, or until no further pixel with a label equivalent to L is found.
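The labeling procedure above can be condensed into a small sketch. For brevity this uses a breadth-first flood fill per component rather than the two-pass, equivalence-recording scan described in the text, but under 8-connectivity it finds the same components; the names and the list-of-lists image layout are illustrative assumptions:

```python
from collections import deque

def label_regions(binary):
    """8-connected component labeling of a binary image (values 0 / 255).
    Scans left-to-right, top-to-bottom; each not-yet-labeled foreground pixel
    starts a new component, which is grown via breadth-first search over the
    8 neighbors. Returns the label image and the number of components."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 255 and labels[y][x] == 0:
                next_label += 1                      # new label L
                q = deque([(y, x)])
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for dy in (-1, 0, 1):            # 8-neighborhood
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx] == 255
                                    and labels[ny][nx] == 0):
                                labels[ny][nx] = next_label
                                q.append((ny, nx))
    return labels, next_label
```

Each distinct label value in the result corresponds to one extracted closed-loop region portion of the slice image.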
In some embodiments, after determining the closed-loop region portion in each slice image, the method 200 may further include: determining a minimum bounding rectangle of all closed loop region portions of each slice image; and cropping the slice image based on the minimum bounding rectangle. By such cropping, the size of the image that needs to be marked can be reduced, thereby contributing to further improving the efficiency of generating the marked sample image.
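The minimum-bounding-rectangle cropping can be sketched as follows (illustrative Python over a label image in which nonzero values mark closed-loop region pixels; the function name is an assumption):

```python
def crop_to_regions(labels):
    """Crop a label image to the minimum bounding rectangle of all nonzero
    (closed-loop region) pixels, shrinking the area that must be annotated."""
    ys = [y for y, row in enumerate(labels) for v in row if v]
    xs = [x for row in labels for x, v in enumerate(row) if v]
    y0, y1, x0, x1 = min(ys), max(ys), min(xs), max(xs)
    return [row[x0:x1 + 1] for row in labels[y0:y1 + 1]]
```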
At step 208, density data associated with each closed-loop region portion is determined for the slice image based on the reference three-dimensional model, so as to determine annotation data for annotating the closed-loop region portion based on the density data.
In the present disclosure, and particularly in the medical field, each portion (e.g., a tissue or organ) of a predetermined object (e.g., an entire human body or a body part) corresponds to a different medium, so different portions are associated with different density data (which may be represented by Hounsfield unit values). For example, the density of a local tissue or organ of the human body is often expressed as a CT value in Hounsfield units: about -1000 for air (the minimum medium density of the human body), -120 to -90 for fat, +300 to +400 for cancellous bone, +1800 to +1900 for compact bone (+1900 being the maximum medium density of the human body), -700 to -600 for lung, +25 to +45 for kidney, +60 plus or minus 6 for liver, +35 to +55 for muscle, and so forth. In the present disclosure, since each reference slice image segmented from the reference three-dimensional model has its closed-loop region portions of interest labeled in advance, it is possible to determine, based on the reference three-dimensional model, which tissue a given closed-loop region corresponds to, hence its associated density data, and then the annotation data for labeling that closed-loop region portion. In particular, the corresponding annotation data may be determined from the density data as follows: the density data of each medium, from air (-1000) to compact bone (+1900), is regressed to a value between 0 and 1, and the regressed value is then mapped to corresponding annotation data (e.g., color data).
For example, the medium corresponding to each closed-loop region portion can be regressed to a value between 0 and 1 by the following formula: the difference between the density data k of the medium corresponding to the current closed-loop region portion and the minimum medium density data w (e.g., -1000, the density of air mentioned above) is divided by the difference between the maximum medium density data s (e.g., +1900, the density of compact bone mentioned above) and the minimum medium density data, i.e., (k - (-1000)) / (1900 - (-1000)). The regressed value can then be mapped to corresponding annotation data (e.g., color data) as defined for the task at hand; for example, a mapping between each regressed value and color data may be established in advance based on the task requirements and then applied.
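The regression and mapping above can be written out as a short sketch (assuming, for illustration, a simple linear mapping of the regressed value to an 8-bit gray value; the disclosure leaves the mapping to annotation data task-defined):

```python
def density_to_gray(hu, hu_min=-1000.0, hu_max=1900.0):
    """Regress a Hounsfield-unit density to [0, 1] via (k - w) / (s - w),
    then map it linearly to a 0-255 gray value usable as color annotation."""
    t = (hu - hu_min) / (hu_max - hu_min)   # regressed value in [0, 1]
    return round(t * 255)
```

For example, air (-1000) maps to 0 and compact bone (+1900) maps to 255, with soft tissues falling in between.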
The annotation data may be label data or color data. As described above, since the density data associated with each closed-loop region portion actually lies within a specific interval, when the annotation data is color data, the color data corresponding to each closed-loop region portion may fluctuate randomly within a predetermined threshold range. Also, in the present disclosure, when the annotation data is color data, the closed-loop region portion may be annotated by directly filling it with the color corresponding to the value of the color data. For example, the value of the color data used as annotation data may be between 0 and 255.
A method 300 for determining density data associated with each closed-loop region portion for a slice image is described in further detail below in conjunction with fig. 3.
In step 210, corresponding annotation data is filled in each closed-loop region portion included in the slice image to obtain an annotated sample image.
Specifically, the closed-loop region portions included in the slice image may be filled with the corresponding annotation data in order of increasing area (i.e., smaller closed-loop region portions are filled with higher priority). Labeling in this small-to-large order effectively avoids filling errors. For example, if a smaller closed-loop region portion is nested within a larger one, filling the smaller portion first distinguishes the two in advance, so that when the larger portion is filled, the smaller portion is not mistakenly overwritten with the annotation data associated with the larger portion.
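The small-to-large filling order can be sketched as below. The representation of each closed-loop region portion as a boolean mask paired with a label value is an illustrative assumption; the disclosure does not prescribe a data format.

```python
import numpy as np

def fill_small_to_large(shape, regions):
    """Fill closed-loop region portions into a canvas, smallest area first,
    so that a small region nested inside a larger one keeps its own label.

    regions: list of (boolean_mask, label_value) pairs (assumed format).
    """
    canvas = np.zeros(shape, dtype=np.uint8)
    filled = np.zeros(shape, dtype=bool)
    # Priority: smaller area first, per the ordering described above.
    for mask, value in sorted(regions, key=lambda r: int(r[0].sum())):
        # Only fill pixels not already claimed by a smaller (nested) region.
        canvas[mask & ~filled] = value
        filled |= mask
    return canvas
```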
In some embodiments, in addition to filling the corresponding annotation data in each closed-loop region portion included in the slice image, a portion of each slice image located outside the determined closed-loop region portion is filled with a null value, for example, color data having a fill value of 0. For example, as shown in fig. 4, which is a schematic diagram of an example of an annotated sample image 400 according to an embodiment of the present disclosure, the sample image shown in fig. 4 is generated based on a slice image of a cut human head, wherein the slice image is annotated by different gray-scale color data, so that different parts included in the picture, such as a brain, a blood vessel, and the like, can be distinguished.
In some embodiments, the method 200 may further include locally and randomly rearranging each annotated sample image obtained by the above method in order to augment the sample images. In particular, the method 200 may also include generating a plurality of new annotated sample images from each annotated sample image, where each new annotated sample image may be generated by: generating a sliding window of a predetermined size, for example 9 × 9 (i.e., 9 pixels × 9 pixels); and traversing the annotated sample image with the sliding window, randomly rearranging the pixels of the annotated sample image that fall within the window at each sliding step, thereby generating a new annotated sample image. These operations blur the edges of the closed-loop region portions, simulating the noise of a computed tomography image, so that a network model trained on such sample images achieves higher accuracy. Additionally, in the present disclosure, by repeating the above operations, a plurality of different new annotated sample images may be generated from each annotated sample image. Thus, a large number of sample images can be generated from a single three-dimensional model, which helps improve the training accuracy of a network model that must be trained on such sample images.
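The sliding-window augmentation can be sketched as below, assuming a grayscale image and non-overlapping window steps; the disclosure says only that the window traverses the image and that the pixels inside it are randomly rearranged at each step.

```python
import numpy as np

def shuffle_augment(image, win=9, rng=None):
    """Generate a new annotated sample image by sliding a win x win window
    over the image and randomly permuting the pixels inside the window,
    blurring region edges to simulate CT noise.

    Non-overlapping steps of size `win` are an assumption; the disclosure
    does not specify the stride of the sliding window.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    h, w = out.shape
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            vals = out[y:y + win, x:x + win].copy().ravel()
            rng.shuffle(vals)  # random in-place permutation of window pixels
            out[y:y + win, x:x + win] = vals.reshape(win, win)
    return out
```

Repeating the call with different random states yields multiple distinct augmented images from one annotated sample image, as described above.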
In some embodiments, each labeled image may also be gaussian filtered, for example, 3 x 3 gaussian filtered (i.e., 3 pixels by 3 pixels gaussian filtered), to improve the smoothness of the image.
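A 3 × 3 Gaussian smoothing pass might look like the following; the kernel weights (1 2 1; 2 4 2; 1 2 1)/16 and the edge-replicating border handling are common choices, not mandated by the disclosure.

```python
import numpy as np

def gaussian_3x3(image):
    """Smooth an annotated sample image with a 3x3 Gaussian kernel.

    Kernel (1 2 1; 2 4 2; 1 2 1)/16 is a standard discrete approximation
    of a Gaussian; border pixels are handled by edge replication (assumed).
    """
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    # Accumulate the nine shifted, weighted copies of the padded image.
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```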
By the above method, the number of original images (e.g., CT images) needed to generate annotated sample images can be greatly reduced, the efficiency of generating annotated sample images is greatly improved, a large number of annotated sample images can be generated from one three-dimensional model, and the related cost is greatly reduced.
Fig. 3 shows a flow diagram of a method 300 for determining density data associated with each closed-loop region portion for a slice image in accordance with an embodiment of the present disclosure. The method 300 may be performed by the computing device 110 as shown in FIG. 1, or may be performed at the electronic device 500 shown in FIG. 5. It should be understood that method 300 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
In some embodiments, the method 300 includes steps 302-310.
In step 302, a first number of closed loop region portions comprised by the slice image is determined.
In the present disclosure, the first number is used to indicate how many closed loop region portions the current slice image includes.
At step 304, the cutting location of the slice image is determined.
As described above, since the slice images are obtained by cutting the normalized three-dimensional model from its top and the cutting interval between adjacent slice images is predetermined, the cutting position of each slice image can be indicated by its ordinal number in the cutting sequence (i.e., which slice, counted from the top, it is). Of course, the cutting position of a slice image can also be expressed as its distance from the top of the three-dimensional model, but when the cutting position is indicated by such a distance, the corresponding cutting position of the reference three-dimensional model needs to be scaled accordingly.
In step 306, a second number of closed-loop region portions comprised by the reference slice image of the reference three-dimensional model at the cutting position is determined, wherein the closed-loop region portions comprised by each reference slice image of the reference three-dimensional model are pre-labeled and density data of the closed-loop region portions comprised by each reference slice image is also pre-determined.
Since the size of the normalized three-dimensional model is in a predetermined proportion to the size of the reference three-dimensional model, the reference slice image having the same ordinal number as the slice image can be assumed to roughly correspond to it, and the reference slice image can therefore be used to determine which tissues the closed-loop region portions included in the slice image specifically indicate. In addition, as previously described, the closed-loop region portions of interest included in the reference slice images of the reference three-dimensional model have been pre-labeled to indicate what they specifically represent, and therefore the density data of these closed-loop region portions is also predetermined.
In addition, in the present disclosure, the second number is used to indicate how many closed-loop region portions the reference slice image includes.
At step 308, it is determined whether the first number is equal to the second number.
In the present disclosure, determining whether the first number is equal to the second number is mainly used to verify whether there is indeed a correspondence between the slice image at the cutting position and the reference slice image at the corresponding cutting position, thereby helping to ensure more accurate labeling of the slice image. If the first number is the same as the second number, it may be determined that there should be a correspondence between the slice image and the reference slice image at the cutting position, so that the slice image may be annotated based on the reference slice image.
In step 310, in response to determining that the first number is equal to the second number, density data associated with the closed-loop area portions included in the reference slice image are respectively assigned to the respective closed-loop area portions included in the slice image based on the relative positional relationship between the closed-loop area portions included in the reference slice image and the relative positional relationship between the closed-loop area portions included in the slice image.
By way of example only, suppose the slice image includes four closed-loop region portions whose relative positional relationship is top, bottom, left, and right, and the reference slice image at the same cutting position also includes four closed-loop region portions in the same top, bottom, left, and right relationship. Then the density data of the upper closed-loop region portion of the reference slice image may be assigned to the upper closed-loop region portion of the slice image, the density data of the lower closed-loop region portion of the reference slice image to the lower closed-loop region portion of the slice image, and so on, so that the density data of all closed-loop region portions included in the slice image can be determined. This is merely an example; in practical applications, the relative positional relationships between closed-loop region portions may be much more complicated (for example, regions may be nested), but the basic idea of the assignment remains the same. If, however, the relative positional relationship between the closed-loop region portions included in the reference slice image is significantly inconsistent with that between the closed-loop region portions included in the slice image, an anomaly may be present, and the assignment may be re-performed in a manner similar to steps 312-318.
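One hypothetical way to realize the relative-position matching is to reduce each closed-loop region portion to its centroid and pair regions in (row, column) order. The disclosure does not fix a concrete matching algorithm, so the helper below is purely illustrative and would need refinement for nested regions.

```python
import numpy as np

def assign_density_by_position(slice_regions, ref_regions, ref_density):
    """Transfer density data from reference closed-loop regions to slice
    regions by matching relative positions via centroids (assumed scheme).

    slice_regions / ref_regions: equally long lists of boolean masks.
    ref_density: density value for each reference region.
    Returns a density value per slice region.
    """
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return (ys.mean(), xs.mean())  # (row, column) position of the region

    # Sort both sides by centroid so the i-th slice region pairs with the
    # i-th reference region in top-to-bottom, left-to-right order.
    slice_order = sorted(range(len(slice_regions)),
                         key=lambda i: centroid(slice_regions[i]))
    ref_order = sorted(range(len(ref_regions)),
                       key=lambda j: centroid(ref_regions[j]))
    out = [None] * len(slice_regions)
    for i, j in zip(slice_order, ref_order):
        out[i] = ref_density[j]
    return out
```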
In other embodiments, the method 300 may further include steps 312-318.
At step 312, in response to determining that the first number is not equal to the second number, a third number of closed-loop region portions comprised by a next reference slice image of the reference three-dimensional model at a next cut position is determined.
This situation is unlikely and generally occurs in slice images at the interface between two tissue portions. In addition, in the present disclosure, the third number is used to indicate how many closed-loop region portions the next reference slice image includes.
If the first number is not equal to the second number, there is no correspondence between the slice image and the reference slice image at the cutting position. It can then be determined whether a correspondence exists between the slice image and the next reference slice image at the next cutting position, so as to decide whether the slice image can be labeled based on that next reference slice image.
At step 314, it is determined whether the first number is equal to the third number.
In step 316, in response to determining that the first number is equal to the third number, density data associated with the closed-loop area portions included in the next reference slice image are respectively assigned to the respective closed-loop area portions included in the slice image based on the relative positional relationship between the closed-loop area portions included in the next reference slice image and the relative positional relationship between the closed-loop area portions included in the slice image.
Steps 314 and 316 are similar to steps 308 and 310, respectively, previously mentioned, and therefore are not further described herein.
At step 318, the slice image is marked as abnormal in response to determining that the first number is not equal to the third number.
The slice images determined to be abnormal can be marked manually, which does not add much work since the probability of occurrence of such abnormal cases is very low.
Through the technical scheme, efficient labeling of each slice image can be realized, and further, related labeling cost is reduced.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. For example, the computing device 110 as shown in FIG. 1 may be implemented by the electronic device 500. As shown, electronic device 500 includes a Central Processing Unit (CPU) 501 that may perform various appropriate actions and processes according to computer program instructions stored in a Read Only Memory (ROM) 502 or loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The random access memory 503 may also store various programs and data necessary for the operation of the electronic device 500. The central processing unit 501, the read only memory 502, and the random access memory 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A plurality of components in the electronic device 500 are connected to the input/output interface 505, including: an input unit 506 such as a keyboard, a mouse, a microphone, and the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes and processes described above, such as methods 200 and 300, may be performed by the central processing unit 501. For example, in some embodiments, methods 200 and 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the read only memory 502 and/or the communication unit 509. When the computer program is loaded into the random access memory 503 and executed by the central processing unit 501, one or more of the actions of the methods 200 and 300 described above may be performed.
The present disclosure relates to methods, apparatuses, systems, electronic devices, computer-readable storage media and/or computer program products. The computer program product may include computer-readable program instructions for performing various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge computers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for generating an annotated sample image, comprising:
standardizing a three-dimensional model created for a predetermined object such that a size of the standardized three-dimensional model is in a predetermined proportion to a size of a reference three-dimensional model;
cutting the three-dimensional model subjected to the standardization processing to obtain a plurality of slice images;
extracting a plurality of closed-loop region portions in the slice image;
determining density data associated with each closed-loop region portion for the slice image based on the reference three-dimensional model to determine annotation data for annotating the closed-loop region portions based on the density data; and
filling corresponding labeling data in each closed-loop area part included in the slice image to obtain a labeled sample image;
wherein determining, for the slice image, density data associated with each closed-loop region portion based on the reference three-dimensional model comprises:
determining a first number of closed loop region portions comprised by the slice image;
determining a cutting position of the slice image;
determining a second number of closed-loop region portions comprised by reference slice images of the reference three-dimensional model at the cutting position, wherein the closed-loop region portions comprised by each reference slice image of the reference three-dimensional model have been pre-labeled and density data of the closed-loop region portions comprised by each reference slice image is predetermined;
determining whether the first number is equal to the second number; and
in response to determining that the first number is equal to the second number, assigning density data associated with closed-loop region portions included in the reference slice image to respective closed-loop region portions included in the slice image based on a relative positional relationship between the closed-loop region portions included in the reference slice image and a relative positional relationship between the closed-loop region portions included in the slice image.
2. The method of claim 1, wherein determining density data associated with each closed-loop region portion for the slice image based on the reference three-dimensional model further comprises:
in response to determining that the first number is not equal to the second number, determining a third number of closed-loop region portions comprised by a next reference slice image of the reference three-dimensional model at a next cut position;
determining whether the first number is equal to the third number;
in response to determining that the first number is equal to the third number, assigning density data associated with closed-loop region portions included in the next reference slice image to the respective closed-loop region portions included in the slice image based on a relative positional relationship between the closed-loop region portions included in the next reference slice image and a relative positional relationship between the closed-loop region portions included in the slice image; and
in response to determining that the first number is not equal to the third number, marking the slice image as anomalous.
3. The method of claim 1, further comprising:
establishing a three-dimensional coordinate system so that an original point of the three-dimensional coordinate system is located at a central point of the bottom of the three-dimensional model, wherein a positive direction of a Y axis of the three-dimensional coordinate system indicates a direction of the three-dimensional model from the bottom to the top, and a positive direction of a Z axis of the three-dimensional coordinate system indicates a direction of the three-dimensional model from the back to the front.
4. The method of claim 3, wherein cutting the normalized three-dimensional model to obtain a plurality of slice images comprises:
the normalized three-dimensional model is cut from the top of the three-dimensional model in a negative direction of the Y-axis until the bottom of the three-dimensional model is reached such that each slice image is parallel to a plane defined by the X-axis and the Z-axis of the three-dimensional coordinate system.
5. The method of claim 1, further comprising:
filling in a null value in a portion of each slice image located outside the determined closed-loop area portion.
6. The method of claim 1, further comprising:
generating a plurality of new annotated sample images based on each annotated sample image, each new annotated sample image generated by:
generating a sliding window of a predetermined size;
traversing the annotated sample image using the sliding window, wherein each step of sliding of the sliding window randomly arranges pixels of the annotated sample image that are within the sliding window, thereby generating the new annotated sample image.
7. The method of claim 1, further comprising:
determining a minimum bounding rectangle of all closed loop region portions of each slice image; and
cutting the slice image based on the minimum bounding rectangle.
8. The method of claim 1, wherein populating each closed-loop region portion included in the slice image with respective annotation data comprises:
filling the closed-loop region portions included in the slice image with corresponding annotation data in order from the smallest to the largest closed-loop region portion.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202111296863.4A 2021-11-04 2021-11-04 Method, apparatus, and medium for generating annotated sample images Active CN113744288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111296863.4A CN113744288B (en) 2021-11-04 2021-11-04 Method, apparatus, and medium for generating annotated sample images


Publications (2)

Publication Number Publication Date
CN113744288A CN113744288A (en) 2021-12-03
CN113744288B true CN113744288B (en) 2022-01-25

Family

ID=78727338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111296863.4A Active CN113744288B (en) 2021-11-04 2021-11-04 Method, apparatus, and medium for generating annotated sample images

Country Status (1)

Country Link
CN (1) CN113744288B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311592A (en) * 2020-03-13 2020-06-19 中南大学 Three-dimensional medical image automatic segmentation method based on deep learning
CN111523610A (en) * 2020-05-06 2020-08-11 青岛联合创智科技有限公司 Article identification method for efficient sample marking
CN111583199A (en) * 2020-04-24 2020-08-25 上海联影智能医疗科技有限公司 Sample image annotation method and device, computer equipment and storage medium
CN112007289A (en) * 2020-09-09 2020-12-01 上海沈德医疗器械科技有限公司 Automatic planning method and device for tissue ablation

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20170079604A1 (en) * 2015-09-18 2017-03-23 Anbinh T. Ho System and method for digital breast tomosynthesis




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant