CN111738998A - Dynamic detection method and device for lesion position, electronic equipment and storage medium


Info

Publication number
CN111738998A
Authority
CN
China
Prior art keywords
lung
image
images
optical flow
lesion
Legal status
Granted
Application number
CN202010534757.4A
Other languages
Chinese (zh)
Other versions
CN111738998B (en)
Inventor
郭英委
杨英健
李强
刘洋
曾吴涛
康雁
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Application filed by Shenzhen Technology University filed Critical Shenzhen Technology University
Priority to CN202010534757.4A
Publication of CN111738998A
Application granted
Publication of CN111738998B
Status: Active

Classifications

    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06T 7/207 — Analysis of motion; motion estimation over a hierarchy of resolutions
    • G06T 7/269 — Analysis of motion using gradient-based methods
    • G06T 2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20084 — Algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20221 — Image combination: image fusion; image merging
    • G06T 2207/30061 — Subject of image: lung

Abstract

The present disclosure relates to a method and apparatus for dynamically detecting a lesion position, an electronic device, and a storage medium. The method includes: acquiring predicted positions of a lesion in multiple sets of lung images, the sets being acquired at multiple moments during a breathing process; extracting the first images located at the same position in the multiple sets to obtain a lung motion sequence image for each such position; and sequentially correcting the predicted positions of the first images according to their order in the lung motion sequence to obtain the dynamic detection position of the lesion, where the predicted position of the (i+1)-th first image is corrected using the dynamic detection position of the i-th first image in the lung motion sequence. The disclosed embodiments thereby enable dynamic tracking of the lesion position in lung images.

Description

Dynamic detection method and device for lesion position, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for dynamically detecting a lesion position, an electronic device, and a storage medium.
Background
The movement of air into and out of the lungs via the respiratory tract, driven by the expansion and contraction of the thorax, is called respiratory movement. The expansion and recoil of the lungs depend entirely on the movement of the thorax. When the thorax expands, the lungs are pulled outward and air enters them; this is the inspiration movement. When the thorax retracts, the air in the lungs is expelled from the body; this is the expiration movement. Because respiratory movement proceeds continuously, the gas composition in the alveoli remains relatively constant, and gas exchange between the blood and the alveolar gas also proceeds continuously.
During respiratory movement the lungs expand and recoil, so the positions of the lung lobes and of any lesions in the lung change. In clinical settings, for example during surgery or CT examination, there is a need to monitor the lesion position in the lung. Current practice usually determines the lesion position by analyzing static images directly, without considering the lung's motion during respiration, which reduces the accuracy with which the lesion position is determined.
Disclosure of Invention
The present disclosure proposes a technical solution that enables accurate dynamic tracking of the lesion position in lung images.
According to an aspect of the present disclosure, there is provided a dynamic lesion position detection method, including:
acquiring predicted positions of a lesion in multiple sets of lung images, where the multiple sets of lung images are acquired at multiple moments during a breathing process;
respectively extracting first images at the same position in the multiple groups of lung images to obtain a lung motion sequence image at each same position;
sequentially correcting the predicted positions of the first images according to their order in the lung motion sequence to obtain the dynamic detection position of the lesion, where the predicted position of the (i+1)-th first image is corrected using the dynamic detection position of the i-th first image in the lung motion sequence.
In some possible embodiments, the separately extracting the first images at the same positions of the plurality of sets of lung images to obtain the lung motion sequence image at each of the same positions includes:
determining the number of layers of the plurality of sets of lung images;
determining first images at the same position in the multiple groups of lung images according to the number of layers;
and obtaining the lung motion sequence image corresponding to the same position according to the first image positioned at the same position in each group of lung images.
In some possible embodiments, the sequentially correcting the predicted positions according to the order of the first image in the lung motion sequence to obtain a dynamic detection position of the lesion includes:
obtaining a first lesion feature corresponding to the predicted position in the first image;
sequentially correcting each first lesion feature according to the order of the first images in the lung motion sequence;
and obtaining the dynamic detection position of the lesion based on the corrected first lesion features.
In some possible embodiments, the sequentially correcting each of the first lesion features according to the order of the first images in the lung motion sequence includes:
obtaining optical flow between the first images in the lung motion sequence;
sequentially correcting the first lesion features using the optical flow according to the order of the first images in the lung motion sequence;
wherein the optical flow comprises a forward optical flow obtained in a forward order of the first image in the lung motion sequence and/or a backward optical flow obtained in a backward order of the first image in the lung motion sequence.
In some possible embodiments, the obtaining optical flow between the first images in the lung motion sequence includes:
and performing optical flow estimation on each first image in the input lung motion sequence by using an optical flow estimation model to obtain optical flow between each first image in the lung motion sequence.
In some possible embodiments, the sequentially correcting the first lesion feature using the optical flow in the order of the first image in the lung motion sequence includes at least one of:
correcting, sequentially in the forward order of the first images in the lung motion sequence, the first lesion feature of the (i+1)-th first image according to the forward optical flow between the i-th and (i+1)-th first images;
and correcting, sequentially in the reverse order of the first images in the lung motion sequence, the first lesion feature of the (i+1)-th first image according to the backward optical flow between the i-th and (i+1)-th first images.
In some possible embodiments, the sequentially correcting the first lesion feature using the optical flow in the order of the first image in the lung motion sequence further includes:
performing feature fusion processing on the first lesion feature obtained through the forward optical flow correction and the first lesion feature obtained through the backward optical flow correction; and/or
performing optical flow optimization processing on the forward optical flow and/or the backward optical flow, and correcting the first lesion features using the optimized optical flow.
According to a second aspect of the present disclosure, there is provided a lesion position dynamic detection apparatus, comprising:
an acquisition module, configured to acquire predicted positions of a lesion in multiple sets of lung images, where the multiple sets of lung images are acquired at multiple moments during a breathing process;
an extraction module, configured to respectively extract the first images at the same position in the multiple sets of lung images to obtain a lung motion sequence image for each such position;
a detection module, configured to sequentially correct the predicted positions of the first images according to their order in the lung motion sequence to obtain the dynamic detection position of the lesion, where the predicted position of the (i+1)-th first image is corrected using the dynamic detection position of the i-th first image in the lung motion sequence.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any one of the first aspects.
In the disclosed embodiments, lung images acquired at multiple moments during a breathing process may be obtained, and a predicted position of a lesion within the lung images may be detected. Corresponding images (first images) are then extracted from the same position in the lung images of the various moments to form a lung motion sequence image for that position, and the predicted positions can be corrected in sequence according to the order of the images in the lung motion sequence to obtain corrected lesion positions, i.e., dynamic detection positions. A lung motion sequence formed in this way captures the motion of the same lung position at different times, so the predicted position in each first image of the sequence reflects the change of the lesion position over time. Based on this, the disclosed embodiments correct the predicted positions using this temporal change information, improving lesion position detection accuracy and enabling dynamic detection of the lesion position at each moment of the breathing process.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 2 shows a flowchart of step S10 in a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 3 shows a flowchart of step S20 in a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 4 shows a flowchart of step S30 in a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 5 shows a flowchart of step S32 in a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 6 shows a flowchart of step S322 in a dynamic lesion position detection method according to an embodiment of the present disclosure;
Fig. 7 shows a schematic structural diagram of an optical flow optimization network according to an embodiment of the present disclosure;
Fig. 8 shows a block diagram of a lesion position dynamic detection apparatus according to an embodiment of the present disclosure;
Fig. 9 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure;
Fig. 10 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; for brevity, such combinations are not described in detail in this disclosure.
The embodiment of the present disclosure provides a dynamic lesion position detection method, which may be executed by an image processing apparatus, for example, a terminal device or a server or other processing device, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the dynamic lesion location detection method may be implemented by a processor calling computer readable instructions stored in a memory.
Fig. 1 shows a flowchart of a dynamic lesion position detection method according to an embodiment of the present disclosure, as shown in fig. 1, the method includes:
s10: acquiring predicted positions of a lesion in multiple sets of lung images, where the multiple sets of lung images are acquired at multiple moments during the breathing process;
In some possible embodiments, the lung images may be obtained at multiple moments by CT (Computed Tomography) scanning. The specific method may include: setting the number of scan layers, the layer thickness, and the inter-layer spacing of the CT device; and acquiring lung images at multiple moments with that number of scan layers, layer thickness, and inter-layer spacing. A lung image obtained in the disclosed embodiments is composed of multiple layers of images and can be regarded as a three-dimensional image structure. The lung images may be multiple sets acquired at different times during the breathing process, each set corresponding to one moment.
In some possible embodiments, the acquisition of the lung images at multiple time instants may be requested from other electronic devices or servers, wherein multiple sets (sets) of lung images may be obtained, each set of lung images corresponding to one time instant, and the multiple sets of lung images constitute lung images at multiple time instants.
In some possible embodiments, once a lung image is obtained, the predicted position of a lesion in the lung image may be detected. The lesion type may be at least one lung disease, such as at least one of a pulmonary nodule, tuberculosis, a tumor, or lung cancer; this is not a specific limitation of the present disclosure, and the lesion type may also be another lung disease. The disclosed embodiments detect the lesion position (predicted position) in the lung image for a set lesion type by processing the lung image. Since the lung image is a multi-layer image, the predicted lesion position in each layer can be detected. The lesion detection may be implemented by a convolutional neural network or by a conventional algorithm, which the disclosure does not specifically limit. For example, the convolutional neural network may include at least one of a region proposal network, a residual network, or a mask-based target recognition network, and the conventional algorithm may include a region growing method, a K-means clustering algorithm, a classifier, and the like. These methods are merely exemplary, and those skilled in the art may use other approaches. The predicted lesion position in the lung image may be represented in matrix or vector form, where the region occupied by the lesion is marked with a first identifier (e.g., 1) and the region outside the lesion with a second identifier (e.g., 0).
S20: respectively extracting first images at the same position in the multiple groups of lung images to obtain a lung motion sequence image at each same position;
In some possible embodiments, once multiple sets of lung images are obtained, the images (first images) corresponding to the same position may be extracted from the multiple sets, and the first images at a given position may be used to form the lung motion sequence image for that position. The same position may mean the same layer number within each lung image.
S30: sequentially correcting the predicted positions of the first images according to the sequence of the first images in the lung motion sequence to obtain the dynamic detection position of the focus; and correcting the predicted position of the (i + 1) th first image by using the dynamic detection position of the ith first image in the lung motion sequence to obtain the dynamic detection position of the (i + 1) th first image.
In some possible embodiments, the predicted positions of the first images in a lung motion sequence image may be corrected sequentially using the forward and/or reverse order of the first images. Each first image in the sequence is an image of the same lung plane at a different time, so sequentially correcting the predicted lesion positions in the first images allows the dynamic change of the lesion across the different times to be determined accurately.
The steps of the disclosed embodiments are described in detail below with reference to the drawings. Fig. 2 shows a flowchart of step S10 in a lesion position dynamic detection method according to the present disclosure. Wherein the obtaining of the predicted location for the lesion in the plurality of sets of lung images comprises:
s11: acquiring lung images at multiple moments during the respiratory process;
s12: performing target detection processing on the lung images to detect the predicted positions of the lesion in the lung images.
As described in the above embodiments, the lung images may be multiple sets acquired at different times during respiration, each set corresponding to one moment. The acquisition may cover multiple sets during inspiration, multiple sets during expiration, or multiple sets spanning both inspiration and expiration; the sets are obtained from the same patient at multiple moments of expiration and/or inspiration, each moment corresponding to one set. A "moment" in the disclosed embodiments may denote a short time period, i.e., the time in which one set of lung images is acquired. The acquisition can follow the guidance of an imaging physician; for example, during the breathing process, at least one set may be acquired at deep inspiration, at least one set at deep expiration, and at least one set in a resting state, the resting state being between normal expiration and inspiration. As another example, the patient may hold their breath at different points of the respiratory cycle while a set of lung images is acquired. Those skilled in the art can acquire lung images at different times to perform dynamic lesion position detection under different conditions.
When multiple sets of lung images are obtained, target detection for the set lesion type may be performed on them to obtain the predicted lesion positions. As described in the above embodiments, the lesion type may be at least one lung disease, such as at least one of a pulmonary nodule, tuberculosis, a tumor, or lung cancer, or another lung disease. The target detection processing may be implemented by a conventional target detection method, for example a region growing method or a K-means clustering algorithm, or by a convolutional neural network, for example a region proposal network or a residual network; the disclosure does not specifically limit this.
When a conventional target detection method is used, candidate classification regions are obtained by the detection method and then identified with a classifier, thereby locating the lesion region. When a convolutional neural network is used, feature extraction is first performed on the lung image, and the extracted image features are then used for lesion region detection to predict the lesion position in the lung image. Each layer of a lung image may be detected separately, or a whole set of lung images may be detected at once; the disclosure does not limit this. Through the target detection processing, the predicted lesion position within each layer image of the lung image is obtained. The predicted position may be represented in matrix or vector form, where each element indicates whether the corresponding location of the lung image belongs to the lesion region: the first identifier marks the lesion region, and the second identifier marks the region outside it.
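For illustration only (the patent does not prescribe a particular network), the following minimal sketch shows how a per-slice segmentation-style CNN could emit the binary predicted-position matrix described above; the class name LesionDetector, the layer sizes, and the 0.5 threshold are assumptions, not the patented method.

```python
import torch
import torch.nn as nn

class LesionDetector(nn.Module):
    """Hypothetical per-slice detector: maps one CT slice to a binary
    mask in the format described above (1 = lesion region, 0 = rest)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # per-pixel lesion logit

    def forward(self, ct_slice):                     # ct_slice: (B, 1, H, W)
        logits = self.head(self.features(ct_slice))
        return (torch.sigmoid(logits) > 0.5).float()  # first/second identifier mask
```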
In the case where the predicted position of the lesion in the lung image is detected, images at the same position in each set of lung images may be extracted to form a lung motion sequence image.
Fig. 3 shows a flowchart of step S20 in a dynamic lesion position detection method according to an embodiment of the present disclosure. Wherein, the respectively extracting the first images at the same position in the multiple groups of lung images to obtain the lung motion sequence image at each same position comprises:
s21: determining the number of layers of the plurality of sets of lung images;
s22: determining first images at the same position in the multiple groups of lung images according to the number of layers;
s23: and obtaining the lung motion sequence image corresponding to the same position according to the first image positioned at the same position in each group of lung images.
In the disclosed embodiments, the number of scan layers, the layer thickness, and the inter-layer spacing were fixed when the multiple sets of lung images were acquired, so these parameters are identical across the sets. Images at the same position in each set can therefore be identified by layer number; that is, the first images of the multiple sets located at the same position can be determined according to the layer number. For example, the position of the N-th layer of the set acquired at the first moment is the same lung plane as the N-th layer of the sets acquired at the second through M-th moments; combining the lung planes of the same layer across all moments yields the lung motion sequence image, where M is an integer greater than 1 denoting the number of moments (or sets) and N may be any layer index.
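As a concrete illustration of this layer-wise grouping, here is a sketch under the assumption that each set of lung images is stored as a NumPy volume of shape (N_layers, H, W):

```python
import numpy as np

def build_motion_sequences(volumes):
    """volumes: list of M CT volumes acquired at moments t1..tM, each of
    shape (N_layers, H, W) with identical scan settings. Returns an
    array of shape (N_layers, M, H, W): one lung motion sequence image
    per layer position."""
    stacked = np.stack(volumes, axis=0)   # (M, N_layers, H, W)
    return stacked.transpose(1, 0, 2, 3)  # (N_layers, M, H, W)

# The sequence for layer N collects the N-th slice of every volume,
# i.e. F_1N, F_2N, ..., F_MN:
# seq_N = build_motion_sequences(volumes)[N]
```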
In the case of obtaining a lung motion sequence image, each group of lung motion sequence images may be analyzed separately, and the predicted positions in each image may be corrected in turn. As described in the above embodiments, each lung motion sequence image may represent the image variation of the corresponding lung plane at different time instants.
Fig. 4 is a flowchart of step S30 in a dynamic lesion position detection method according to an embodiment of the present disclosure, in which the sequentially correcting the predicted positions of the first images in the order of the first images in the lung motion sequence to obtain a dynamic lesion detection position includes:
s31: obtaining a first lesion feature corresponding to the predicted position in the first image;
s32: sequentially correcting each first lesion feature according to the order of the first images in the lung motion sequence;
s33: and obtaining the dynamic detection position of the lesion based on the corrected first lesion features.
In some possible embodiments, the predicted position of the detected lesion in each first image of the lung motion sequence may be corrected separately to obtain a dynamic detection position that varies with time. The first lesion feature of the lesion in each first image may first be acquired.
In one example, the image region corresponding to the predicted position may be cropped from the first image according to the lesion position detected in S10, and feature extraction may be performed on the cropped image region to obtain the corresponding first lesion feature. The cropping may be realised by multiplying the matrix or vector representing the predicted position with the first image, which retains the image region at the predicted position. The disclosed embodiments may bring the resulting first lesion features to the same scale by upsampling or downsampling.
In another example, feature extraction may be performed on the first image to obtain its image features, and those image features may then be multiplied by the matrix or vector representing the predicted position to obtain the image features corresponding to the predicted position, i.e., the first lesion feature. The feature extraction may be implemented by a feature extraction neural network, such as a residual network or a feature pyramid network, although the disclosure is not limited to these. Here too, the first lesion features are brought to the same scale.
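A minimal sketch of this second variant, assuming a stand-in convolutional backbone that preserves spatial resolution (the actual feature extractor is not specified by the patent):

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(        # stand-in feature extractor, 32 channels
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
)

def first_lesion_feature(first_image, pred_mask):
    """first_image: (1, 1, H, W) slice; pred_mask: (1, 1, H, W) binary
    predicted-position matrix. Multiplying the image features by the
    mask keeps only the feature values inside the lesion region."""
    feat = backbone(first_image)  # (1, 32, H, W), same spatial size
    return feat * pred_mask       # first lesion feature
```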
In the case of acquiring the first lesion features of each first image in the lung motion sequence, correction optimization may be performed on each first lesion feature in turn.
Fig. 5 is a flowchart of step S32 in a dynamic lesion position detection method according to an embodiment of the present disclosure, where sequentially correcting each first lesion feature according to the order of the first images in the lung motion sequence includes:
s321: obtaining optical flow between the first images in the lung motion sequence;
s322: sequentially correcting the first lesion features using the optical flow according to the order of the first images in the lung motion sequence, where the optical flow includes a forward optical flow obtained in the forward order of the first images in the lung motion sequence and/or a backward optical flow obtained in the reverse order of the first images in the lung motion sequence.
In the disclosed embodiments, optical flow may be used to represent the change between moving images; it refers to the velocity of pattern motion in time-varying images. When the lung moves, the brightness pattern of its corresponding points on the image also moves, so optical flow can represent the change between images; and since it contains the motion information of the lesion, it can be used to determine the motion of the lesion within the lung. Each first image in a lung motion sequence image depicts the same lung plane at a different time, and the motion of a lesion within that lung plane can be obtained by analyzing the optical flow changes between the predicted lesion positions in the first images at the respective times.
Assume the moments corresponding to the multiple sets of lung images are t1, t2, …, tM, where M denotes the number of sets. The N-th lung motion sequence image then consists of the N-th layer images of the M sets, denoted F_{1N}, F_{2N}, …, F_{MN}, i.e., the N-th layer images of the 1st through M-th sets. The disclosed embodiments can obtain the optical flow between any two first images in a lung motion sequence image. To obtain sufficient optical flow information, the optical flow between adjacent first images may be obtained in the forward and/or reverse order of the first images in the sequence.
In one example, when performing optical flow estimation, the forward optical flows between adjacent first images within each lung motion sequence image can be obtained in the forward order of sets 1 to M: for example, the forward optical flow from F_{1N} to F_{2N}, then from F_{2N} to F_{3N}, and so on, up to the forward optical flow from F_{(M-1)N} to F_{MN}. The forward optical flow represents the motion velocity of each feature point between adjacent first images arranged in forward temporal order. Specifically, the first images of the lung motion sequence may be input into an optical flow estimation model, such as FlowNet2.0, to obtain the forward optical flows between them; other optical flow estimation models may also be used, which the disclosure does not specifically limit. Alternatively, optical flow estimation algorithms such as sparse or dense optical flow estimation may be applied to adjacent first images, which the disclosure likewise does not specifically limit.
In another example, when performing optical flow estimation, the backward optical flows between adjacent first images within each lung motion sequence image are obtained in the reverse order of sets M to 1: for example, the backward optical flow from F_{MN} to F_{(M-1)N}, then from F_{(M-1)N} to F_{(M-2)N}, and so on, down to the backward optical flow from F_{2N} to F_{1N}. The backward optical flow represents the motion velocity of each feature point between adjacent first images arranged in reverse temporal order. Similarly, the lung motion sequence images may be input into the optical flow estimation model to obtain the backward optical flows, or sparse or dense optical flow estimation algorithms may be used; the disclosure does not specifically limit this either.
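The pairwise estimation in both orders can be organised as below. flow_model is an assumed callable (for instance a FlowNet2.0 wrapper) that returns the optical flow from its first argument to its second; it is not defined here.

```python
def pairwise_flows(sequence, flow_model):
    """sequence: list of M first images F_1N .. F_MN of one lung motion
    sequence. Returns M-1 forward flows (taken in order 1..M) and
    M-1 backward flows (taken in order M..1)."""
    forward = [flow_model(sequence[i], sequence[i + 1])   # F_iN -> F_(i+1)N
               for i in range(len(sequence) - 1)]
    backward = [flow_model(sequence[i + 1], sequence[i])  # F_(i+1)N -> F_iN
                for i in range(len(sequence) - 1)]
    return forward, backward
```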
When the optical flow between the first images is obtained, the first lesion features of the first images may be corrected using it. For example, in the forward order of the first images in the lung motion sequence, the first lesion feature of the (i+1)-th first image may be corrected according to the forward optical flow between the i-th and (i+1)-th first images; and/or, in the reverse order of the first images in the lung motion sequence, the first lesion feature of the (i+1)-th first image may be corrected according to the backward optical flow between the i-th and (i+1)-th first images.
Fig. 6 shows a flowchart of step S322 in a dynamic lesion position detection method according to an embodiment of the present disclosure. Wherein sequentially correcting the first lesion feature using the optical flow in an order of a first image in the lung motion sequence comprises:
s3221: obtaining a first forward optical flow and/or a first backward optical flow between the predicted positions of the lesions of the first image by using the forward optical flow and/or the backward optical flow between the first images;
s3222: correcting the first lesion features based on the first forward optical flow and/or the first backward optical flow.
In the disclosed embodiments, once the forward and/or backward optical flow between the first images is obtained, the optical flow information corresponding to the predicted lesion positions (the first forward optical flow and/or the first backward optical flow) may be derived from it. For example, the matrix or vector representing the predicted lesion position may be multiplied by the forward and/or backward optical flow to obtain the corresponding first forward and/or first backward optical flow. Concretely, the forward optical flow between the i-th and (i+1)-th first images may be multiplied by the matrix corresponding to the predicted lesion position of the (i+1)-th first image, yielding the first forward optical flow between the i-th and (i+1)-th first images; and the backward optical flow between the (i+1)-th and i-th first images may be multiplied by the predicted lesion position of the i-th first image, yielding the first backward optical flow between the (i+1)-th and i-th images.
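Restricting the flows to the lesion regions by multiplication can then be sketched as follows; tensor shapes follow the PyTorch (B, C, H, W) convention, which is an assumption rather than something the patent prescribes.

```python
def lesion_flows(forward_flows, backward_flows, masks):
    """masks[i]: binary lesion mask of the i-th first image, (1, 1, H, W);
    each flow: (1, 2, H, W). As described above, the forward flow from
    image i to i+1 is masked with the lesion position of image i+1, and
    the backward flow from image i+1 to i with that of image i."""
    first_forward = [forward_flows[i] * masks[i + 1]
                     for i in range(len(forward_flows))]
    first_backward = [backward_flows[i] * masks[i]
                      for i in range(len(backward_flows))]
    return first_forward, first_backward
```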
On this basis, the first forward and first backward optical flows between the predicted lesion positions in the respective first images of the lung motion sequence can be obtained. Each first lesion feature is then corrected using at least one of the first forward optical flow and the first backward optical flow.
The first forward optical flow is exemplified below. First lesion features of first images other than the first image may be corrected based on each of the first forward optical flows in a forward order of the first images in the lung motion sequence images.
In the disclosed embodiments, the obtained first forward optical flow represents the optical flow between the predicted lesion positions (lesion regions) of two first images that are adjacent in the forward order of the lung motion sequence images, i.e., the flow from the lesion region of the previous first image to that of the next. The correction of the first lesion feature of each first image can therefore be guided by the corresponding first forward optical flow; that is, the new lesion feature into which each first lesion feature evolves under the motion described by its first forward optical flow can be determined.
Specifically, the lesion repair features of the first images other than the first one may be obtained from the first forward optical flows, where a lesion repair feature represents the feature change induced by the optical flow. The process may include: in the forward order of the lung motion sequence images, performing offset processing on the first lesion feature of the first image according to the first first-forward optical flow to obtain the lesion repair feature of the second first image; then summing the first lesion feature of the k-th first image with the lesion repair feature of the k-th first image to obtain the first summation feature of the k-th first image, and performing offset processing on that k-th first summation feature according to the k-th first forward optical flow to obtain the lesion repair feature of the (k+1)-th first image; and so on, until the first summation features of all first images except the first one are obtained. Here k is a positive integer greater than 1 and less than M, and M is the number of first images. The first summation features obtained in the disclosed embodiments are the corrected first lesion features of the respective first images.
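A minimal sketch of this forward recursion, assuming the "offset processing" is bilinear warping by a flow stored as per-pixel (dx, dy) displacements — one common realisation, not necessarily the one patented:

```python
import torch
import torch.nn.functional as F

def warp(feature, flow):
    """Offset a feature map (B, C, H, W) by an optical flow (B, 2, H, W)
    whose channels are (dx, dy), using bilinear sampling."""
    b, _, h, w = feature.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feature.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                               # (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0   # normalise x to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0   # normalise y to [-1, 1]
    grid = torch.stack((gx, gy), dim=-1)      # (B, H, W, 2) for grid_sample
    return F.grid_sample(feature, grid, align_corners=True)

def forward_correct(lesion_feats, first_forward_flows):
    """Recursion from the text: warp the (corrected) feature of image k
    by the k-th first forward flow to get the lesion repair feature of
    image k+1, then sum with image k+1's own first lesion feature."""
    corrected = [lesion_feats[0]]  # feature of the first image is unchanged
    for k in range(len(first_forward_flows)):
        repair = warp(corrected[k], first_forward_flows[k])
        corrected.append(lesion_feats[k + 1] + repair)  # first summation feature
    return corrected
```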
Likewise, correction by the first backward optical flow is possible. In the reverse order of the first images in the lung motion sequence images, the first lesion features of all first images except the first one in that reverse order may be corrected based on the first backward optical flows.
In the disclosed embodiments, the obtained first backward optical flow represents the optical flow between the predicted lesion positions (lesion regions) of two first images that are adjacent in the reverse order of the lung motion sequence images, i.e., the flow from the lesion region of the previous first image to that of the next in reverse order. The correction of the first lesion features can therefore be guided by the first backward optical flows; that is, the new lesion feature into which each first lesion feature evolves under the motion described by its first backward optical flow can be determined.
Specifically, the lesion repair features of the first images other than the first one in reverse order may be obtained from the first backward optical flows, where a lesion repair feature again represents the feature change induced by the optical flow. The process may include: in the reverse order of the lung motion sequence images, performing offset processing on the first lesion feature of the first image in reverse order according to the first first-backward optical flow to obtain the lesion repair feature of the second first image in reverse order; then summing the first lesion feature of the k-th first image in reverse order with the lesion repair feature of the k-th first image to obtain the first summation feature of the k-th first image, and performing offset processing on that k-th first summation feature according to the k-th first backward optical flow to obtain the lesion repair feature of the (k+1)-th first image; and so on, until the first summation features of all first images except the first one in reverse order are obtained. Here k is a positive integer greater than 1 and less than M, and M is the number of first images. The first summation features obtained are the corrected first lesion features of the respective first images.
When the first summation features obtained by the first forward optical flow correction, or those obtained by the first backward optical flow correction, are used on their own, each first summation feature may directly serve as the corrected first lesion feature. Note that forward correction leaves the first lesion feature of the first image unchanged, and backward correction leaves the first lesion feature of the first image in reverse order (i.e., the last image) unchanged.
In some preferred embodiments of the present disclosure, the first summation feature obtained by the first forward optical flow correction and the first summation feature obtained by the first backward optical flow correction may be further fused, so as to further optimize the corrected features.
In one example, the first summation feature from the first forward optical flow and the first summation feature from the first backward optical flow may be averaged to obtain an optimized first summation feature. Alternatively, in other embodiments, feature fusion may be achieved by a convolutional neural network: for each first image except the first and the last, the two first summation features obtained via the first forward and first backward optical flows are connected to obtain a connection feature, and convolution is applied to the connection feature to obtain a fusion feature. The fusion feature is the further-optimized result of the corrections in both directions. For example, taking the second first image: if the first summation feature obtained via the first forward optical flow is F1 and that obtained via the first backward optical flow is F2, then F1 and F2 may be connected in the depth direction to obtain a connection feature F3, and at least one convolutional layer applied to F3 yields a fusion feature with the same dimensions as F1 and F2. That fusion feature may serve as the final corrected first lesion feature of the second first image.
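A sketch of the convolutional fusion variant; the 32-channel width is an assumption carried over from the earlier backbone sketch, and fuse_conv is a hypothetical layer.

```python
import torch
import torch.nn as nn

fuse_conv = nn.Conv2d(2 * 32, 32, kernel_size=3, padding=1)  # hypothetical layer

def fuse_bidirectional(feat_forward, feat_backward):
    """Connect the two first summation features in the depth (channel)
    direction to form the connection feature F3, then convolve back to
    the original width so the fusion feature matches F1 and F2."""
    f3 = torch.cat((feat_forward, feat_backward), dim=1)  # (1, 64, H, W)
    return fuse_conv(f3)                                  # (1, 32, H, W)
```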
In other embodiments of the present disclosure, the forward optical flow and the backward optical flow may be optimized when the forward optical flow and the backward optical flow are obtained, and the first forward optical flow and the first backward optical flow may be determined by using the optimized optical flow, and the above-mentioned correction process of the first lesion feature may be performed. Alternatively, the first forward optical flow and the first backward optical flow may be directly optimized, and the correction process of the first lesion feature may be performed using the optimized optical flows. The optimization process will be described below by taking the first forward optical flow and the first backward optical flow as an example, and the optimization method for the forward optical flow and/or the backward optical flow is the same, and will not be described repeatedly.
Wherein performing optical flow optimization processing on the first forward optical flow and the first backward optical flow comprises: connecting each first forward optical flow between each first image in the lung motion sequence images to obtain a first connecting optical flow, and connecting each first backward optical flow to obtain a second connecting optical flow; respectively executing Q times of optical flow optimization processing on the first connection optical flow and the second connection optical flow to obtain a first optimized optical flow corresponding to the first connection optical flow and a second optimized optical flow corresponding to the second connection optical flow, wherein Q is a positive integer greater than or equal to 1; and obtaining a second forward optical flow corresponding to each first forward optical flow according to the first optimized optical flow, and obtaining a second reverse optical flow corresponding to each first reverse optical flow according to the second optimized optical flow.
In the disclosed embodiments, before optical flow optimization is performed, the first forward optical flows between the first images in the lung motion sequence images are connected, e.g., cascaded: the first forward optical flows are concatenated in the depth direction to form a first connection optical flow composed of multiple layers of first forward optical flows. Similarly, the first backward optical flows between the first images are concatenated in the depth direction to form a second connection optical flow composed of multiple layers of first backward optical flows.
After the first connection optical flow and the second connection optical flow are obtained, optical flow optimization processing may be performed on each of them. As described in the above embodiments, the disclosed embodiments may perform the optical flow optimization process at least once; each pass may be carried out by an optical flow optimization module built from a neural network, or by a corresponding algorithm. Correspondingly, when the optimization is performed Q times, the optical flow optimization module may include Q optical flow optimization network modules connected in sequence, where the input of each later module is the output of the preceding one, and the output of the last module is the optimization result for the first and second connection optical flows.
Specifically, when only one optical flow optimization network module is included, that module may be used to optimize the first connection optical flow to obtain a first optimized sub-optical flow corresponding to it, and to optimize the second connection optical flow to obtain a second optimized sub-optical flow corresponding to it. The optical flow optimization process may include residual processing and upsampling processing; that is, the module may comprise a residual unit and an upsampling unit. The residual unit performs residual processing on the input first or second connection optical flow and may include several convolutional layers, each with its own convolution kernel, which the disclosed embodiments do not specifically limit. The scale of the connection optical flow becomes smaller after residual processing, for example shrinking to one quarter of the input scale, though this too is not specifically limited and may be set as required. After the residual processing, upsampling is applied to the residual-processed first or second connection optical flow, restoring the scale of the output optimized sub-optical flow to the scale of the corresponding connection optical flow. This optimization process fuses the features of the individual optical flows and improves optical flow precision.
In other embodiments, the optical flow optimization module may include multiple optical flow optimization network modules, e.g., Q of them. The first module receives the first and second connection optical flows and performs the first optical flow optimization pass on them; this pass includes residual processing and upsampling processing, with the same specific procedure as in the above embodiment, not repeated here. The first pass yields the first optimized sub-optical flow of the first connection optical flow and the first optimized sub-optical flow of the second connection optical flow.
Further, these first optimized sub-optical flows are input to the second optical flow optimization network module, which performs the second optimization pass. The second module likewise includes a residual unit for residual processing and an upsampling unit for upsampling processing, and processes the two sub-optical flows in the same way as described above, not repeated here. The second pass yields the second optimized sub-optical flow of the first connection optical flow and the second optimized sub-optical flow of the second connection optical flow.
In general, each optical flow optimization network module performs one optimization pass: the (k+1)-th module performs the (k+1)-th pass on the k-th optimized sub-optical flows of the first and second connection optical flows, producing the (k+1)-th optimized sub-optical flow of each, where k is a positive integer greater than or equal to 1 and less than Q. The Q-th pass, performed by the Q-th module, yields the Q-th optimized sub-optical flow of the first connection optical flow and the Q-th optimized sub-optical flow of the second connection optical flow; these may be taken as the first optimized optical flow and the second optimized optical flow, respectively. In the disclosed embodiments, every module may perform the same residual-plus-upsampling optimization; that is, all the optical flow optimization network modules may be identical.
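A sketch of one such module and the Q-module cascade. Channel counts and the stride-2/upsample-2 scale choice are illustrative only; for M sets, a connection optical flow concatenating M-1 two-channel flows would have 2(M-1) channels.

```python
import torch
import torch.nn as nn

class FlowOptimizationModule(nn.Module):
    """One pass as described: a residual unit (convolutions that shrink
    the scale) followed by upsampling back to the input scale."""
    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)

    def forward(self, connected_flow):  # (B, 2*(M-1), H, W), H and W even
        return self.up(self.residual(connected_flow)) + connected_flow

def optimize(connected_flow, modules):
    """Cascade Q identical modules; each consumes the previous output."""
    out = connected_flow
    for module in modules:              # e.g. an nn.ModuleList of Q modules
        out = module(out)
    return out
```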
For example, Fig. 7 shows a schematic structural diagram of an optical flow optimization network according to an embodiment of the present disclosure, which may include three optical flow optimization network modules A, B, and C, each composed of a residual unit and an upsampling unit. The first module A performs the first optimization pass on the first connection optical flow f_0 and the second connection optical flow f_0', yielding the first optimized sub-optical flows f_1 and f_1'. These are input to the second module B, which performs the second pass and yields the second optimized sub-optical flows f_2 and f_2'. These in turn are input to the third module C, which performs the third pass and yields the third optimized sub-optical flows f_3 and f_3'. The third optimized sub-optical flow of the first connection optical flow produced by this last pass is taken as the first optimized optical flow, and that of the second connection optical flow as the second optimized optical flow.
After Q optical flow optimization processes, the resulting first optimized optical flow has the same scale as the first connection optical flow and may be split in the depth direction into a plurality of second forward optical flows (one second forward optical flow per layer), which correspond respectively to the optimization results of the first forward optical flows. Similarly, the second optimized optical flow obtained after the Q optimization processes has the same scale as the second connection optical flow and may be split in the depth direction into a plurality of second backward optical flows (one second backward optical flow per layer), which correspond respectively to the optimization results of the first backward optical flows.
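For illustration only, one such residual-plus-upsampling optimization module might be sketched as follows in Python (PyTorch). The channel count, kernel sizes, 2x upsampling factor and the value of Q are assumptions for the sketch; the present disclosure does not fix these details:

```python
import torch
import torch.nn as nn

class FlowOptimizationModule(nn.Module):
    # One optical flow optimization network module: a residual unit
    # followed by an upsampling unit. All sizes here are illustrative
    # assumptions, not values taken from the disclosure.
    def __init__(self, channels: int = 16):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                    align_corners=False)

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        refined = flow + self.residual(flow)  # residual processing
        return self.upsample(refined)         # upsampling processing

# Q identical modules applied in sequence to a connection optical flow;
# the input is assumed to start at 1/2**Q of the target scale so that
# the Q upsampling steps restore the scale of the connection optical flow.
Q = 3
modules = nn.ModuleList(FlowOptimizationModule() for _ in range(Q))

def optimize(connection_flow: torch.Tensor) -> torch.Tensor:
    for module in modules:
        connection_flow = module(connection_flow)
    return connection_flow
```

Applying `optimize` to the first and second connection optical flows would then yield the first and second optimized optical flows described above.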
Through the above embodiment, the second forward optical flows between the first images (the optimized first forward optical flows) and the second backward optical flows between the first images (the optimized first backward optical flows) can be obtained.
After the optimized optical flows are obtained, the second forward optical flows and the second backward optical flows may be used to correct the first lesion features of the first images in the lung motion sequence images, in forward order and/or backward order respectively, to obtain a correction result for each first lesion feature. For the specific process, refer to the correction of the first lesion features by the first forward optical flow and the first backward optical flow described above; it is not repeated here.
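The correction of a lesion feature by an optical flow is commonly implemented as warping the feature map along the flow field. Below is a minimal sketch of such a warp, assuming PyTorch's grid_sample as the resampling operator; the disclosure does not prescribe this exact operator, so the function and its parameters are illustrative:

```python
import torch
import torch.nn.functional as F

def warp_features(features: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # features: (N, C, H, W) lesion feature map; flow: (N, 2, H, W)
    # per-pixel (x, y) displacements between two first images.
    n, _, h, w = features.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(features)
    # Displace the grid by the flow, then normalize to [-1, 1] as
    # required by grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(features, norm_grid, align_corners=True)
```

The corrected first lesion feature of the (i+1)-th first image could then be obtained, for example, by fusing its own feature with the feature of the i-th first image warped by the forward optical flow, and analogously in the reverse order with the backward optical flow.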
Based on the above configuration, a correction result of the first lesion feature of the respective first images in each lung motion sequence can be obtained. In the case of obtaining a corrected first lesion feature of each first image, correction of the predicted position may be performed, i.e., a dynamic detection position of the lesion may be obtained based on the corrected first lesion feature.
Wherein the obtaining a dynamic detection position of the lesion based on the corrected first lesion feature comprises: and executing target detection processing based on the corrected first focus characteristics to obtain a dynamic detection position of the focus.
Similarly, the target detection processing may be implemented by a detection algorithm or a target detection neural network, which specifically refers to the description of the foregoing embodiments and is not repeated herein. In the embodiment of the present disclosure, the target detection processing is performed on the first lesion feature of each first image in the lung motion sequence, so that a lesion position corresponding to the first lesion feature can be more accurately extracted, a dynamic detection position is obtained, and the lesion position of each first image is updated and optimized. The dynamic detection position can also be expressed in a matrix or vector form, and is used for expressing a position area corresponding to the lesion area in the corresponding first image.
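As an illustration of the matrix or vector representation, the detected region of one first image could be expressed as a binary mask (matrix form) together with its bounding box (vector form); the helper below is hypothetical and only sketches this convention:

```python
import numpy as np

def position_representations(mask: np.ndarray):
    # mask: (H, W) array, nonzero inside the detected lesion region of
    # one first image. Returns the matrix form (binary mask) and a
    # vector form (x1, y1, x2, y2), or None if no lesion pixels exist.
    matrix = (mask > 0).astype(np.uint8)
    ys, xs = np.nonzero(matrix)
    vector = None
    if xs.size > 0:
        vector = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    return matrix, vector
```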
When the dynamic detection position of each first image in each lung motion sequence image is obtained, the dynamic detection position of the lesion in each group of lung images can be further obtained. As described in the above embodiment, each lung motion sequence is composed of the images at one and the same position within the lung images at the plurality of times; therefore, once the dynamic detection position of the lesion in each first image is obtained, the dynamic detection position of each corresponding first image in each lung image group follows directly. For example, the i-th lung image is formed from the i-th first images of the lung motion sequence images, so the dynamic detection positions of the lesion in those i-th first images are the dynamic detection positions of the lesion in the i-th lung image, where i is greater than or equal to 1 and less than or equal to the total number of lung images.
In a preferred embodiment of the present disclosure, when the dynamic detection position of the lesion in each group of lung images is obtained, the volume of the lesion may be calculated to obtain volume information of the lesion. For example, the areas of the regions at the dynamic detection position in each layer of the lung image may be summed and scaled by the layer spacing to estimate the total volume of the lesion.
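A minimal sketch of this volume estimate, assuming per-layer binary masks of the dynamic detection position and CT spacing metadata; the parameter names are illustrative assumptions:

```python
import numpy as np

def lesion_volume(masks: np.ndarray, pixel_area_mm2: float,
                  slice_spacing_mm: float) -> float:
    # masks: (num_layers, H, W) binary array, 1 inside the detected
    # lesion region of each layer. pixel_area_mm2 and slice_spacing_mm
    # would come from the CT metadata.
    area_per_layer = masks.reshape(masks.shape[0], -1).sum(axis=1)  # pixels
    return float(area_per_layer.sum() * pixel_area_mm2 * slice_spacing_mm)
```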
Alternatively, in another preferred embodiment, when the dynamic detection position of the lesion in each first image, or the dynamic detection position of the lesion in each group of lung images, is obtained, the dynamic detection position may be displayed in the first image or the lung images. Before the dynamic detection position is obtained, the predicted position may be displayed instead, thereby realizing real-time display of the lesion region.
Because the dynamic detection position in the embodiment of the present disclosure is obtained by further correcting the predicted position, the approximate position of the lesion may first be displayed via the predicted position while the correction is in progress; once the dynamic detection position is obtained through the correction, the dynamic detection position may be displayed instead. This reduces wasted waiting time and improves user experience.
In the case where the predicted position of the lesion in the lung image is obtained in step S10, the embodiment of the present disclosure may display the predicted position in the lung image in a first manner. When the predicted position in the lung image is corrected to obtain the dynamic detection position, the dynamic detection position is displayed in a second manner; the first manner and the second manner may be the same or different.
Displaying the predicted position of the lesion in the lung image in the first manner may include at least one of: displaying the predicted position in a prominent first color; marking the area of the predicted position with a detection box of a first form; hiding the image area outside the predicted position.
In one example, when a predicted position of a lung image (a predicted position of a lesion in each layer image in the lung image) is obtained, the predicted position in the lung image may be displayed in a color different from that of other image regions, for example, the lesion position may be displayed in red and regions other than the predicted position of the lesion may be displayed in black, but the present disclosure is not limited to this.
In one example, in the case where a predicted position of a lung image (a predicted position of a lesion in each layer image within the lung image) is obtained, the predicted position may be displayed using a detection frame of a preset shape. The detection frame may be rectangular, circular, or other shapes, which are not specifically limited by the present disclosure. In addition, the color of the detection frame can be set according to requirements.
In one example, in the case where a predicted position of a lung image (a predicted position of a lesion in each layer image within the lung image) is obtained, an image region other than the predicted position in the lung image may be hidden. In the present disclosure, in order not to affect the observation of the original lung image, the predicted position within the lung image may be displayed in the new display window while hiding the image region outside the predicted position, that is, only the image region of the predicted position is displayed in the new display window.
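For illustration, the three display options above (first-color highlighting, a first-form detection box, and hiding the remaining image area) could be rendered as in the following sketch; the concrete colors and the helper name are assumptions:

```python
import numpy as np

def display_predicted(lung_slice: np.ndarray, mask: np.ndarray,
                      mode: str = "color") -> np.ndarray:
    # lung_slice: (H, W) grayscale layer of the lung image;
    # mask: (H, W) binary predicted position of the lesion.
    rgb = np.stack([lung_slice] * 3, axis=-1).astype(np.float32)
    if mode == "color":            # highlight the predicted position in red
        rgb[mask > 0] = [255.0, 0.0, 0.0]
    elif mode == "box":            # mark the region with a rectangular box
        ys, xs = np.nonzero(mask)
        if xs.size:
            x1, x2, y1, y2 = xs.min(), xs.max(), ys.min(), ys.max()
            rgb[y1:y2 + 1, [x1, x2]] = [255.0, 0.0, 0.0]
            rgb[[y1, y2], x1:x2 + 1] = [255.0, 0.0, 0.0]
    elif mode == "hide":           # hide everything outside the prediction
        rgb[mask == 0] = 0.0
    return rgb.astype(np.uint8)
```

The same helper could render the dynamic detection position in the second manner by passing a second color or box form.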
In addition, since the lung image includes a plurality of layers of images, the embodiment of the present disclosure may display the predicted position of the image corresponding to the received layer number information according to the received layer number information, or may dynamically change and display the predicted position of each layer of image according to an animation format, so that the change of the predicted position of each layer of lesion may be observed conveniently. In addition, at least one of the plurality of lung images may be selected to display a predicted lesion position or a dynamic lesion detection position based on the received lung image selection information, which is not limited in the present disclosure.
Similarly, in the case where the predicted position in the lung image is corrected to obtain the dynamic detection position, the dynamic detection position of the lesion in the lung image may be displayed in the second manner. This may include at least one of: displaying the dynamic detection position in a prominent second color; marking the area of the dynamic detection position with a detection box of a second form; hiding the image area outside the dynamic detection position.
In one example, when the dynamic detection position of a lung image (the dynamic detection position of the lesion in each layer image within the lung image) is obtained, the dynamic detection position in the lung image may be displayed in a color different from that of other image regions, and this color may also differ from that of the predicted position; for example, the dynamic detection position may be displayed in blue and the remaining regions in black, although the present disclosure is not limited thereto. Alternatively, to reduce the visual jump between the predicted position and the dynamic detection position, the embodiments of the present disclosure may display both in the same color.
In one example, when the dynamic detection position of a lung image is obtained, the dynamic detection position may be displayed using a detection box of a preset shape. The detection box may be rectangular, circular, or another shape, which the present disclosure does not specifically limit, and its color may be set as required. The first form and the second form may differ in at least one of the shape and the color of the detection box. Alternatively, to reduce the visual jump when switching from the predicted position to the dynamic detection position, the two may be displayed in the same display state.
In one example, in the case where a dynamic detection position of a lung image (a dynamic detection position of a lesion in each layer image within the lung image) is obtained, an image region other than the dynamic detection position in the lung image may be hidden. In the present embodiment, in order not to affect the observation of the original lung image, the dynamic detection position in the lung image may be displayed in the new display window, while the image area other than the dynamic detection position is hidden, that is, only the image area of the dynamic detection position is displayed in the new display window.
Alternatively, when the image area outside the predicted position and/or the area outside the dynamic detection position is hidden, the embodiment of the present disclosure may switch from the full lung image to the display state in which only the predicted position or the dynamic detection position is shown in a gradual manner. The duration of the gradual change may be set, and the display state hiding the remaining area is reached at the end of the set duration.
Alternatively, in the embodiment of the present disclosure, the display may also switch from the predicted position to the dynamic detection position in a gradual manner. For example, when the highlight colors of the predicted position and the dynamic detection position differ, the display may fade from the first color to the second color; or the detection box corresponding to the predicted position may gradually change into the detection box of the dynamic detection position. These examples are illustrative and not intended as specific limitations of the present disclosure.
In the embodiment of the present disclosure, switching the display from the predicted position to the dynamic detection position in a gradual form includes: acquiring an initial value and a final value of a transition coefficient; controlling the transition coefficient to change from the initial value to the final value according to a preset step size, and determining the intermediate image corresponding to each transition coefficient in a preset manner, wherein when the transition coefficient is the initial value, the corresponding intermediate image is the image area of the predicted position, and when the transition coefficient is the final value, the corresponding intermediate image is the image area of the dynamic detection position; and displaying each intermediate image.
In an embodiment of the disclosure, the transition coefficient determines the intermediate states of the transformation from the lung image showing the predicted position to the lung image showing the dynamic detection position; for example, it can be used to determine the pixel value of each pixel at each step of the change. The initial value and final value of the transition coefficient may be stored in a database; for example, the initial value may be 0 and the final value 1, although the present disclosure is not limited thereto, and different initial and final values may be set.
After the initial value and the final value of the transition coefficient are obtained, the transition coefficient may be controlled to change from the initial value to the final value according to a preset step size, for example 0.01; that is, the transition coefficient is stepped from 0 to 1 in increments of 0.01. In the embodiment of the present disclosure, each value of the transition coefficient corresponds to one display state of the gradual display, i.e., one intermediate image. When the transition coefficient is the initial value, the corresponding intermediate image is the lung image displaying the predicted position; when the transition coefficient is the final value, the corresponding intermediate image is the lung image displaying the dynamic detection position. The intermediate image corresponding to each transition coefficient can thus be determined in a preset manner, expressed as follows:
I=I0*(1-b)+I1*b;
wherein I is the intermediate image, I0 is the lung image displaying the predicted position, I1 is the lung image displaying the dynamic detection position, and b is the transition coefficient.
That is, the new pixel values under each value of the transition coefficient are computed from the pixel values of the lung image displaying the predicted position and those of the lung image displaying the dynamic detection position, yielding each intermediate image and, finally, the lung image displaying the dynamic detection position.
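For instance, the gradual switching defined by the formula above can be sketched as a generator of intermediate images; the step size of 0.01 follows the example in the text, while frame pacing over the configured gradual-change duration is left to the display layer:

```python
import numpy as np

def crossfade(img_pred: np.ndarray, img_dyn: np.ndarray,
              step: float = 0.01):
    # Yield the intermediate images I = I0*(1-b) + I1*b for the
    # transition coefficient b stepping from 0 to 1.
    i0 = img_pred.astype(np.float32)   # lung image showing predicted position
    i1 = img_dyn.astype(np.float32)    # lung image showing dynamic position
    b = 0.0
    while b <= 1.0 + 1e-9:
        yield (i0 * (1.0 - b) + i1 * b).astype(img_pred.dtype)
        b += step
```

Each yielded frame would be shown in turn until the lung image displaying the dynamic detection position is reached.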
The above manner implements the gradual process from displaying the predicted position to displaying the dynamic detection position. In other embodiments, the gradual process may be implemented in other ways; for example, the predicted position may be changed into the dynamic detection position progressively along a first direction. The first direction may be from top to bottom, from left to right, or another direction, which the present disclosure does not specifically limit.
Based on the configuration of the above embodiments, the embodiments of the present disclosure may obtain lung images acquired at multiple times during a respiratory process and detect a predicted position of a lesion within those lung images. Corresponding images (first images) are then extracted from the same position in the lung images at the plurality of times to form a lung motion sequence image for that position, and the predicted positions are corrected in sequence according to the order of the images in the lung motion sequence, yielding corrected lesion positions, that is, dynamic detection positions. The lung motion sequence formed in the embodiments of the disclosure captures the motion of the same lung position at different times, so the predicted position of each first image in the sequence reflects the change of the lesion position over time. On this basis, the embodiments of the present disclosure correct the predicted positions using the change information of the predicted positions of the first images in time order, which improves the accuracy of lesion position detection and realizes dynamic detection of the lesion position at each moment of the respiratory process.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure also provides a dynamic lesion position detection apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any dynamic lesion position detection method provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated herein.
Fig. 8 is a block diagram of a dynamic lesion position detection apparatus according to an embodiment of the present disclosure, as shown in fig. 8, the dynamic lesion position detection apparatus includes:
the acquiring module 10 is configured to acquire predicted positions of focuses in a plurality of sets of lung images, where the plurality of sets of lung images are lung images acquired at multiple times in a respiratory process respectively;
an extracting module 20, configured to respectively extract first images at the same positions in the multiple sets of lung images, so as to obtain a lung motion sequence image at each of the same positions;
a detection module 30, configured to sequentially correct the predicted position of each first image according to the order of the first images in the lung motion sequence, to obtain the dynamic detection position of the lesion; wherein the predicted position of the (i+1)-th first image is corrected using the dynamic detection position of the i-th first image in the lung motion sequence, to obtain the dynamic detection position of the (i+1)-th first image.
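A hedged sketch of how these three modules might be chained; the predict and correct callables stand in for the acquiring module and the correction logic of the detection module, and are assumptions rather than the disclosed implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence
import numpy as np

@dataclass
class LesionPositionDetector:
    # Sketch of the apparatus of fig. 8. The patent defines the modules
    # functionally; these function fields are illustrative stand-ins.
    predict: Callable[[np.ndarray], np.ndarray]               # acquiring module 10
    correct: Callable[[np.ndarray, np.ndarray], np.ndarray]   # detection module 30

    def extract_sequences(self, lung_groups: Sequence[np.ndarray]) -> List[np.ndarray]:
        # Extraction module 20: the j-th layer of every lung image group
        # forms the lung motion sequence for position j.
        num_layers = lung_groups[0].shape[0]
        return [np.stack([g[j] for g in lung_groups]) for j in range(num_layers)]

    def detect(self, lung_groups: Sequence[np.ndarray]) -> List[List[np.ndarray]]:
        results = []
        for seq in self.extract_sequences(lung_groups):
            # The first image has no predecessor, so its predicted
            # position is taken as its dynamic detection position.
            positions = [self.predict(seq[0])]
            # Correct the (i+1)-th predicted position using the i-th
            # dynamic detection position, in sequence order.
            for i in range(1, len(seq)):
                pred = self.predict(seq[i])
                positions.append(self.correct(pred, positions[i - 1]))
            results.append(positions)
        return results
```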
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 9 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 9, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 10 shows a block diagram of another electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 10, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions and can execute the computer-readable program instructions, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A dynamic detection method for lesion positions is characterized by comprising the following steps:
acquiring the predicted positions of a plurality of groups of lung images aiming at the focus, wherein the plurality of groups of lung images are respectively acquired at multiple moments in the breathing process;
respectively extracting first images at the same position in the multiple groups of lung images to obtain a lung motion sequence image at each same position;
sequentially correcting the predicted positions of the first images according to the sequence of the first images in the lung motion sequence to obtain the dynamic detection position of the focus; and correcting the predicted position of the (i+1)-th first image by using the dynamic detection position of the i-th first image in the lung motion sequence to obtain the dynamic detection position of the (i+1)-th first image.
2. The method according to claim 1, wherein said separately extracting the first images at the same positions of the plurality of sets of lung images to obtain the lung motion sequence image at each of the same positions comprises:
determining the number of layers of the plurality of sets of lung images;
determining first images at the same position in the multiple groups of lung images according to the number of layers;
and obtaining the lung motion sequence image corresponding to the same position according to the first image positioned at the same position in each group of lung images.
3. The method according to claim 1 or 2, wherein said sequentially correcting said predicted positions in the order of the first image in said lung motion sequence to obtain a dynamic detected position of the lesion comprises:
obtaining a first lesion feature corresponding to the predicted position in the first image;
sequentially correcting each first focus characteristic according to the sequence of the first images in the lung motion sequence;
and obtaining a dynamic detection position of the focus based on the corrected first focus characteristic.
4. The method of claim 3, wherein said sequentially correcting each of said first lesion features in the order of the first images in said sequence of lung motion comprises:
obtaining optical flow between the first images in the lung motion sequence;
sequentially correcting the first focus features by using the optical flow according to the sequence of the first images in the lung motion sequence;
wherein the optical flow comprises a forward optical flow obtained in a forward order of the first image in the lung motion sequence and/or a backward optical flow obtained in a backward order of the first image in the lung motion sequence.
5. The method of claim 4, wherein obtaining optical flow between the first images in the sequence of lung motion comprises:
and performing optical flow estimation on each first image in the input lung motion sequence by using an optical flow estimation model to obtain optical flow between each first image in the lung motion sequence.
6. The method of claim 4 or 5, wherein the sequentially correcting the first lesion feature using the optical flow in the order of the first image in the lung motion sequence comprises at least one of:
according to the forward sequence of the first images in the lung motion sequence, sequentially correcting the first focus characteristics of the (i+1)-th first image according to a forward optical flow between the i-th and (i+1)-th first images;
and according to the reverse order of the first images in the lung motion sequence, sequentially correcting the first focus characteristics of the (i+1)-th first image according to a reverse optical flow between the i-th and (i+1)-th first images.
7. The method of claim 6, wherein the sequentially correcting the first lesion feature using the optical flow in an order of a first image in the lung motion sequence further comprises:
performing feature fusion processing on a first focus feature obtained through the forward optical flow correction and a first focus feature obtained through the reverse optical flow correction; and/or
And performing optical flow optimization processing on the forward optical flow and/or the backward optical flow, and correcting the first focus feature by using the optimized optical flow.
8. A dynamic focal position detection device, comprising:
the acquisition module is used for acquiring the predicted positions of focuses in a plurality of groups of lung images, wherein the plurality of groups of lung images are respectively acquired at multiple moments in the breathing process;
the extraction module is used for respectively extracting first images at the same positions in the multiple groups of lung images to obtain lung motion sequence images at each same position;
the detection module is used for sequentially correcting the predicted positions of the first images according to the sequence of the first images in the lung motion sequence to obtain the dynamic detection position of the focus; and correcting the predicted position of the (i+1)-th first image by using the dynamic detection position of the i-th first image in the lung motion sequence to obtain the dynamic detection position of the (i+1)-th first image.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 7.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 7.
CN202010534757.4A 2020-06-12 2020-06-12 Method and device for dynamically detecting focus position, electronic equipment and storage medium Active CN111738998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010534757.4A CN111738998B (en) 2020-06-12 2020-06-12 Method and device for dynamically detecting focus position, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111738998A true CN111738998A (en) 2020-10-02
CN111738998B CN111738998B (en) 2023-06-23

Family

ID=72648922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010534757.4A Active CN111738998B (en) 2020-06-12 2020-06-12 Method and device for dynamically detecting focus position, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111738998B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009042637A2 (en) * 2007-09-24 2009-04-02 Oregon Health & Science University Non-invasive location and tracking of tumors and other tissues for radiation therapy
US20090185731A1 (en) * 2008-01-23 2009-07-23 Carestream Health, Inc. Method for lung lesion location identification
CN103761745A (en) * 2013-07-31 2014-04-30 深圳大学 Estimation method and system for lung motion model
US20150332454A1 (en) * 2014-05-15 2015-11-19 Vida Diagnostics, Inc. Visualization and quantification of lung disease utilizing image registration
CN106780460A (en) * 2016-12-13 2017-05-31 杭州健培科技有限公司 A kind of Lung neoplasm automatic checkout system for chest CT image
CN109789314A (en) * 2017-07-28 2019-05-21 西安大医集团有限公司 Tumour method for tracing and device, storage medium
US20190102893A1 (en) * 2017-10-03 2019-04-04 Konica Minolta, Inc. Dynamic image processing apparatus
CN108898588A (en) * 2018-06-22 2018-11-27 中山仰视科技有限公司 Therapeutic effect appraisal procedure based on time series, electronic equipment
CN108924556A (en) * 2018-06-27 2018-11-30 戴建荣 Handle method, apparatus, equipment and the storage medium of tomoscan image
CN109816611A (en) * 2019-01-31 2019-05-28 北京市商汤科技开发有限公司 Video repairing method and device, electronic equipment and storage medium
CN109886243A (en) * 2019-03-01 2019-06-14 腾讯科技(深圳)有限公司 Image processing method, device, storage medium, equipment and system
CN110060262A (en) * 2019-04-18 2019-07-26 北京市商汤科技开发有限公司 A kind of image partition method and device, electronic equipment and storage medium
CN111047609A (en) * 2020-03-13 2020-04-21 北京深睿博联科技有限责任公司 Pneumonia focus segmentation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YOUNGKYOO HWANG ET AL.: "Ultrasound image-based respiratory motion tracking", Proceedings of SPIE Medical Imaging 2012: Ultrasonic Imaging, Tomography, and Therapy, vol. 8320, pages 1-7 *
SUN Changjian: "Research on key technologies in four-dimensional reconstruction and segmentation of medical images", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences, no. 2, pages 060-2 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258550A (en) * 2020-12-08 2021-01-22 萱闱(北京)生物科技有限公司 Movement direction monitoring method, medium and device of terminal equipment and computing equipment

Also Published As

Publication number Publication date
CN111738998B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109829920B (en) Image processing method and device, electronic equipment and storage medium
CN109978886B (en) Image processing method and device, electronic equipment and storage medium
JP2022515722A (en) Image segmentation methods and devices, electronic devices and storage media
CN111724361B (en) Method and device for displaying focus in real time, electronic equipment and storage medium
CN112541928A (en) Network training method and device, image segmentation method and device and electronic equipment
TWI765404B (en) Interactive display method for image positioning, electronic device and computer-readable storage medium
CN112967291B (en) Image processing method and device, electronic equipment and storage medium
CN110705626A (en) Image processing method and device, electronic equipment and storage medium
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN112070763A (en) Image data processing method and device, electronic equipment and storage medium
CN110852325B (en) Image segmentation method and device, electronic equipment and storage medium
CN111738998B (en) Method and device for dynamically detecting focus position, electronic equipment and storage medium
US20220301220A1 (en) Method and device for displaying target object, electronic device, and storage medium
CN113554642B (en) Focus robust brain region positioning method and device, electronic equipment and storage medium
EP3916683A1 (en) Method and apparatus for displaying an image, electronic device and computer-readable storage medium
CN111798498A (en) Image processing method and device, electronic equipment and storage medium
CN112686867A (en) Medical image recognition method and device, electronic equipment and storage medium
CN112397198A (en) Image processing method and device, electronic equipment and storage medium
CN113553460B (en) Image retrieval method and device, electronic device and storage medium
CN113034437A (en) Video processing method and device, electronic equipment and storage medium
CN115171873A (en) Method and device for identifying chronic obstructive pulmonary disease, electronic equipment and storage medium
CN112633203A (en) Key point detection method and device, electronic equipment and storage medium
CN116012661A (en) Action recognition method, device, storage medium and terminal
CN112114948A (en) Data loading method and device, electronic equipment and storage medium
CN114972200A (en) Classification method and device for chronic obstructive pulmonary disease, electronic equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant