CN113450345A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Info

Publication number: CN113450345A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, target, sequence, range, determining
Application number: CN202110815707.8A
Other languages: Chinese (zh)
Inventor: 赵喜
Current Assignee: Siemens Digital Medical Technology Shanghai Co Ltd
Original Assignee: Siemens Digital Medical Technology Shanghai Co Ltd
Application filed by Siemens Digital Medical Technology Shanghai Co Ltd
Priority to CN202110815707.8A
Publication of CN113450345A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30061: Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method provided by the present disclosure may include: acquiring a computed tomography (CT) image sequence; projecting the CT image sequence to obtain a volume image corresponding to the CT image sequence; performing target detection processing on the volume image to determine a target image range in the volume image; and determining a target image sequence from the CT image sequence based on the target image range. Embodiments provided by the present disclosure enable rapid preprocessing of CT image sequences.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical field of medical devices, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
During a computed tomography (CT) examination, a greater number of CT images and/or images with a larger field of view may be acquired to ensure that the image of the target region is captured in its entirety. During the subsequent printing process, the operator selects the required content to print according to the actual situation.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
In view of the above, aspects of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided an image processing method including: acquiring a computed tomography (CT) image sequence; projecting the CT image sequence to obtain a volume image corresponding to the CT image sequence; performing target detection processing on the volume image to determine a target image range in the volume image; and determining a target image sequence from the CT image sequence based on the target image range.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: an image sequence acquisition unit configured to acquire a computed tomography CT image sequence; a volumetric image determination unit configured to project the sequence of CT images to obtain volumetric images corresponding to the sequence of CT images; a target range detection unit configured to perform target detection processing on the volume image to determine a target image range in the volume image; and a target sequence determination unit configured to determine a target image sequence from the CT image sequence based on the target image range.
According to still another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program which, when executed by the at least one processor, implements the method as previously described.
According to yet another aspect of the disclosure, a non-transitory computer readable storage medium is provided storing a computer program, wherein the computer program, when executed by a processor, implements the method as previously described.
According to yet another aspect of the disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the method as described before when executed by a processor.
By performing target detection on the volume images corresponding to a CT image sequence to determine a target image sequence within it, fast preprocessing of the CT image sequence may be achieved.
Drawings
The above and other features and advantages of the present invention will become more apparent to those of ordinary skill in the art by describing in detail embodiments thereof with reference to the attached drawings in which:
fig. 1 shows an exemplary flow chart of an image processing method according to the present disclosure;
FIG. 2 illustrates an exemplary flow diagram for determining a target image range according to an embodiment of the disclosure;
fig. 3 illustrates an exemplary process of an image processing method according to an embodiment of the present disclosure;
fig. 4 shows an exemplary block diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout.
"exemplary" means "serving as an example, instance, or illustration" herein, and any illustration, embodiment, or steps described as "exemplary" herein should not be construed as a preferred or advantageous alternative.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled.
In this document, "one" covers not only the case of "only one" but also the case of "more than one". In this document, "first", "second", and the like are used only to distinguish one element from another; they do not indicate importance or order, nor do they imply that the elements depend on one another.
Computed tomography (CT) is a process of performing cross-sectional scans, one after another, around a region of the human body with a precisely collimated X-ray beam. CT systems implementing computed tomography are based on volumetric data acquisition, in which an X-ray tube and a detector rotate around the patient to collect transmission data from a volume of tissue.
In the CT image acquisition process, to ensure that complete data can be acquired in one scanning operation, the operator usually sets the scanning parameters so that the image sequence acquired by one scan covers a larger range than actually required. For example, when scanning a patient's lungs, a complete lung image may be acquired by scanning from a distance above the lungs to a distance below them. Similarly, a larger field of view (FOV) may be set to acquire an image of the lungs at a full viewing angle.
After the original image is acquired, the operator may pre-process the original image sequence while printing the image. For example, the operator may delete CT images that do not belong to the region to be observed (e.g., the lung) depending on the actual situation. For another example, the operator may zoom the acquired original image to facilitate subsequent diagnosis.
However, in the current printing process, the operator can only rely on experience and anatomical knowledge to manually judge which of the acquired tomographic images belong to the portion to be observed, and must manually screen and zoom the image sequence. This greatly increases the workload of the printing process, and, because the screening depends on the operator's personal experience, it cannot provide standardized preprocessing results.
In order to solve the above problem, the present disclosure provides a new image processing method.
Fig. 1 shows an exemplary flowchart of an image processing method according to the present disclosure.
As shown in fig. 1, in step S102, a computed tomography (CT) image sequence may be acquired. The CT image sequence acquired in step S102 is a sequence of tomographic images generated from the original CT scan data.
In step S104, the CT image sequence acquired in step S102 may be projected to obtain a volume image corresponding to the CT image sequence. For example, the volume image may be obtained by projecting the CT image sequence onto the frontal (coronal) plane. An example of a volume image projected in the frontal plane is described below in conjunction with fig. 3.
In addition, the CT image sequence may be processed using various reconstruction techniques to obtain the corresponding volume image. For example, the volume image may also be obtained by processing the CT image sequence using backprojection reconstruction, two-dimensional Fourier reconstruction, filtered backprojection, multi-planar reconstruction (MPR), maximum intensity projection (MIP), or minimum intensity projection (MinIP). The particular manner in which the volume image is obtained is not limited herein.
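As an illustrative sketch (not part of the disclosure), the frontal-plane projection of step S104 can be emulated with NumPy by collapsing one axis of the slice stack; the `mode` names mirror the MIP/MinIP techniques mentioned above:

```python
import numpy as np

def project_coronal(ct_sequence, mode="mip"):
    """Project a stack of axial CT slices onto the frontal (coronal) plane.

    ct_sequence has shape (num_slices, rows, cols); collapsing the
    anterior-posterior axis (axis=1) leaves one projection row per
    slice, so the vertical axis of the result tracks slice position.
    """
    volume = np.asarray(ct_sequence, dtype=np.float32)
    if mode == "mip":            # maximum intensity projection
        return volume.max(axis=1)
    if mode == "minip":          # minimum intensity projection
        return volume.min(axis=1)
    return volume.mean(axis=1)   # plain average projection
```

For a 40-slice scan of 512x512 images, the projection is a 40x512 two-dimensional image, analogous to the volume image 320 shown in fig. 3.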
In step S106, the volume image obtained in step S104 may be subjected to object detection processing to determine an object image range in the volume image. The target image range refers to a range of CT images actually required.
The target image range determined in step S106 may correspond to a graphic region in the volume image. The region used to represent the target image range may be any geometric figure, such as a rectangle or a circle. In the embodiments of the present disclosure, the principle is described with the target image range represented by a rectangle; however, those skilled in the art will appreciate that other geometric figures may be used to represent the target image range as appropriate.
In some embodiments, the volumetric images may be processed using artificial intelligence based methods to determine target detection results in the volumetric images.
In artificial intelligence based methods, the model may be trained such that the trained model is able to recognize a predetermined type of object appearing in the image. The model referred to herein may be a neural network based model or any other model that may be used in any possible artificial intelligence based approach.
In some examples, in a supervised learning approach, the images in the training image set may be labeled in advance by manual annotation to serve as ground truth. During training, the error between the output produced by the model with its current parameters and the ground truth is computed, and the parameters of the model are adjusted based on this error, so that the model learns parameters that can effectively identify the target.
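The error-driven parameter adjustment described above can be illustrated with a deliberately tiny stand-in model; a single linear unit replaces the neural network purely for illustration, but the comparison against labeled ground truth and the per-parameter update are the same in spirit (all names and values here are illustrative):

```python
# Toy stand-in for the supervised training scheme: compare the model's
# output against manually labeled ground truth and nudge each parameter
# against the error (stochastic gradient descent on squared error).
def train_linear(xs, ys, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y_true in zip(xs, ys):
            err = (w * x + b) - y_true   # error vs. labeled data
            w -= lr * err * x            # gradient step per parameter
            b -= lr * err
    return w, b
```

Trained on points drawn from y = 2x + 1, the parameters converge close to w = 2, b = 1.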
In embodiments provided by the present disclosure, volumetric images may be processed using a target recognition algorithm, such as R-CNN, SSD, or the like, to achieve target detection. The specific algorithm for target detection is not limited herein.
A target image range may be determined in the volume image based on the results of the target detection. The target image range may include designated target content, such as a designated physiological region (chest, abdomen, etc.).
In some implementations, the results of the target detection may be indicative of at least one structural feature present in the volumetric image. The structural feature may be indicative of at least one anatomical physiological feature present in the CT image. A target image range in the volumetric image containing the specified target content may be determined based on the location of the structural feature. For example, for a specified target content, the positions of a plurality of feature structures related to the target content in the image may be determined, and a range including the plurality of feature structures may be determined as the target image range.
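A minimal sketch of this step, assuming the detector returns labeled (row, column) feature positions; the feature names and the mapping from target content to relevant features are hypothetical, not taken from the disclosure:

```python
import numpy as np

# Hypothetical mapping from target content to the structural features
# that bound it; a real system would define this per anatomical region.
TARGET_FEATURES = {
    "lung": {"lung_apex", "lung_base"},
    "chest": {"shoulder_joint", "clavicle", "lung_base"},
}

def target_image_range(detections, target_content, margin=0):
    """detections: iterable of (label, row, col) tuples.

    Returns the (top, left, bottom, right) rectangle enclosing every
    feature relevant to the requested target content, padded by margin.
    """
    wanted = TARGET_FEATURES[target_content]
    pts = np.array([(r, c) for label, r, c in detections if label in wanted])
    top, left = pts.min(axis=0) - margin
    bottom, right = pts.max(axis=0) + margin
    return int(top), int(left), int(bottom), int(right)
```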
In other implementations, the artificial intelligence based model may also be trained to directly output a target image range containing the specified target content. For example, through manual annotation, the model may learn to identify the specified target content included in an image and to output a target image range that contains it.
In step S108, a target image sequence may be determined from the CT image sequence based on the target image range determined in step S106.
It will be appreciated that volumetric images are images generated by projecting a sequence of CT images. Thus, after the target image range is determined in the volumetric image, a target image sequence of the CT image sequence corresponding to the target image range may be determined based on a reverse operation of projecting the CT image sequence.
In some embodiments, at least one of the CT images in the sequence of CT images that is not within the range of the target image may be deleted and the remaining sequence of CT images may be determined as the sequence of target images. For example, taking the target image range as the range of the lung region from the lung apex to the lung base of the patient as an example, images outside the range from the lung apex to the lung base in the CT image sequence may be deleted, and the remaining CT image sequence may be determined as the target image sequence.
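The reverse mapping and deletion step can be sketched as follows, under the simplifying assumption (used in this sketch only) that the volume image was built with one projection row per axial slice, so the vertical extent of the target image range maps directly onto slice indices:

```python
def select_target_sequence(ct_sequence, target_range):
    """Keep only the CT slices whose index falls inside the vertical
    extent of the target image range; everything else is dropped."""
    top, left, bottom, right = target_range
    first = max(int(top), 0)
    last = min(int(bottom), len(ct_sequence) - 1)
    return ct_sequence[first:last + 1]
```

With a lung range of (3, ..., 6, ...) on a ten-slice sequence, only slices 3 through 6 remain as the target image sequence.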
In some implementations, the number of images in the target image sequence can be determined in response to a user setting. For example, the maximum or minimum number of images in the target image sequence may be determined based on user settings.
As mentioned earlier, a larger FOV than actually needed may be set during the scan. In this case, the image content to be actually observed occupies a small proportion of the CT image, and the image may include unnecessary blank regions. Thus, in some implementations, the CT images in the remaining CT image sequence may be scaled, and the scaled CT images may be determined as the target image sequence. In the scaled CT images, the content acquired by the CT scan is enlarged.
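One possible sketch of the scaling step, again with illustrative choices: each remaining slice is cropped to the horizontal extent of the target image range and stretched back to its original width with nearest-neighbour resampling (a production system would use proper interpolation and preserve aspect ratio):

```python
import numpy as np

def zoom_slices(slices, left, right):
    """Crop each slice to columns [left, right] and enlarge the crop
    back to the original column count, so the scanned content fills
    more of the printed image."""
    out = []
    for img in slices:
        crop = img[:, left:right + 1]
        # nearest-neighbour indices stretching the crop to full width
        idx = np.arange(img.shape[1]) * crop.shape[1] // img.shape[1]
        out.append(crop[:, idx])
    return out
```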
After determining the target image sequence in the CT image sequence, the target image sequence may be sent to a printing module to obtain a final print result.
With the method provided by the embodiments of the present disclosure, the acquired CT image sequence can be preprocessed using target detection to screen out the target image sequence corresponding to the target observation region, which improves the efficiency of preprocessing the CT image sequence. In addition, performing target detection on the volume image corresponding to the CT image sequence allows the position of the target image range to be detected accurately, improving the accuracy of the target image sequence and avoiding inconsistencies in the screened target image sequences caused by differences in operators' personal experience.
Fig. 2 illustrates an exemplary flow diagram for determining a target image range according to an embodiment of the present disclosure. Step S106 described in fig. 1 may be implemented using the method 200 shown in fig. 2. In the example shown in fig. 2, the volume image may be subjected to target detection processing using a neural network model. The neural network model may be composed of structures such as convolutional layers, activation layers, and fully connected layers. The trained neural network model includes the parameters used in these layers.
In step S202, the volumetric images may be input into a trained neural network model for the target detection task. In some embodiments, the volumetric image may be scaled to adjust the volumetric image to a size that meets the input requirements of the neural network model.
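A minimal nearest-neighbour resize in the same spirit as the scaling mentioned above; the 224x224 default below is an illustrative assumption about the network's input size, not something specified by the disclosure:

```python
import numpy as np

def resize_to_input(image, out_rows=224, out_cols=224):
    """Nearest-neighbour resize of a 2D volume image to the fixed
    input size expected by the detection network."""
    rows = np.arange(out_rows) * image.shape[0] // out_rows
    cols = np.arange(out_cols) * image.shape[1] // out_cols
    return image[np.ix_(rows, cols)]
```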
In step S204, an output result of the neural network model may be acquired. Various image features in the volume image can be extracted by using the trained neural network, and a target object (such as a structural feature) existing in the volume image is detected by using the extracted image features.
In some implementations, the output of the neural network model may indicate the location and label of at least one structural feature in the volume image. For example, for a volume image obtained from a chest scan, structural features such as the shoulder joints, clavicles, ribs, costal angles, lung apices, and lung bases can be detected by the neural network, and the locations of the identified structural features in the volume image, together with their classification labels (e.g., "lung apex", "lung base"), can be output.
In step S206, a target image range may be determined based on the output result of the neural network model. The target image range may be determined based on the positions of the structural features indicated by the output of the neural network.
In some embodiments, the target content for the target image range may be determined; the position of the target content in the volume image may then be determined based on the at least one structural feature; and the target image range may be determined based on the position of the target content in the volume image.
The target content for the target image range may be determined in response to a user input, or from default target content stored in advance.
In some examples, a user interaction interface may be provided to a user for the user to input information specifying target content during a CT image scan. In some examples, the user may provide user input through the user interaction interface to specify that the target content is the patient's chest, upper abdomen, or lower abdomen. In other examples, the user may also specify, through the user interaction interface, that the target content includes only bone or both bone and soft tissue. Illustratively, the user may be provided with at least one option for indicating the target content for the user to select the target content.
In other examples, the target content may be pre-stored in the storage device as default target content. When the image processing method according to the embodiment of the present disclosure is executed, it is possible to execute the image processing by reading default target content stored in advance in the storage device.
The location of the target content in the image may be determined based on at least one structural feature included in the neural network model output.
The output of the neural network model may include a number of structural features present in the patient's chest, such as the shoulder joints, clavicles, ribs, costal angles, lung apices, and lung bases. A target structural feature corresponding to the target content may be selected from the at least one structural feature, and the position of the target content in the volume image may be determined based on the position of the target structural feature. For example, if the target content is determined to be the patient's lungs, the lung apex and lung base may be selected as the target structural features; the region between the lung apex and the lung base in the volume image then corresponds to the target content (the patient's lungs).
Based on the location of the target content in the volumetric image, a target image range may be determined. For example, a range within a rectangle surrounding the target content may be determined as the target image range.
Target detection is performed on the volume image using the trained neural network model, so that predetermined targets in the volume image can be identified based on the pre-training. With such a target detection method, the operator no longer has to judge the target image range from the contents of the tomographic images by experience alone. Because the projected volume images are identified using an artificial intelligence based method, the positions of the obtained structural features are more standardized and do not depend on the operator's personal experience, so the target image ranges determined for different CT image sequences are more consistent.
Fig. 3 illustrates an exemplary process of an image processing method according to an embodiment of the present disclosure.
As shown in FIG. 3, sequence 310 illustrates a sequence of CT images obtained from a patient scanned by a CT machine. Only a portion, but not all, of the CT image sequence 310 is shown in fig. 3 by way of example.
By projecting the sequence 310, a volume image 320 may be obtained. The volume image 320 is a two-dimensional image corresponding to the tomographic images in the sequence 310.
Object detection may be performed on the volumetric image 320 to obtain a plurality of structural features in the volumetric image. The volumetric image 330 represents a volumetric image that has undergone a target recognition process and indicates structural features detected in the image with a "+" mark, where each structural feature indicates a physiological feature point of the scanned portion of the patient.
Based on a user selection or default settings, target content (e.g., lungs) for screening the CT image sequence may be determined, and at least one of the detected structural features that is indicative of a location of the target content may be selected based on the target content (see "+" mark in image 340). A rectangle encompassing all of the target structure features may be determined as the target image range in the volumetric image.
Based on the target image range determined in image 340, a final target sequence 350 may be determined from sequence 310. The target sequence 350 is a subset of the sequence 310; only a portion of the target sequence 350, not all of it, is illustratively shown in fig. 3.
Fig. 4 illustrates an exemplary block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the image processing apparatus 400 may include an image sequence acquisition unit 410, a volume image determination unit 420, a target range detection unit 430, and a target sequence determination unit 440.
The image sequence acquisition unit 410 may be configured to acquire a computed tomography CT image sequence. The volumetric image determination unit 420 may be configured to project the sequence of CT images to obtain volumetric images corresponding to the sequence of CT images. The target range detection unit 430 may be configured to perform a target detection process on the volumetric image to determine a target image range in the volumetric image. The target sequence determination unit 440 may be configured to determine a target image sequence from the CT image sequence based on the target image range.
The operations of the units 410-440 of the image processing apparatus 400 are similar to the operations of steps S102-S108 described above and are not repeated here. In some embodiments, the image processing apparatus 400 may further include an output unit (not shown) configured to send the target image sequence to a printing module for printing.

According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program which, when executed by the at least one processor, implements the image processing method described above.
In some embodiments, the electronic device may include a computed tomography system.
According to another aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program realizes the above method when executed by a processor.
According to another aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program realizes the above method when executed by a processor.
Referring to fig. 5, a block diagram of an electronic device 500, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The term "electronic device" is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to one another by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506, an output unit 507, a storage unit 508, and a communication unit 509. The input unit 506 may be any type of device capable of inputting information to the device 500; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 507 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 508 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the respective methods and processes described above, such as the image processing method according to the embodiments of the present disclosure. For example, in some embodiments, methods according to embodiments of the present disclosure may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured by any other suitable means (e.g., by means of firmware) to perform the methods of the embodiments of the present disclosure.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order; no limitation is imposed herein, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It should be appreciated that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
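As an illustration of the image processing method described above (projecting a CT image sequence into a volume image, performing target detection on the projection to determine a target image range, and determining a target image sequence from that range), the pipeline can be sketched minimally as follows. The patent does not disclose a specific projection algorithm or the architecture of the trained neural network model, so this sketch substitutes a maximum-intensity projection and a simple intensity-threshold detector as stand-ins; all function names here are hypothetical.

```python
import numpy as np

def maximum_intensity_projection(ct_volume):
    """Collapse a stack of axial CT slices, shaped (z, y, x), into a
    single coronal projection image shaped (z, x) by taking the
    maximum intensity along the y axis (an assumed projection; the
    patent does not specify one)."""
    return ct_volume.max(axis=1)

def detect_target_range(projection, threshold):
    """Stand-in for the trained detection network: return the first
    and last projection rows (slice indices) whose maximum intensity
    exceeds `threshold`."""
    hits = np.where(projection.max(axis=1) > threshold)[0]
    if hits.size == 0:
        raise ValueError("no slice exceeds the detection threshold")
    return int(hits[0]), int(hits[-1])

def crop_ct_sequence(ct_volume, threshold=0.5):
    """Project the sequence, detect the target image range, and delete
    the CT images outside that range, keeping the rest as the target
    image sequence."""
    projection = maximum_intensity_projection(ct_volume)
    z0, z1 = detect_target_range(projection, threshold)
    return ct_volume[z0:z1 + 1]

# Synthetic example: a 10-slice volume whose "anatomy" occupies slices 3-6.
volume = np.zeros((10, 8, 8))
volume[3:7, 2:5, 2:5] = 1.0
cropped = crop_ct_sequence(volume)
print(cropped.shape[0])  # 4 slices remain
```

In a real system the threshold detector would be replaced by the trained object-detection model of claim 2, whose output positions and labels determine the slice range to keep.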

Claims (16)

1. An image processing method comprising:
acquiring a computed tomography (CT) image sequence;
projecting the CT image sequence to obtain a volume image corresponding to the CT image sequence;
performing target detection processing on the volume image to determine a target image range in the volume image; and
determining a target image sequence from the CT image sequence based on the target image range.
2. The image processing method of claim 1, wherein the performing the object detection process on the volumetric image to determine the target image range in the volumetric image comprises:
inputting the volumetric image into a trained neural network model for a target detection task;
obtaining an output result of the neural network model; and
determining the target image range based on an output result of the neural network model.
3. The image processing method of claim 2, wherein the output result indicates a location and a label of at least one structural feature in the volumetric image,
determining the target image range based on the output result of the neural network model includes:
determining target content for the target image range;
determining a location of the target content in the volumetric image based on the at least one structural feature;
the target image range is determined based on a position of the target content in the volumetric image.
4. The image processing method of claim 3, wherein determining target content for the target image range comprises:
determining the target content in response to a user input; or
acquiring pre-stored default target content.
5. The image processing method of claim 3, wherein determining the location of the target content in the volumetric image based on the at least one structural feature comprises:
selecting a target structural feature corresponding to the target content from the at least one structural feature;
and determining the location of the target content in the volumetric image based on the location of the target structural feature.
6. The image processing method of any of claims 1 to 5, wherein determining a target image sequence from the CT image sequence based on the target image range comprises:
deleting CT images in the CT image sequence that are not within the target image range; and
determining the remaining CT image sequence as the target image sequence.
7. The image processing method of claim 6, wherein determining the remaining CT image sequence as the target image sequence comprises:
scaling the CT images in the remaining CT image sequence; and
determining the scaled CT images as the target image sequence.
8. The image processing method of any of claims 1 to 7, further comprising:
sending the target image sequence to a printing module.
9. An image processing apparatus comprising:
an image sequence acquisition unit configured to acquire a computed tomography (CT) image sequence;
a volumetric image determination unit configured to project the sequence of CT images to obtain volumetric images corresponding to the sequence of CT images;
a target range detection unit configured to perform target detection processing on the volume image to determine a target image range in the volume image; and
a target sequence determination unit configured to determine a target image sequence from the CT image sequence based on the target image range.
10. The image processing apparatus according to claim 9, wherein the target range detection unit is configured to:
inputting the volumetric image into a trained neural network model for a target detection task;
obtaining an output result of the neural network model; and
determining the target image range based on an output result of the neural network model.
11. The image processing apparatus according to claim 10, wherein the output result indicates a position and a label of at least one structural feature in the volumetric image,
determining the target image range based on the output result of the neural network model includes:
determining target content for the target image range;
determining a location of the target content in the volumetric image based on the at least one structural feature;
the target image range is determined based on a position of the target content in the volumetric image.
12. The image processing apparatus of claim 11, wherein determining target content for the target image range comprises:
determining the target content in response to a user input; or
acquiring pre-stored default target content.
13. The image processing apparatus as claimed in claim 11, further comprising:
an output unit configured to send the target image sequence to a printing module.
14. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program which, when executed by the at least one processor, implements the method according to any one of claims 1-8.
15. A non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
16. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
CN202110815707.8A 2021-07-19 2021-07-19 Image processing method, image processing device, electronic equipment and storage medium Pending CN113450345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110815707.8A CN113450345A (en) 2021-07-19 2021-07-19 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110815707.8A CN113450345A (en) 2021-07-19 2021-07-19 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113450345A true CN113450345A (en) 2021-09-28

Family

ID=77816752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110815707.8A Pending CN113450345A (en) 2021-07-19 2021-07-19 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113450345A (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101332093A (en) * 2007-06-20 2008-12-31 株式会社东芝 Medical-diagnosis assisting apparatus, medical-diagnosis assisting method, and radiodiagnosis apparatus
CN102525537A (en) * 2009-02-23 2012-07-04 株式会社东芝 Medical image processing apparatus and medical image processing method
CN106256326A (en) * 2015-06-19 2016-12-28 通用电气公司 The generation system and method for computed tomography sectioning image
CN106408610A (en) * 2015-04-16 2017-02-15 西门子公司 Method and system for machine learning based assessment of fractional flow reserve
CN109493328A (en) * 2018-08-31 2019-03-19 上海联影智能医疗科技有限公司 Medical image display method checks equipment and computer equipment
CN110009656A (en) * 2019-03-05 2019-07-12 腾讯科技(深圳)有限公司 Determination method, apparatus, storage medium and the electronic device of target object
CN110400286A (en) * 2019-06-05 2019-11-01 山东科技大学 The detection localization method of metal needle in a kind of X ray CT image
CN111275669A (en) * 2020-01-13 2020-06-12 西安交通大学 Priori information guided four-dimensional cone beam CT image reconstruction algorithm
CN111523547A (en) * 2020-04-24 2020-08-11 江苏盛海智能科技有限公司 3D semantic segmentation method and terminal
CN111539944A (en) * 2020-04-28 2020-08-14 安徽科大讯飞医疗信息技术有限公司 Lung focus statistical attribute acquisition method and device, electronic equipment and storage medium
CN111798451A (en) * 2020-06-16 2020-10-20 北京理工大学 3D guide wire tracking method and device based on blood vessel 3D/2D matching
CN111933250A (en) * 2020-07-17 2020-11-13 东软医疗系统股份有限公司 Method and device for printing medical image and computer equipment
CN112085840A (en) * 2020-09-17 2020-12-15 腾讯科技(深圳)有限公司 Semantic segmentation method, device, equipment and computer readable storage medium
CN112150600A (en) * 2020-09-24 2020-12-29 上海联影医疗科技股份有限公司 Volume reconstruction image generation method, device and system and storage medium
CN112233117A (en) * 2020-12-14 2021-01-15 浙江卡易智慧医疗科技有限公司 New coronary pneumonia CT detects discernment positioning system and computing equipment
CN112258423A (en) * 2020-11-16 2021-01-22 腾讯科技(深圳)有限公司 Deartifact method, device, equipment and storage medium based on deep learning
CN112308765A (en) * 2020-10-13 2021-02-02 杭州三坛医疗科技有限公司 Method and device for determining projection parameters
CN112365498A (en) * 2020-12-10 2021-02-12 南京大学 Automatic detection method for multi-scale polymorphic target in two-dimensional image sequence
CN113096210A (en) * 2021-04-15 2021-07-09 西门子数字医疗科技(上海)有限公司 Image reconstruction method and device, electronic equipment and storage medium
CN113112559A (en) * 2021-04-07 2021-07-13 中国科学院深圳先进技术研究院 Ultrasonic image segmentation method and device, terminal equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cui Yun et al., "Basic methods and applications of CT computer-aided detection and diagnosis of pulmonary nodules", Chinese Journal of Medical Imaging Technology, vol. 23, no. 3, 31 December 2007 (2007-12-31), pages 469-472 *

Similar Documents

Publication Publication Date Title
CN107480677B (en) Method and device for identifying interest region in three-dimensional CT image
US11900594B2 (en) Methods and systems for displaying a region of interest of a medical image
CN113066090B (en) Training method and device, application method and device of blood vessel segmentation model
CN110176010B (en) Image detection method, device, equipment and storage medium
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
US20120053446A1 (en) Voting in image processing
JPWO2012073769A1 (en) Image processing apparatus and image processing method
CN110811663A (en) Multi-region scanning method, device, equipment and storage medium
CN115409990B (en) Medical image segmentation method, device, equipment and storage medium
JPWO2007013300A1 (en) Abnormal shadow candidate detection method and abnormal shadow candidate detection apparatus
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
JP2019063504A (en) Image processing device and image processing method
CN112862955A (en) Method, apparatus, device, storage medium and program product for building three-dimensional model
US20230334698A1 (en) Methods and systems for positioning in an medical procedure
CN112132981A (en) Image processing method and device, electronic equipment and storage medium
CN113450345A (en) Image processing method, image processing device, electronic equipment and storage medium
US20100202674A1 (en) Voting in mammography processing
CN115631152A (en) Ultrasonic image interception method and device, electronic equipment and storage medium
CN113962958B (en) Sign detection method and device
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
JPWO2019220871A1 (en) Chest X-ray image abnormality display control method, abnormality display control program, abnormality display control device, and server device
CN115311188B (en) Image recognition method and device, electronic equipment and storage medium
CN112365959B (en) Method and device for modifying annotation of three-dimensional image
CN113990432A (en) Image report pushing method and device based on RPA and AI and computing equipment
WO2022103659A1 (en) System and method for detecting medical conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination