CN107292857A - Image processing apparatus and method and computer-readable recording medium - Google Patents
- Publication number
- CN107292857A CN107292857A CN201710223092.3A CN201710223092A CN107292857A CN 107292857 A CN107292857 A CN 107292857A CN 201710223092 A CN201710223092 A CN 201710223092A CN 107292857 A CN107292857 A CN 107292857A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel value
- pixel
- unit
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
- G06T7/0016—Biomedical image inspection using an image reference approach involving temporal comparison
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20216—Image averaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30056—Liver; Hepatic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
Abstract
The present invention provides an image processing apparatus, an image processing method, and a computer-readable recording medium. The image processing apparatus includes: an obtaining unit configured to obtain a first image of a subject and a second image of the subject; a difference unit configured to obtain a difference image after the first image and the second image are registered; and a changing unit configured to perform processing of changing pixel values in the difference image based on a likelihood calculated using pixel values in the first image and pixel values in the second image.
Description
Technical field
The present invention relates to an image processing apparatus, an image processing method, and a computer-readable recording medium.
Background technology
In the medical field, diagnosis is performed using images obtained by various types of medical imaging apparatuses (modalities), such as a computed tomography apparatus (hereinafter referred to as a CT (Computed Tomography) apparatus). In particular, to observe changes in the condition of a subject over time, images obtained at different timings are compared. One of the images to be compared will hereinafter be referred to as a reference image, and the other image will hereinafter be referred to as a deformation target image.
As image processing for visualizing changes over time between the reference image and the deformation target image, a technique of obtaining an image representing the difference between the images (hereinafter referred to as a difference image), that is, a subtraction technique, is known. In the difference image, portions that have changed between the two images are depicted, and unchanged portions are expressed as a region having a uniform density value.
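The subtraction technique can be sketched in a few lines of Python (a minimal illustration under the assumption that the two images are already registered; plain nested lists stand in for image arrays, and the function name is chosen for this sketch, not taken from the patent):

```python
def difference_image(reference, deformed_target):
    """Pixel-wise subtraction of two registered images (2-D nested lists).

    Unchanged regions become 0 (a uniform density value); changed regions
    keep the signed difference.
    """
    return [[r - d for r, d in zip(ref_row, def_row)]
            for ref_row, def_row in zip(reference, deformed_target)]

# Example: one pixel changed between the two acquisitions.
i1 = [[100, 100], [100, 100]]
i2_warped = [[100, 100], [100, 60]]
ts = difference_image(i1, i2_warped)
# ts == [[0, 0], [0, 40]]: only the changed pixel is depicted
```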
In the difference image, due to differences in imaging conditions between the reference image and the deformation target image or changes in the condition of the subject, regions other than the region of interest to the doctor may be depicted, degrading the visibility of the region of interest. Japanese Patent Laid-Open No. 2014-94036 discloses a technique of adjusting the weights of pixel values on the difference image based on a noise region determined from one of the reference image and the deformation target image.
However, in the technique described in Japanese Patent Laid-Open No. 2014-94036, the noise region is determined from one of the reference image and the deformation target image independently of the region of interest to the doctor, and the weights of the pixel values on the difference image are adjusted accordingly. Therefore, changes in regions of the difference image that the doctor is not interested in are also depicted, degrading the visibility of difference values in the region the doctor is interested in. Furthermore, since only one of the reference image and the deformation target image is considered when adjusting the weights of the pixel values, an operation of selecting, from these images, the image on which the weight adjustment is to be based is required.
The present invention has been made in consideration of the above problems, and provides an image processing technique capable of adjusting the weights of pixel values on a difference image using pixel values in a plurality of images. The present invention also provides functions and effects that can be realized by the arrangements according to the embodiments described later but cannot be obtained by the related art.
Summary of the invention
According to an aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain a first image of a subject and a second image of the subject; a difference unit configured to obtain a difference image after the first image and the second image are registered; and a changing unit configured to perform processing of changing pixel values in the difference image based on a likelihood calculated using pixel values in the first image and pixel values in the second image.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain a first image of a subject and a second image of the subject; a difference unit configured to obtain a difference image after the first image and the second image are registered; and a changing unit configured to perform processing of changing pixel values in the difference image based on a comparison between the pixel value of each pixel in the first image and the pixel value of each pixel in the second image.
According to still another aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain a first image of a subject and a second image of the subject; a difference unit configured to obtain a difference image after the first image and the second image are registered; a display processing unit configured to display the first image and the second image on a display unit; and a changing unit configured to perform processing of changing pixel values in the difference image based on a display condition of the display unit.
According to another aspect of the present invention, there is provided an image processing method comprising: obtaining a first image of a subject and a second image of the subject; obtaining a difference image after the first image and the second image are registered; and performing processing of changing pixel values in the difference image based on a likelihood calculated using pixel values in the first image and pixel values in the second image.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain an image of a subject; and a feature amount obtaining unit configured to, for a pixel of interest in the image, set a plurality of predetermined paths passing through the pixel of interest, calculate, for each of the paths, an evaluation value representing the continuity of pixel values on the path based on the similarity between neighboring pixel values on the path, and obtain a feature amount of the pixel of interest based on the evaluation values obtained for the respective paths.
According to the present invention, the weights of the pixel values on the difference image can be adjusted using the pixel values in a plurality of images. In addition, according to the present invention, when adjusting the weights of the pixel values, no operation of selecting a reference image to be used for the weight adjustment is necessary, reducing the burden on the doctor as a user.
Other features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the accompanying drawings).
Brief description of the drawings
Fig. 1 is a block diagram showing the functional arrangement of an image processing apparatus according to an embodiment;
Fig. 2 is a flowchart illustrating an example of the processing procedure of the image processing apparatus;
Fig. 3 is a view showing a display example of images obtained by an image obtaining unit;
Fig. 4 is a view showing an example of changing each pixel value in a difference image based on the likelihood of a pixel (the likelihood that the pixel is included in a region of interest to the user);
Fig. 5 is a view showing an example of a difference image when a position shift occurs in the result of detecting corresponding positions between images;
Fig. 6 is a view showing a display example of the images obtained by the image obtaining unit and the difference image after each pixel value has been changed;
Fig. 7 is a flowchart illustrating an example of the processing procedure of the image processing apparatus;
Fig. 8 is a flowchart illustrating a processing procedure for extracting regions outside the region of interest;
Fig. 9 is a view for explaining an example of processing of enhancing a massive region;
Fig. 10 is a view for explaining an example of the processing of enhancing a massive region; and
Figs. 11A to 11D are views for explaining an example of the processing of enhancing a massive region.
Embodiment
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings. Note that the constituent elements described in the embodiments are merely examples. The technical scope of the present invention is determined by the scope of the claims and is not limited by the following individual embodiments.
<First embodiment>
An image processing apparatus according to the first embodiment obtains a difference image between a plurality of images (a reference image and a deformation target image), and performs image processing of appropriately adjusting the weights of the pixel values on the difference image. This image processing apparatus is characterized in that, in the course of obtaining the difference image and performing the image processing, an index (likelihood) of each pixel (the likelihood that the pixel is included in a region of interest to a user such as a doctor) is obtained from the pixel value in the reference image and the pixel value in the deformation target image that has undergone deformable registration with the reference image, and the obtained index (likelihood) is used as the weight of the difference value.
With this technique, for example, the weights of the pixels included in the region of interest on the difference image can be set larger than the weights of the pixels in regions the user is not interested in, improving the visibility of the difference values in the region of interest to the user. Alternatively, the weights of the pixels included in regions the user is not interested in can be set smaller than the weights of the pixels in the region of interest, relatively improving the visibility of the difference values in the region of interest.
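The relative weighting described here can be sketched as follows (a minimal illustration with made-up difference and likelihood values; the patent uses the likelihood as the weight of the difference value but does not prescribe this exact multiplicative form):

```python
def weight_by_likelihood(diff, likelihood):
    """Attenuate each difference value by a per-pixel likelihood in [0, 1]
    that the pixel belongs to the region of interest."""
    return [[d * l for d, l in zip(d_row, l_row)]
            for d_row, l_row in zip(diff, likelihood)]

diff = [[40, -30], [0, 25]]
# Assumed per-pixel likelihoods: high inside the region of interest,
# low outside it.
lik = [[1.0, 0.25], [0.5, 1.0]]
weighted = weight_by_likelihood(diff, lik)
# weighted == [[40.0, -7.5], [0.0, 25.0]]: differences outside the
# region of interest are suppressed, improving relative visibility
```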
The arrangement and processing of the image processing apparatus according to this embodiment will be described below with reference to Fig. 1. Fig. 1 is a block diagram showing an example of the arrangement of an image processing system (medical image processing system) including the image processing apparatus according to this embodiment. The image processing system includes, as its functional components, an image processing apparatus 10, a network 21, and a database 22. The image processing apparatus 10 is communicably connected to the database 22 via the network 21. The network 21 includes, for example, a LAN (Local Area Network) and a WAN (Wide Area Network).
The database 22 holds and manages medical images and information associated with the medical images. The image processing apparatus 10 can obtain, via the network 21, the medical images held in the database 22. The image processing apparatus 10 includes a communication IF (interface) 31 (communication unit), a ROM (Read-Only Memory) 32, a RAM (Random Access Memory) 33, a storage unit 34, an operation unit 35, a display unit 36, and a control unit 37.
The communication IF 31 (communication unit) is formed by a LAN card or the like, and implements communication between an external device (for example, the database 22) and the image processing apparatus 10. The ROM 32 is formed by a nonvolatile memory or the like, and stores various programs. The RAM 33 is formed by a volatile memory or the like, and temporarily stores various kinds of information as data. The storage unit 34 is formed by an HDD (Hard Disk Drive) or the like, and stores various kinds of information as data. The operation unit 35 is formed by a keyboard, a mouse, a touch panel, or the like, and inputs instructions from a user (for example, a doctor) to the various devices.
The display unit 36 is formed by a display or the like, and displays various kinds of information to the user. The control unit 37 is formed by a CPU (Central Processing Unit) or the like, and comprehensively controls the processing in the image processing apparatus 10. The control unit 37 includes, as its functional components, an image obtaining unit 50, an image deformation unit 51, a difference processing unit 52, a pixel value changing unit 53, and a display processing unit 54.
In the image processing apparatus according to this embodiment, the image obtaining unit 50 obtains a first image of a subject and a second image of the subject. The first image and the second image are images obtained at different timings. The image obtaining unit 50 obtains the first image I1 (reference image) and the second image I2 (deformation target image) from the database 22. These images are images (medical images) of the subject obtained by various modalities. This embodiment will describe an example in which the medical images are CT images obtained at different dates and times. However, the medical images may be other kinds of images; this embodiment is applicable regardless of the kind of image.
The image deformation unit 51 registers the first image and the second image by deforming the second image, based on the correspondence between the position of each pixel in the first image and the position of each pixel in the second image, so that each pixel in the second image matches the corresponding pixel in the first image. That is, the image deformation unit 51 obtains the correspondence between the position of each pixel in the first image I1 (reference image) and the position of each pixel in the second image I2 (deformation target image), and deforms the second image I2 based on the correspondence so that each pixel in the second image I2 matches the corresponding pixel in the first image I1, thereby registering the first image and the second image. The result of the deformation processing of the second image I2 will hereinafter be referred to as the second image I2'. To calculate the correspondence, an existing linear deformation algorithm, an existing nonlinear deformation algorithm, or a combination thereof can be used. By performing deformable registration between the images with the image deformation unit 51, a feature point representing a characteristic portion included in the first image I1 can be matched with the feature point representing that characteristic portion included in the second image I2 (second image I2').
The difference processing unit 52 obtains a difference image after the first image and the second image are registered. The difference processing unit 52 obtains the pixel values at corresponding positions in the registered images, and obtains a difference image TS by performing difference processing on the obtained pixel values. That is, the difference processing unit 52 obtains the pixel values of the pixels at the same position in the first image I1 and the second image I2', performs difference processing between the two pixel values, and outputs the calculation result to the pixel value changing unit 53 as the difference image TS. The following description assumes difference processing in which the pixel value in the second image I2' is subtracted from the pixel value in the first image I1; conversely, difference processing in which the pixel value in the first image I1 is subtracted from the pixel value in the second image I2' may be used.
Based on the likelihood score using the calculated for pixel values in the pixel value and the second image in the first image, pixel value changes
Unit 53 is changed the processing of the pixel value in difference image.Pixel value changes unit 53 and uses the area with being paid close attention on user
The distributed intelligence of pixel value in the associated learning data of the information (pixel value information) of the scope of pixel value in domain (is represented
The distributed intelligence of the distribution of pixel value in region-of-interest), based on being calculated seemingly from the first image I1 and the second deformation pattern I2'
Right degree, come the processing of the pixel value of each pixel being changed in difference image TS.
The information (pixel value information) of the scope for the pixel value paid close attention on user, can be by user via operating unit 35
Input, or the first image I1 and/or the second image I2 that are shown from display unit 36 are automatically determined.For example, pixel value
Change unit 53 to obtain in each pixel in difference image on closing based on the information inputted via operating unit 35
The pixel value information of the scope of pixel value in note region.In addition, pixel value change unit 53 can be based on display unit 36
The pixel value letter on the scope of the pixel value in region-of-interest is obtained in display condition, each pixel in difference image
Breath.Pixel value change unit 53 can the display condition based at least one of the first image and the second image, to be closed
The pixel value information of the scope of pixel value in region-of-interest.
The pixel value changing unit 53 sets the distribution information of the distribution of pixel values in the region of interest based on the obtained pixel value information. The storage unit 34 stores the distribution information of pixel values in the learning data (distribution information representing the distribution of pixel values in the region of interest), and the pixel value changing unit 53 sets the distribution information based on the distribution information obtained from the storage unit 34 in accordance with the pixel value information. The distribution information of pixel values in the learning data corresponds to each region of the subject (for example, lung, bone, or liver), and is stored in the storage unit 34 as information representing a different distribution of pixel values for each region. The pixel value information of the region of interest is associated with the distribution information of pixel values in the learning data. For example, if the pixel value changing unit 53 obtains pixel value information about a bone region as the region of interest, it obtains from the storage unit 34 the distribution information of pixel values in the learning data corresponding to the bone region, and sets that distribution information as a likelihood calculation condition (θ). The pixel value changing unit 53 then performs the processing of changing the pixel values in the difference image based on the likelihood calculated using the pixel values in the first image and the pixel values in the second image.
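One way to realize this likelihood calculation can be sketched as follows. The Gaussian model of the region-of-interest pixel value distribution, the max combination of the two per-image likelihoods, and the numeric values are all illustrative assumptions; the patent only specifies that the likelihood is calculated from the pixel values of both images under the condition θ set from the learning data:

```python
import math

def gaussian_likelihood(v, mean, std):
    """Likelihood that pixel value v belongs to the region of interest,
    modeled as a Gaussian scaled so that the mode has likelihood 1."""
    return math.exp(-((v - mean) ** 2) / (2.0 * std ** 2))

def joint_likelihood(v1, v2, mean, std):
    """Combine the likelihoods from the first and second images; with max,
    a pixel keeps its weight if either image places it in the region."""
    return max(gaussian_likelihood(v1, mean, std),
               gaussian_likelihood(v2, mean, std))

# Hypothetical distribution info (θ) for a bone-like region, in H.U.
theta = {"mean": 400.0, "std": 150.0}
# A pixel that looks like bone in the first image keeps full weight
# even if its value has changed in the second image.
w = joint_likelihood(400.0, -100.0, theta["mean"], theta["std"])
# w == 1.0 (the first image's value sits at the mode)
```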
Note, the distributed intelligence of the pixel value in learning data (pays close attention to the distribution of the distribution of pixel value in region
Information) it is the information resulted in when various mode obtain image.The distributed intelligence of pixel value in learning data (is represented
The distributed intelligence of the distribution of pixel value in region-of-interest) number can be stored in together with the image (medical image) of subject
According in storehouse 22.In this case, point of the pixel value during image acquiring unit 50 can obtain learning data from database 22
Cloth information (distributed intelligence for paying close attention to the distribution of pixel value in region), and store it in memory cell 34.
The result of the processing for the pixel value being changed in difference image TS is exported next, pixel value changes unit 53
To display processing unit 54.The difference image for performing pixel value change processing based on likelihood score will hereinafter be referred to as difference
Image TS'.
The display processing unit 54 functions as a display control unit that controls the display of the display unit 36. The display processing unit 54 displays the difference image TS' calculated by the pixel value changing unit 53 in the image display region of the display unit 36. The display processing unit 54 can also display, in the image display region of the display unit 36, the first image I1 and the second image I2 obtained by the image obtaining unit 50, the second image I2' deformed by the image deformation unit 51, and the difference image TS calculated by the difference processing unit 52. For example, the display processing unit 54 can control the display of the display unit 36 so as to display the difference images TS' and TS and the first image I1 and the second image I2 (I2') side by side, or to superimpose and display some of these images.
Each component of the image processing apparatus 10 operates in accordance with a computer program. For example, the control unit 37 (CPU) loads the computer programs stored in the ROM 32 or the storage unit 34 into the RAM 33 serving as a work area and executes them, thereby implementing the function of each component. Note that some or all of the functions of the components of the image processing apparatus 10 may be implemented using a dedicated circuit. Alternatively, some of the functions of the components of the control unit 37 may be implemented using a cloud computer.
For example, an arithmetic device existing in a place different from that of the image processing apparatus 10 may be communicably connected to the image processing apparatus 10 via the network 21. The functions of the components of the image processing apparatus 10 or the control unit 37 can then be implemented by transmitting/receiving data between the image processing apparatus 10 and the arithmetic device.
An example of the processing of the image processing apparatus 10 shown in Fig. 1 will be described next with reference to Figs. 2 to 6. Fig. 2 is a flowchart illustrating an example of the processing procedure of the image processing apparatus 10. This embodiment will exemplify medical images each including a bone. However, this embodiment is applicable to medical images each including another region of interest (such as a lung, brain, or liver).
(Step S101: Acquisition/display of images)
In step S101, if the user instructs, via the operation unit 35, acquisition of a reference image (first image I1) and a deformation target image (second image I2), the image obtaining unit 50 obtains the plurality of images designated by the user (the reference image (first image I1) and the deformation target image (second image I2)) from the database 22, and stores them in the RAM 33. In addition, as shown in Fig. 3, the display processing unit 54 displays the plurality of images obtained from the database 22 (the reference image (first image I1) and the deformation target image (second image I2)) in the image display region 300 of the display unit 36.
(Step S102: Deformable registration, Step S103: Image deformation)
In step S102, the image deformation unit 51 reads the images from the RAM 33, and calculates the correspondence between each pixel in the first image I1 and each pixel in the second image I2. More specifically, the image deformation unit 51 calculates deformation vectors representing the correspondence from each pixel in the first image I1 to the corresponding pixel in the second image I2. A deformation vector is a vector representing the virtual movement amount (displacement) and movement direction (deformation direction) of each pixel in the deformation target image (second image I2) corresponding to each pixel in the reference image (first image I1). For example, in the case of a three-dimensional image, if the coordinates (x1, y1, z1) of a pixel in the reference image (first image I1) move to the coordinates (x2, y2, z2) of the corresponding pixel in the deformation target image (second image I2), the deformation vector is represented by (x2 - x1, y2 - y1, z2 - z1).
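For a single corresponding pixel pair, this deformation vector is just the coordinate-wise displacement, as the formula states (a trivial sketch for the 3-D case; the function name is chosen for illustration):

```python
def deformation_vector(p_ref, p_target):
    """Displacement from a pixel (x1, y1, z1) in the reference image to its
    corresponding pixel (x2, y2, z2) in the deformation target image."""
    (x1, y1, z1), (x2, y2, z2) = p_ref, p_target
    return (x2 - x1, y2 - y1, z2 - z1)

v = deformation_vector((10, 20, 5), (12, 19, 5))
# v == (2, -1, 0): the pixel moved 2 voxels in x and -1 voxel in y
```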
Note that the calculation of the deformation vectors between corresponding positions in the images can be performed by a linear image deformable registration method such as an affine transformation, a nonlinear image deformable registration method such as LDDMM (Large Deformation Diffeomorphic Metric Mapping), or a combination thereof. The plurality of images obtained by the image obtaining unit 50 may be images other than the original images obtained by the various modalities. For example, output images of various image enhancement filters such as an edge enhancement filter, region images obtained by extracting a region of interest, and combinations of these images can be used.
In step S103, the image deformation unit 51 generates, using the deformation vectors obtained in step S102, the second image I2' from the second image I2 so that each pixel in the first image I1 matches the corresponding pixel in the deformation target image (second image I2), and stores the generated image in the RAM 33. The display processing unit 54 displays the second image I2' generated by the image deformation unit 51 in the image display region of the display unit 36.
Note that if the positions (pixels) of the subject included in the first image I1 and the second image I2 initially correspond to each other, the processing in steps S102 and S103 can be skipped.
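Step S103 can be sketched as follows: for each pixel of the reference grid, look up the corresponding position in the second image given by the per-pixel deformation vector and copy its value, producing I2' aligned with I1. This is a 2-D, nearest-neighbour sketch with bounds clamping; an actual implementation would interpolate fractional displacements:

```python
def warp(image, vectors):
    """Generate I2' by sampling `image` (I2) at positions shifted by the
    per-pixel deformation vectors (dy, dx), nearest-neighbour."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            dy, dx = vectors[y][x]
            sy = min(max(y + dy, 0), h - 1)  # clamp to the image bounds
            sx = min(max(x + dx, 0), w - 1)
            row.append(image[sy][sx])
        out.append(row)
    return out

i2 = [[1, 2], [3, 4]]
# Illustrative deformation field: each entry is (dy, dx) for that pixel.
vecs = [[(0, 1), (0, 0)], [(0, 0), (0, -1)]]
i2_prime = warp(i2, vecs)
# i2_prime == [[2, 2], [3, 3]]
```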
(Step S104: Difference processing between pixel values (generation of difference image))
In step S104, the difference processing unit 52 reads the first image I1 and the second image I2' from the RAM 33, generates the difference image TS by performing difference processing between the pixel values of the pixels at corresponding positions in the first image I1 and the second image I2', and stores the generated image in the RAM 33.
(Step S105: Obtaining pixel value information of the region of interest to the user)
In step S105, the pixel value changing unit 53 obtains information about the range of pixel values the user is interested in (pixel value information). The pixel value changing unit 53 obtains this information based on the display condition of the images displayed on the display unit 36. More specifically, the pixel value changing unit 53 estimates the pixel value information of the region of interest to the user based on the display condition of the images (the first image I1 and the second image I2 or I2') displayed on the display unit 36 in step S101 or S103, for example, setting values such as the window level (WL) and window width (WW) used for density value conversion.
When carrying out the diagnostic imaging of CT images as the doctor of user, he/her passes through the subject according to doctor's concern
Position change window position (window level, WL) and window width (window width, WW), to change and doctor concern is detected
The setting (concentration value of conversion display image) of the display condition of the corresponding image in the position of body.If more specifically, for example,
Doctor carries out the diagnostic imaging of bone, then window position (WL) is set into value between 350 to 500 [H.U.], and by window width (WW)
It is set to the value between 1500 to 2500 [H.U.].By the setting for the display condition for changing image according to the position of subject,
The display image of the concentration distribution at position (for example, bone) of the display with the subject for making it easy to watch doctor's concern.
Using this, the pixel value change unit 53 can estimate the range of pixel values and/or the part (organ) of the object of interest to the doctor as the user, based on the setting values of the image display condition (for example, window level (WL) and window width (WW)) of the display unit 36 for the images undergoing the difference processing (the first image I1 and the second image I2 or I2'). Note that if the settings of the window level (WL) and window width (WW) differ between the first image I1 and the second image I2', it is desirable to use the setting values of the first image I1 serving as the reference image for the deformable registration between the images, but the setting values of the second image I2' may be used instead. The pixel value change unit 53 can obtain the pixel value information about the range of pixel values in the region of interest based on the display condition of at least one of the first image and the second image.
The pixel value information of interest to the user may also be input by the user via the operation unit 35. For each pixel in the difference image, the pixel value change unit 53 can obtain the pixel value information about the range of pixel values in the region of interest based on the information input via the operation unit 35. Alternatively, the pixel value change unit 53 can obtain predetermined pixel value information stored in the ROM 32 or RAM 33.
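As a minimal sketch of the idea behind step S105, the display window settings can be mapped to the displayed pixel value range as follows. The function name and the example values are illustrative assumptions, not the apparatus's actual estimation logic:

```python
def window_to_range(wl, ww):
    """Map window level/width display settings to the displayed pixel value range."""
    return wl - ww / 2.0, wl + ww / 2.0

# Example: bone display settings (WL around 400, WW around 2000 [H.U.])
lo, hi = window_to_range(400, 2000)
```

A pixel value range estimated this way can then serve as the "pixel value information of the region of interest" used in the subsequent steps.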
(Step S106: Change the pixel values (image subtraction result) in the difference image)
In step S106, the pixel value change unit 53 uses, as a likelihood calculation condition (θ), the distribution information of the pixel values in the learning data associated with the pixel value information of interest to the user (the distribution information of the distribution of pixel values in the region of interest), and changes the pixel value of each pixel in the difference image TS based on likelihoods calculated from the first image I1 and the second image I2', thereby generating a difference image TS'.
The likelihood calculated from the first image I1 and the second image I2' is information representing the possibility that the pixel at the position (the same coordinates) in the first image I1 and/or the second image I2 (I2') corresponding to a position in the difference image TS is included in the region of interest to the user.
To calculate the likelihood of a pixel value, the distribution of pixel values is assumed to be a normal distribution, and the pixel value change unit 53 calculates the likelihood of a pixel value using the distribution information of the distribution of pixel values in the region of interest. The pixel value change unit 53 obtains in advance, from known learning data, the parameters (mean and variance) of the normal distribution as the distribution information of the distribution of pixel values in the region of interest, and calculates the likelihood of each pixel in the image to be processed by using the obtained distribution information (the parameters of the normal distribution) as the likelihood calculation condition (θ). Therefore, before performing the likelihood calculation, the pixel value change unit 53 obtains, from the learning data, the parameters of the normal distribution as the distribution information of the distribution of pixel values in the region of interest.
For example, in the case of CT images, the pixel value change unit 53 obtains, for each part (organ) existing in each density range (for example, a lung region containing a large amount of air, an abdominal organ such as the liver mainly formed from soft tissue, or a bone), the parameters of the normal distribution as the distribution information of the distribution of pixel values in the region of interest.
In the likelihood calculation according to this embodiment, the pixel value change unit 53 sets, as the likelihood calculation condition (θ), the distribution information of the pixel values in the learning data associated with the pixel value information of the region of interest to the user (the distribution information of the distribution of pixel values in the region of interest), and performs processing of changing the pixel values in the difference image based on likelihoods calculated using the pixel values in the first image and the pixel values in the second image. More specifically, the pixel value change unit 53 uses the likelihood as a weight for the pixel value (difference value) in the difference image. The pixel value change unit 53 changes the pixel values in the difference image using weight coefficients obtained based on the likelihoods. The pixel value change unit 53 makes the difference values in the region including the pixel values of interest to the user relatively larger than the difference values in the remaining region. This can enhance only the change the user wants to capture, thereby improving the visibility of the difference image.
Fig. 4 shows an image display example in which the region of interest to the user is a bone. Referring to Fig. 4, a difference image TS 403 is an image obtained based on the difference between a reference image 401 (first image I1) and a deformation target image 402 (second image I2) shown in Fig. 4, and a difference image TS' 404 is a difference image obtained by performing the pixel value change processing. In the difference images TS and TS' of Fig. 4, positive difference values, negative difference values, and a difference value of 0 are represented by white, black, and gray, respectively. The broken lines represent the contours of the regions (organ regions). Note, however, that the regions (organ regions) are not necessarily rendered on the difference image. The example of Fig. 4 shows a case in which the distribution information (parameters) representing the distribution of pixel values in the bone region is obtained from the learning data of the bone region as the region of interest to the user. In this case, based on the likelihoods calculated using the distribution information (parameters), the weight of a difference value between pixels that are likely to be included in the region of interest to the user (the bone region) is set to be larger than the weight of a difference value between pixels in the remaining region, and the pixel values are changed. As described above, by enhancing the differences between pixels that are likely to be included in the region of interest to the user (the bone region), it is possible to suppress the rendering of temporal changes in regions (for example, abdominal organs) other than the region of interest to the user.
In this embodiment, based on the likelihoods obtained from the first image I1 and the second image I2', the pixel value change unit 53 defines a multiplicative weight coefficient W(p) to be used for changing the pixel value of a pixel p in the difference image TS, as given by formula (1) below. The pixel value change unit 53 performs processing of changing the pixel values in the difference image using the larger of the likelihood Pr(I1(p)|θ) calculated using the pixel value I1(p) in the first image and the distribution information (θ) of the distribution of pixel values in the region of interest, and the likelihood Pr(I2'(p)|θ) calculated using the pixel value I2'(p) in the second image and the distribution information (θ) of the distribution of pixel values in the region of interest.
W(p) = max(Pr(I1(p)|θ), Pr(I2'(p)|θ)) ... (1)
Here, the function max(A, B) extracts the maximum of the variables A and B, and Pr(C|D) represents the probability (likelihood) of obtaining condition C given condition D. Furthermore, I1(p) represents the pixel value of the pixel p in the first image I1, and I2'(p) represents the pixel value of the pixel p in the second image I2'. If the second image I2 has not been deformed, I2'(p) represents the pixel value in the second image I2.
Furthermore, θ represents the distribution information (parameters) of the distribution of pixel values in the region of interest obtained from the learning data. Then, for each of all the pixels in the difference image TS, the pixel value change unit 53 changes the pixel value by multiplying the pixel value TS(p) in the difference image TS by the weight coefficient W(p), and stores the difference image TS' obtained as the result of performing the pixel value change processing in the RAM 33.
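The per-pixel computation of step S106 can be sketched as follows. This is an illustrative sketch only: the Gaussian parameters `mean` and `var` stand in for the distribution information θ learned from training data, and the array names are assumptions:

```python
import numpy as np

def gaussian_likelihood(x, mean, var):
    """Likelihood of pixel values under the learned normal distribution (theta)."""
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def weighted_difference(i1, i2d, mean, var):
    """Formula (1): W(p) = max(Pr(I1(p)|theta), Pr(I2'(p)|theta)); TS'(p) = W(p) * TS(p)."""
    ts = i1 - i2d  # difference image TS
    w = np.maximum(gaussian_likelihood(i1, mean, var),
                   gaussian_likelihood(i2d, mean, var))
    return w * ts  # difference image TS'
```

With, say, bone-like parameters (mean near a bone CT value, large variance), difference values at bone-like pixels keep a large magnitude while soft-tissue difference values are attenuated, which mirrors the behavior described for Fig. 4.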
The reason why both of the images (the first image I1 and the second image I2') serving as the calculation sources of the difference image TS are used in formula (1) for calculating the weight coefficient W(p) is to prevent the difference values in the region of interest on the difference image from being erroneously suppressed.
If, for example, the weight coefficient W(p) is calculated using only one of the images, and there exists a pixel that has a high likelihood in one image but a low likelihood in the other image, the weight may become undesirably small. More specifically, when temporal differences between images captured at different times are targeted, a pixel that has a density value representing a normal organ structure in one image may have a pixel value outside the range of normal pixel values in the other image due to a lesion. In this case, therefore, if the weight is calculated using the image including the lesion, the weight at the corresponding pixel becomes small.
Conversely, if the weight coefficient W(p) is calculated using both of the images serving as the calculation sources of the difference image TS, a large weight can be given to the change (difference value) of a pixel in the region of interest to the user as long as the pixel value falls within the likelihood range in at least one of the images. Furthermore, if both images are used, the step of selecting one of the first image I1 and the second image I2' becomes unnecessary when calculating the weight coefficient W(p).
In the example shown in Fig. 4, in the difference image TS obtained as the simple difference between the first image I1 and the second image I2', not only the region of interest to the user (the bone region) but also another region not of interest to the user (the liver region) remains as difference values. The CT values in the liver region generally fall within the range of 60 to 70 [H.U.], while the CT values in the bone region are equal to or larger than 150 [H.U.]. Therefore, with the distribution information (parameters) θ representing the distribution of pixel values of the bone region learned from the learning data, Pr(I1(p)|θ) and Pr(I2'(p)|θ) in the bone region are higher than Pr(I1(p)|θ) and Pr(I2'(p)|θ) in the liver region. Consequently, in the result TS'(p) of multiplying the pixel value TS(p) in the difference image TS by the weight coefficient W(p), the pixel values TS'(p) in the other region (the liver region) have difference values closer to 0 than the pixel values TS'(p) in the region of interest to the user (the bone region). As a result, as shown in the difference image TS' of Fig. 4, the difference values in the other region (the liver region) are suppressed, and the difference values in the region of interest to the user (the bone region) are relatively enhanced, making it easier to visually perceive the change of the bone as the region of interest.
Note that calculating the weight coefficient W(p) using both of the images serving as the calculation sources of the difference image has the following advantage. That is, errors in the calculation of corresponding positions between the images performed in steps S102 and S103 can be kept clearly visualized as a feature of the difference image TS. This feature will be described in detail with reference to Fig. 5. Note that in the difference images shown in Fig. 5, as in Fig. 4, positive difference values, negative difference values, and a difference value of 0 are represented by white, black, and gray, respectively. Assume that the region of interest to the user is the bone region in Fig. 5.
Referring to Fig. 5, an image 501 is the reference image (first image I1), an image 502 is the deformation target image (second image I2), and an image 503 is the deformed image (second image I2'). An image 504 shows the superimposition of the image 501 (the reference image (first image I1)) and the image 502 (the deformation target image (second image I2)). An image 505 shows the superimposition of the image 501 (the reference image (first image I1)) and the deformed image (second image I2').
An image 506 shows the difference image TS' obtained as a result of calculating the weight coefficients using both of the images (the first image I1 and the second image I2') serving as the calculation sources of the difference image, and changing each pixel value in the difference image by multiplying it by the weight coefficient.
As shown in the image 506 (difference image) of Fig. 5, at a place where a small positional shift occurs in the detection of corresponding positions in the images, positive and negative values are rendered close to each other on the difference image. When the user views the rendering result of the difference values on the image, for example a place where positive and negative pixel values are locally inverted, he/she can discriminate a place where a pixel value has actually changed from a place where a positional shift has erroneously occurred in the corresponding-position detection result between the images.
An image 507 shows the difference image TS' obtained as a result of calculating the weight coefficients using only one of the images (the first image I1) as the calculation source of the difference image, and changing each pixel value in the difference image by multiplying it by the weight coefficient. An image 508 shows the difference image TS' obtained as a result of calculating the weight coefficients using only the other image (the second image I2') as the calculation source of the difference image, and changing each pixel value in the difference image by multiplying it by the weight coefficient.
As described above, if a positional shift occurs in the corresponding-position detection result between the images, it is considered that, among the pixels at the same coordinate point in the first image I1 and the second image I2', the pixel in one image has a high bone likelihood while the pixel in the other image has a low bone likelihood. Therefore, if the weight of a difference value is determined using only the likelihood calculated from one of the images, the weight of one of the difference values becomes undesirably small, as shown in the images 507 and 508. As a result, the feature that positive and negative values exist close to each other on the difference image is lost, making it difficult to discriminate a place where a pixel value has actually changed from a place where the corresponding-position detection result is erroneous.
On the other hand, if the weights are calculated using both of the images used for calculating the difference values (the image 506 of Fig. 5), the feature that positive and negative values exist close to each other is retained. Therefore, calculating the weight coefficients using both images (the first image I1 and the second image I2') serving as the calculation sources of the difference image TS has the effect of maintaining the feature of the difference image TS that errors in the corresponding-position detection result are easy to identify, as shown in the image 506 of Fig. 5.
(Step S107: Output and display of the calculation result)
In step S107, the pixel value change unit 53 outputs the difference image TS' to the display processing unit 54. As shown in Fig. 6, the display processing unit 54 displays the difference image TS' in the image display area 300 of the display unit 36. The display processing unit 54 controls the display of the display unit 36 so as to display the difference image TS', the reference image (first image I1), and the deformation target image (second image I2) side by side. The display processing unit 54 can further control the display of the display unit 36 to display the second image (I2').
In the image processing technique according to this embodiment, the weight of each pixel value on the difference image can be adjusted based on the distribution information of the pixel values in the learning data associated with the information (pixel value information) about the range of pixel values of interest to the user (the distribution information of the distribution of pixel values in the region of interest) and on both the reference image and the deformation target image. This processing can enhance the rendering of the change at the part of interest and suppress the rendering of the change at parts not of interest, thereby improving the visibility of the change at the part of interest to the user.
(Modified Example 1)
Although in step S106 the pixel value change unit 53 calculates the weight coefficient at a pixel p using the pixel values of the pixel p in the first image I1 and the second image I2', the calculation may instead use the pixel values of the neighboring pixels of the pixel p (for example, the 6 or 26 neighboring pixels in the case of a three-dimensional image). For example, the pixel value change unit 53 can perform processing of changing the pixel values in the difference image based on likelihoods calculated using the pixel values of the pixel and its neighboring pixels in the first image and the pixel values of the pixel and its neighboring pixels in the second image. In the likelihood calculation, the formula (formula (1)) is defined using the pixel value assigned to the pixel. The output value of an image enhancement filter such as a smoothing filter or an edge enhancement filter may be used as this value.
Furthermore, if the pixels in the region of interest can be extracted by a known region extraction method such as threshold processing or graph cut segmentation, a probability of 1 is assigned to the extracted region and a probability of 0 is assigned to the region outside the extracted region, and these probabilities are used as the weight coefficients W.
Note that the pixel value change unit 53 can normalize the weight coefficients W into a predetermined range (for example, [0, 1]) based on the minimum and maximum values of the calculated weight coefficients W(p).
(Modified Example 2: Modification 1 of the definition of the weight)
In the first embodiment and Modified Example 1, the weight coefficient W(p) for changing the pixel value TS(p) in the difference image TS is determined by obtaining likelihoods based on formula (1) using the pixel values in the two images (the first image I1 and the second image I2'). As another method, as given by formula (2) below, a likelihood may be obtained for a distribution defined over the pair of pixel values in the two images. In Modified Example 2, the pixel value change unit 53 can perform processing of changing the pixel values in the difference image based on the likelihood calculated using the distribution information (θ) of the distribution of pixel values in the region of interest and the pair of pixel values obtained from the first image and the second image.
W(p) = Pr(I1(p), I2'(p)|θ) ... (2)
While the likelihood in formula (1) is based on a univariate normal distribution, the likelihood in formula (2) is based on a bivariate normal distribution. As in the case of the univariate normal distribution, in a multivariate (two or more variables) normal distribution as well, the pixel value change unit 53 can calculate the likelihood by obtaining the parameters (the mean values of the variables and the variance-covariance matrix) from the learning data as the distribution information (θ) representing the distribution of pixel values. The distribution information (θ) representing the distribution of pixel values in formula (2) includes the mean of the pixel values in the region of interest in the first image I1, the mean of the pixel values in the region of interest in the second image I2', and the variance-covariance matrix of these values. The pixel value change unit 53 can calculate the likelihood using, as the distribution information (θ) representing the distribution of pixel values, the distribution information obtained when the first image was captured and the distribution information obtained when the second image was captured. In this case, the pixel value change unit 53 calculates the likelihood by using, for the pixel values in the first image I1, the distribution information obtained when the first image was captured, and, for the pixel values in the second image I2 (I2'), the distribution information obtained when the second image was captured. Note that the pixel value change unit 53 can also calculate the likelihoods for the pixel values in the first image and the second image using only one of the distribution information obtained when the first image was captured and the distribution information obtained when the second image was captured.
As a method using a multivariate normal distribution, in addition to the pixel values of the pixel p in the first image I1 and the second image I2', the likelihood can also be defined by combining the output results of image processing filters applied to the first image and the second image. Furthermore, the neighboring pixel values of the pixel p and the output results of the image processing filters at the neighboring pixels may be used.
Note that the weight coefficient W can be obtained by normalizing the weight coefficients W(p) into a predetermined range (for example, [0, 1]) based on the minimum and maximum values of the calculated values.
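The bivariate likelihood of formula (2) can be sketched as follows; this is a generic bivariate normal density written with NumPy, under the assumption that θ is given as a length-2 mean vector and a 2x2 variance-covariance matrix learned from the training data:

```python
import numpy as np

def bivariate_normal_likelihood(x1, x2, mean, cov):
    """Formula (2): W(p) = Pr(I1(p), I2'(p) | theta) under a bivariate normal distribution.

    mean: length-2 vector (means of the region of interest in I1 and I2').
    cov:  2x2 variance-covariance matrix.
    """
    d = np.stack([x1 - mean[0], x2 - mean[1]], axis=-1)  # deviations, shape (..., 2)
    inv = np.linalg.inv(cov)
    maha = np.einsum('...i,ij,...j->...', d, inv, d)     # squared Mahalanobis distance
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * maha) / norm
```

A nonzero covariance term lets the weight reward pixel pairs whose values move together between the two time points, which a pair of independent univariate likelihoods cannot express.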
(Modified Example 3: Modification 2 of the definition of the weight)
In the first embodiment and Modified Examples 1 and 2, a likelihood based on the learning data is defined, and the weight coefficient W(p) for changing the pixel value TS(p) in the difference image TS is determined. However, as given by formula (3) below, the pixel values in the images themselves may be used as the weight coefficient. In Modified Example 3, the pixel value change unit 53 performs processing of changing the pixel values in the difference image based on a comparison between the pixel value of each pixel in the first image and the pixel value of each pixel in the second image. The pixel value change unit 53 changes the pixel values in the difference image TS using the weight coefficients W(p) obtained based on the comparison.
W(p) = max(I1(p), I2'(p)) ... (3)
An example to which formula (3) is applicable is a case in which the pixel values in the region of interest to the user are relatively larger than the pixel values in the other regions. More specifically, assume, for example, that attention is paid to a bone region in a CT image. A bone region is known to have a higher X-ray absorption rate than the other regions, and therefore the pixel values in the bone region in the CT image are relatively larger than the pixel values in the other regions. Accordingly, by setting a large weight for a region having large pixel values in the first image I1 or the second image I2', the difference values in the bone region in the difference image TS can be enhanced relative to the other regions.
Note that the weight coefficient W is desirably calculated as a value of 0 or more. The weight coefficients W(p) can be normalized into a predetermined range (for example, [0, 1]) based on the minimum and maximum values of the calculated values.
In this example, the weight coefficient W(p) is set so that the weight becomes larger as the pixel value becomes larger. However, if attention is paid to, for example, a region such as a lung in a CT image, the weight may be set to become larger as the pixel value becomes smaller. More specifically, for example, by correcting the pixel values so that the minimum value becomes 1, taking their reciprocals, and using the results as the weight coefficients W(p), the weight becomes larger as the pixel value becomes smaller.
Note that the calculation of formula (3) has been described as using the pixel value assigned to each pixel. However, the output value of a smoothing filter or an edge enhancement filter may be used as the pixel value. Alternatively, the neighboring pixel values of the pixel p and the output results of an image processing filter at the neighboring pixels may be used.
(Modified Example 4: Modification 3 of the definition of the weight)
As a method of setting the weight coefficient W different from those according to the first embodiment and Modified Examples 1 to 3, the setting values of the display condition of the image displayed on the display unit 36, such as the window level (WL) and window width (WW) of the image displayed on the display unit 36 in step S101, may be used as follows. The display condition includes a setting value representing the median of the pixel value range and a setting value representing the width of the pixel value range relative to the median. In Modified Example 4, the display processing unit 54 displays the first image and the second image on the display unit 36, and the pixel value change unit 53 performs processing of changing the pixel values in the difference image based on the display condition of the display unit 36. The pixel value change unit 53 changes the pixel values in the difference image by weight coefficients obtained based on the display condition.
In general, the window level (WL) and window width (WW) set as the display condition of a CT image determine the range of pixel values expressed on the screen by the gray scale (tones) from black to white. The window level (WL) is the setting value representing the median of the pixel value range expressed by the gray scale, and the window width (WW) is the setting value representing the width of the pixel value range expressed by the gray scale. That is, on the screen, pixel values of (WL - WW/2) or less are expressed in black, and pixel values of (WL + WW/2) or more are expressed in white. In other words, since pixel values falling outside the pixel value range specified by the window level (WL) and window width (WW) are clipped, pixels whose values differ in the original image are nevertheless expressed in the same black or white on the screen.
Using this, the pixel value change unit 53 performs processing of changing the pixel values in the difference image by weight coefficients obtained based on the setting value representing the median of the pixel value range and the setting value representing the width of the pixel value range. If the pixel value in the first image I1 or the second image I2' falls within the range [WL - WW/2, WL + WW/2], the weight coefficient W is set to 1; otherwise, the weight coefficient W is set to 0. This makes it possible to extract only the differences within the range of pixel values of interest to the user. Note that if the settings of the window level (WL) and window width (WW) differ between the first image I1 and the second image I2', the pixel value change unit 53 can use the setting values of the first image I1 serving as the reference in the deformable image registration. Note also that the pixel value change unit 53 may use the setting values of the window level (WL) and window width (WW) for the second image I2'. Alternatively, the pixel value change unit 53 may use the pixel value range of the portion where the pixel value range of interest of the first image I1 and the pixel value range of interest of the second image I2' overlap.
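The binary window-based weight of Modified Example 4 can be sketched as follows (function and array names are illustrative assumptions):

```python
import numpy as np

def window_weight(i1, i2d, wl, ww):
    """Modified Example 4: W(p) = 1 if I1(p) or I2'(p) lies in [WL - WW/2, WL + WW/2], else 0."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0

    def in_window(img):
        return (img >= lo) & (img <= hi)

    return (in_window(i1) | in_window(i2d)).astype(float)
```

Taking the union over both images mirrors the max in formula (1): a pixel keeps its difference value as long as it falls inside the displayed window in at least one of the two images.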
<Second embodiment> (Pixels outside the region of interest are extracted and used to change the pixel values in the difference result)
In the first embodiment, the pixel value change unit uses the distribution of pixel values in the region of interest, the result of extracting pixels in the region of interest, and the like, to change the pixel values in the difference result. Alternatively, the distribution of pixel values outside the region of interest and the result of extracting such pixels can be used to change the pixel values in the difference result.
This embodiment will exemplify the following case: if a bone is set as the region of interest, information of regions outside the region of interest, such as the heart and liver, is extracted or enhanced, and the pixel values in the difference result are changed based on this information.
The arrangement of the image processing apparatus according to the second embodiment is the same as that according to the first embodiment, except for a function added to the pixel value change unit 53. In the second embodiment, as shown in Fig. 7, after obtaining the pixel value information of the region of interest to the user, the pixel value change unit 53 obtains information in which the regions not of interest to the user are enhanced or extracted from the input image data group, and changes the pixel values in the difference image based on this information.
The processing procedure of the image processing apparatus 10 according to the second embodiment will be described with reference to Figs. 7 to 10 and Figs. 11A to 11D.
Fig. 7 is a flowchart illustrating an example of the processing, starting from the data obtaining processing in the image processing apparatus 10, of generating a difference image with changed pixel values. Among the steps of this flowchart, steps S1010 to S1050 and S1070 perform the same processing as in steps S101 to S105 and S107 according to the first embodiment shown in Fig. 2. That is, the processing in steps S1055 and S1060 differs from that of the first embodiment. Only the added processing and the differences from the first embodiment will be described below.
(Step S1055: Extract regions outside the region of interest)
In step S1055, the pixel value change unit 53 extracts the regions not of interest to the user from both the reference image and the deformed deformation target image. For example, if the region of interest is a bone, organ regions such as the heart and liver are regions outside the region of interest. These regions outside the region of interest are extracted using characteristics of the pixel value information, for example by the processing procedure shown in Fig. 8.
As shown in Fig. 4, compared with a bone structure, an organ structure such as the liver has the characteristic that a set of pixels with similar pixel values exists so as to form a mass. In contrast, in a bone structure, each bone generally has a small, elongated structure, and the pixel values of the surface of the bone (cortical bone) and its inside (cancellous bone) vary greatly. Therefore, the regions other than the bone region can be extracted by enhancing the mass regions formed from pixel groups having similar pixel values and extracting those regions from the image by a known extraction method such as threshold processing.
From the viewpoint of separating the bone region and the regions other than the bone region in a CT image, the effect of enhancing mass regions based on a feature amount of the continuity of pixel values will additionally be explained. On a CT image, a bone region has a high value of about 150 [H.U.] or more. Organ regions such as the heart and liver have pixel values of about 60 [H.U.] or more, but these pixel values may become higher depending on the examination method (mainly through the use of a contrast medium). Therefore, depending on the contrast imaging conditions, organ regions, especially those such as the heart and liver, have a pixel value distribution that overlaps the pixel value distribution of the bone region. Consequently, in some cases, a region extraction method such as threshold processing on the pixel values themselves cannot cope with region extraction under various contrast imaging conditions. On the other hand, if a feature amount based on the continuity of pixel values is calculated, a constant calculated value is obtained regardless of the contrast imaging conditions, as long as the organ region as the extraction target is stained uniformly. Therefore, by using a feature amount that enhances mass regions, rather than the pixel values themselves, in the extraction of the liver or heart, a region extraction result that is stable against differences in the contrast imaging conditions can be expected.
In the processing procedure shown in Fig. 8, an image of the object is obtained in step S201. In step S202, the image is smoothed to reduce noise in the image. Note that the noise-reduction processing of step S202 can be skipped. Then, in step S203, a parameter (threshold) for evaluating the similarity between pixel values is obtained. The parameter (threshold) is used to calculate the feature amount representing continuity. More specifically, a predetermined parameter may be used, or the parameter may be input by the user via the operating unit 35. Alternatively, the parameter may be determined automatically from the variance of the pixel values in the image. If the parameter is determined automatically in this way, the value of the parameter (threshold) is desirably set larger as the variance becomes larger. In step S204, the feature amount representing the continuity of pixel values is calculated, and the mass regions included in the image are enhanced. For a pixel of interest in the image, the pixel value changing unit 53 sets a plurality of predetermined paths passing through the pixel of interest, calculates, for each path, an evaluation value representing the continuity of pixel values on the path based on the similarity between neighboring pixel values on the path, and obtains the feature amount of the pixel of interest based on the evaluation values obtained for the respective paths (feature amount acquisition). The pixel value changing unit 53 also calculates, based on the obtained feature amount, the likelihood that the pixel of interest is in a mass region. In step S205, the pixel value changing unit 53 serves as an extraction unit for extracting a region from the image, and extracts the mass regions, that is, the regions other than the bone region, using a known extraction method such as threshold processing or graph cut segmentation.
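The first two steps of the Fig. 8 procedure can be outlined as follows. This is a minimal sketch in Python/NumPy; the function names and the rule deriving the threshold from the standard deviation are illustrative assumptions, since the text only requires that a larger variance yield a larger threshold:

```python
import numpy as np

def smooth(image, size=3):
    # Step S202 (optional): simple box smoothing to reduce noise.
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

def choose_threshold(image, scale=0.25):
    # Step S203: similarity threshold derived automatically from the
    # pixel-value spread; set larger as the variance grows. The exact
    # rule is left open by the text -- this linear rule is an assumption.
    return scale * float(np.std(image))
```

Steps S204 (feature calculation) and S205 (extraction) then operate on the smoothed image with this threshold.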
As a method of enhancing the mass regions in step S204, for example, the following method is conceivable: when attention is paid to a given pixel in the image, the number of similar pixels continuously present on a predetermined path passing through the given pixel is counted. In this example, the number of predetermined paths may be one or more, but a plurality of paths is desirable for enhancing mass regions. In particular, it is desirable to set a path at least for each axial direction of the image. That is, when the image is three-dimensional, a path is desirably set at least for each of the three axial directions, and when the image is two-dimensional, at least for each of the two axial directions.
Note that, to calculate the feature amount concerning the continuity of pixel values, the pixels to be set as calculation targets (pixels of interest) can be limited in advance. More specifically, the following method is conceivable: a threshold is provided for the pixel value, and the feature amount is calculated only for pixels having a pixel value equal to or larger than the threshold (or equal to or smaller than the threshold, or smaller than the threshold). Consider the case where the bone in a CT image is set as the region of interest and regions outside the region of interest, such as the heart and liver, are to be extracted or enhanced. In the example according to the present embodiment, the calculation described below need not be performed for pixels having pixel values that pixels in organ regions never take (for example, -100 [H.U.] or less). Skipping the calculation accelerates the feature amount calculation processing described below. In particular, it is effective to set, as calculation targets (pixels of interest), the pixels within the range of pixel values the user pays attention to, obtained in step S1050.
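This pre-selection of calculation targets amounts to a simple mask. A sketch: the -100 [H.U.] lower bound follows the text, while the optional upper bound is an assumption for when a user-specified range (as from step S1050) is available:

```python
import numpy as np

def candidate_mask(image, lower=-100.0, upper=None):
    # Only pixels inside the pixel-value range of interest become
    # calculation targets; e.g. skip pixels at or below about
    # -100 [H.U.], which organ regions never take.
    mask = image > lower
    if upper is not None:
        mask &= image <= upper
    return mask
```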
An example of the processing of enhancing a mass region will be described with reference to Figs. 9, 10, and 11A to 11D.
A method of calculating the continuity of density values in the case shown in Fig. 9 will be described. Fig. 9 shows a case where the set of pixels surrounded by the thick line represents the predetermined path and the pixel of interest is the pixel at x-y coordinates (2, 3). Note that the numerical value written in each pixel in Fig. 9 represents the pixel value of that pixel. In this case, the following method will be described: counting the number of similar pixels continuously present on the predetermined path passing through the pixel of interest.
As shown in Fig. 10, the feature amount calculation starts at the pixel of interest (2, 3). First, the pixel value of the pixel of interest (2, 3) is compared with that of the neighboring pixel (3, 3). If the absolute difference between the pixel values is smaller than the parameter (threshold) obtained in step S203, the pixel values are determined to be continuous, and the calculation proceeds to the next step. In the example of Fig. 10, the pixels (2, 3) and (3, 3), which are the targets of the first calculation, have pixel values 100 and 110, and the absolute difference between the pixel values is 10. If the threshold obtained in step S203 is 20, the comparison result is smaller than the threshold, and the pixels (2, 3) and (3, 3) are therefore determined to have continuity. In this case, at this stage, the distance between the pixels (2, 3) and (3, 3), that is, a value of one pixel width, is temporarily stored as the continuity feature amount (evaluation value). Then, as long as the continuity of the pixels lasts, the pixel pair serving as the calculation target is slid and the calculation is repeated. As a concrete example, the calculation step following the determination that the pixels (2, 3) and (3, 3) have continuity will be described. In the next calculation step, the pixel pair is slid so that the pixels (3, 3) and (4, 3) become the targets, and their pixel values are compared in the same manner as described above. In the second calculation, the pixels (3, 3) and (4, 3) serving as the calculation targets both have a pixel value of 110, and the absolute difference between the pixel values is 0. Since the absolute value is smaller than the threshold of 20 obtained in step S203, the pixels (3, 3) and (4, 3) are determined to have continuity. The width of one pixel is then added to the recorded continuity feature amount, updating the value of the feature amount to a width of two pixels. This calculation is repeated and continues until the absolute difference between the pixel values becomes equal to or larger than the threshold used in the comparison. In the case of Fig. 10, the pixels serving as the calculation targets in the fourth calculation have pixel values 120 and 200, and the absolute difference between them (= 80) exceeds the threshold (= 20), so the calculation stops at that point. A value corresponding to a width of three pixels is stored as the pixel value continuity feature amount at the pixel of interest. Note that the unit of the continuity width may be "pixels" or an actual size such as "mm". In addition, Fig. 10 shows an example in which the continuity is searched for on the positive side of the x direction; however, the continuity may be searched for on the negative side or on both sides. By performing the above calculation for every pixel in the image, the feature amount representing the continuity of pixel values can be calculated for each pixel.
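The counting procedure of Fig. 10 can be transcribed directly into code. A minimal sketch, assuming the absolute-difference criterion and a search on the positive side of one axis only; the returned width is in pixel units:

```python
def run_length(row, start, threshold):
    # Slide the comparison pair along the positive direction, starting
    # at the pixel of interest `start`, and count how many consecutive
    # neighbor pairs have an absolute pixel-value difference below
    # `threshold`. The count is the continuity feature in pixel widths.
    width = 0
    for i in range(start, len(row) - 1):
        if abs(float(row[i + 1]) - float(row[i])) < threshold:
            width += 1  # the run extends by one pixel width
        else:
            break       # continuity ends; stop the calculation
    return width
```

With the pixel values of the Fig. 10 row (100, 110, 110, 120, 200) and a threshold of 20, the calculation stops at the fourth comparison (|200 - 120| = 80 >= 20) and the stored feature is a width of three pixels, matching the walkthrough above. Searching the negative side or both sides is the symmetric extension.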
Note that the continuity of pixel values is determined above using the absolute difference between neighboring pixels; however, other indices may be used to determine continuity. For example, continuity may be determined according to whether the ratio between the pixel values of neighboring pixels is equal to or smaller than a threshold. Alternatively, continuity may be determined based on the pixel values of a plurality of pixels included in the predetermined path. For example, when determining the continuity from pixel (2, 3) to pixel (5, 3) in Fig. 9, the variance of the pixel values of the pixels (2, 3), (3, 3), (4, 3), and (5, 3) may be used. In this case, if the variance of the pixel values is smaller than a given threshold, the pixels can be determined to have continuity.
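The variance-based alternative can be sketched as a predicate over the pixels of a path segment; the threshold value used below is purely illustrative:

```python
import numpy as np

def continuous_by_variance(values, threshold):
    # Alternative continuity criterion: the pixels on the path
    # segment are treated as continuous when the variance of their
    # pixel values stays below a given threshold.
    return float(np.var(values)) < threshold
```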
As another continuity determination condition, the following is conceivable: the continuity calculation stops upon reaching a pixel having a pixel value equal to or larger than a given threshold (or equal to or smaller than the given threshold, or smaller than the given threshold). In particular, the calculation may be stopped if the pixel value of the pixel serving as the calculation target reaches a value outside the range of pixel values the user pays attention to, obtained in step S1050. These continuity determination conditions need not be used alone, and a combination of a plurality of conditions may be used.
The predetermined path for calculating the feature amount representing the continuity of pixel values may be any path in the three-dimensional space, as long as the path passes through the pixel of interest. Furthermore, if a plurality of predetermined paths are used, the feature amounts calculated for the respective paths may be held as multivalued vector data serving as the feature amount of the pixel of interest. Alternatively, the value of a single feature amount may be obtained by combining the continuity feature amounts individually calculated using the plurality of predetermined paths, and may be set as the feature amount of the pixel of interest. Representative examples of the predetermined paths are those shown in Figs. 11A to 11D. Owing to space limitations, Figs. 11A to 11D show an example in which the paths in the four directions, each covering the positive and negative directions through the 8 neighboring pixels on the two-dimensional plane including the pixel of interest, are set as the predetermined paths. In the present embodiment, the same method can be extended to the three-dimensional space, and the 13 directions, each covering the positive and negative directions through the 26 neighboring pixels, can be set as the predetermined paths. Note that a predetermined path can be set from the pixel of interest to the edge of the image. If the body region of the object in the image is known, the predetermined paths are desirably set only within the body region. Alternatively, the length of a predetermined path may be limited to lengths shorter than a predetermined length, and the calculation may be omitted for lengths equal to or longer than the predetermined length.
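The direction counts above, 4 in two dimensions (from the 8-neighborhood) and 13 in three dimensions (from the 26-neighborhood), follow from keeping one offset vector from each positive/negative pair. A sketch:

```python
import itertools

def path_directions(ndim):
    # Enumerate the offset vectors of the full neighborhood
    # (8 neighbors in 2D, 26 in 3D) and keep one vector per
    # positive/negative pair, since each path covers both sides:
    # 4 directions in 2D, 13 in 3D.
    dirs = []
    for offset in itertools.product((-1, 0, 1), repeat=ndim):
        if all(c == 0 for c in offset):
            continue  # the zero offset is the pixel of interest itself
        if tuple(-c for c in offset) in dirs:
            continue  # opposite direction already covered
        dirs.append(offset)
    return dirs
```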
When pixel values are uniformly present along a predetermined path, the output of the feature amount representing the continuity of pixel values tends to be large. Therefore, in a mass region formed by a group of pixels with similar pixel values (such as the liver or the heart), the output value of the feature amount becomes large on each of the plurality of paths. Conversely, in a region with a narrow shape, such as a rib, the output value becomes large only on the path along the shape and becomes small on each of the remaining paths. By exploiting this, the feature amount representing the continuity of pixel values is calculated for each of the plurality of predetermined paths, and the results are used in combination, making it possible to enhance the mass regions (the regions other than the bone region) such as the liver. As the method of combining the feature amount calculation results of the plurality of paths, methods of obtaining the average, the median, the maximum, the minimum, the difference or the ratio between the minimum and the maximum, or the variance are available.
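Combining the per-path evaluation values into one feature can use any of the statistics just listed. A sketch; the method names are illustrative:

```python
import numpy as np

def combine_path_features(features, method="median"):
    # Combine the per-path continuity features of one pixel of
    # interest into a single value. A mass region scores high on
    # every path, while an elongated structure such as a rib scores
    # high on only one path, so statistics like the median or
    # minimum suppress ribs while keeping the liver or heart.
    f = np.asarray(features, dtype=float)
    if method == "mean":
        return float(f.mean())
    if method == "median":
        return float(np.median(f))
    if method == "min":
        return float(f.min())
    if method == "max":
        return float(f.max())
    if method == "range":  # difference between maximum and minimum
        return float(f.max() - f.min())
    if method == "variance":
        return float(f.var())
    raise ValueError(method)
```

For example, per-path features like (5, 6, 5, 6) (mass) keep a large minimum, whereas (9, 1, 0, 1) (rib-like) collapse to a small minimum or median.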
Note that the method of calculating, based on the feature amounts of a plurality of paths, the likelihood that the pixel of interest is in the region to be extracted (the method of enhancing the mass region) is not limited to the above method, and may be formed using a technique such as machine learning. For example, when training data defining the region to be extracted is available, a discriminator that calculates the feature amount for each path and determines, based on the feature amounts, whether the pixel of interest is in the region to be extracted can be created by learning. At this time, the feature amounts may be converted into rotation-invariant feature amounts, and the learning may be performed based on the converted values. The created discriminator can then be used to extract the region from the image. Note that an example of the calculation method for converting the feature amounts into rotation-invariant feature amounts is as follows: among all the paths, the path with the largest feature amount is set as the X-axis, and among the paths orthogonal to it, the path with the largest feature amount is set as the Y-axis.
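This rotation-invariant conversion can be sketched as follows, under the assumption that the per-path features are keyed by integer direction vectors and that ties are broken arbitrarily:

```python
import numpy as np

def invariant_axes(features):
    # `features` maps direction vectors (tuples) to per-path feature
    # values. The direction with the largest feature defines the new
    # X-axis; among the directions orthogonal to it, the largest
    # defines the new Y-axis. Reordering the feature vector into this
    # frame removes the dependence on the original orientation.
    x_dir = max(features, key=features.get)
    ortho = [d for d in features
             if d != x_dir and np.dot(d, x_dir) == 0]
    y_dir = max(ortho, key=features.get)
    return x_dir, y_dir
```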
In step S205, the mass regions (the regions other than the bone region) enhanced in step S204 are extracted by a known extraction method such as threshold processing or graph cut segmentation. Note that, in this extraction processing, the pixel value itself or other feature amounts may be used for each pixel in combination with the feature amount for enhancing the mass regions.
With the above processing, for each pixel in the image, a plurality of predetermined paths passing through the pixel are used, the feature amount representing the continuity of pixel values is calculated based on the similarity between neighboring pixel values on each path, and a region is extracted based on the feature amount. This makes it possible to extract the mass regions other than the bone region, even those that include pixel values close to the pixel values in the bone region. That is, using the above feature amounts, each representing continuity, solves the problem that the pixel value of bone is close to that of an organ that has undergone contrast imaging, which makes it difficult to discriminate between bone and organ based on pixel values alone.
(Step S1060: Change the pixel values in the difference image)
In step S1060, the pixel value changing unit 53 can assign probability 0 to the regions extracted in step S1055 and probability 1 to the regions outside the extracted regions, and use these probabilities as the weight coefficient W. Alternatively, in addition to the weight coefficient W of the first embodiment, the extraction result of the pixels outside the region of interest may be used as a second weight coefficient W2. That is, the pixel value changing unit 53 can change the pixel values in the difference image using a plurality of weight coefficients by calculating TS(p) × W(p) × W2(p) from the difference value TS(p), the weight coefficient W(p), and the second weight coefficient W2(p) of a pixel p in the difference image TS. The combination of the plurality of weight coefficients may also be a linear combination such as TS(p) × (W(p) + W2(p)).

The case of extracting the pixels outside the region of interest has been described above. However, the weight coefficient W2 may instead be obtained according to the value of the feature amount representing the continuity of pixel values. In the present embodiment, in which the bone region is set as the region of interest, the weight coefficient by which the pixel value in the difference image is multiplied is preferably configured to become smaller (closer to 0) as the value of the feature amount becomes larger, and larger (closer to 1) as the value of the feature amount becomes smaller. Note that, in this case, the region extraction processing in step S205 can be skipped. By setting the weight coefficient in this way, the following effect can be expected: when attention is paid to the difference values of bone, the difference values in mass regions outside the region of interest, such as the heart and liver, are relatively reduced.
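The weighting of the difference image with one or two coefficient maps can be sketched as below; the `mode` parameter selecting between the product TS(p) × W(p) × W2(p) and the linear combination TS(p) × (W(p) + W2(p)) is an illustrative parameterization:

```python
import numpy as np

def weight_difference(ts, w, w2=None, mode="product"):
    # Change the pixel values of the difference image TS with one or
    # two weight maps: TS(p) * W(p) * W2(p), or the linear variant
    # TS(p) * (W(p) + W2(p)) mentioned in the text.
    ts = np.asarray(ts, dtype=float)
    w = np.asarray(w, dtype=float)
    if w2 is None:
        return ts * w
    w2 = np.asarray(w2, dtype=float)
    if mode == "product":
        return ts * w * w2
    return ts * (w + w2)  # linear combination
```

With W2 set to 0 on the extracted organ regions and 1 elsewhere, the product form suppresses difference values outside the region of interest while leaving bone differences untouched.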
With the image processing technique according to the present embodiment, the weights of the pixel values on the difference image can be adjusted based on information about the regions the user is not interested in. This processing can suppress the depiction of changes at the positions of the regions the user is not interested in, thereby improving the visibility of changes at the positions the user pays attention to.
Note that the mass-region feature amount calculation method described in step S1055 of the above embodiment can also be used for purposes other than improving the visibility of the region of interest on the difference image. For example, for another purpose such as the extraction or medical image analysis of an organ region such as the heart or liver, the feature amount can be used as a value representing the likelihood of being an organ region.
Other Embodiments
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiments and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiments, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiments. The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Embodiments of the present invention can also be realized by the following method: software (a program) that performs the functions of the above-described embodiments is supplied to a system or an apparatus via a network or various storage media, and a computer (or a central processing unit (CPU) or micro processing unit (MPU)) of the system or apparatus reads out and executes the program.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (24)
1. An image processing apparatus comprising:
an obtaining unit configured to obtain a first image of an object and a second image of the object;
a difference unit configured to obtain a difference image after registration of the first image and the second image; and
a changing unit configured to perform processing of changing a pixel value in the difference image based on a likelihood calculated using a pixel value in the first image and a pixel value in the second image.
2. The image processing apparatus according to claim 1, wherein the changing unit performs the processing of changing the pixel value in the difference image based on the larger one of a likelihood calculated using the pixel value in the first image and distribution information about the distribution of pixel values in a region of interest, and a likelihood calculated using the pixel value in the second image and the distribution information about the distribution of pixel values in the region of interest.
3. The image processing apparatus according to claim 1, wherein the changing unit performs the processing of changing the pixel value in the difference image based on a likelihood calculated using the pixel value of a pixel in the first image and the pixel values of the neighboring pixels of that pixel, and the pixel value of the pixel in the second image and the pixel values of the neighboring pixels of that pixel.
4. The image processing apparatus according to claim 1, wherein the changing unit performs the processing of changing the pixel value in the difference image based on a likelihood calculated using distribution information about the distribution of pixel values in a region of interest and a pixel value obtained from the pair of the pixel value in the first image and the pixel value in the second image.
5. The image processing apparatus according to claim 1, further comprising:
a deformation unit configured to register the first image and the second image by deforming the second image, based on the correspondence between the position of each pixel in the first image and the position of each pixel in the second image, so that each pixel in the second image matches the corresponding pixel in the first image.
6. The image processing apparatus according to claim 5, wherein the difference unit obtains pixel values at corresponding positions from the registered images, and obtains the difference image by performing difference processing on the obtained pixel values.
7. The image processing apparatus according to claim 1, further comprising:
a display processing unit configured to display the first image and the second image on a display unit,
wherein, for each pixel in the difference image, the changing unit obtains, based on a display condition of the display unit, pixel value information about the range of pixel values in a region of interest.
8. The image processing apparatus according to claim 7, wherein the changing unit obtains the pixel value information about the range of pixel values in the region of interest based on a display condition of at least one of the first image and the second image.
9. The image processing apparatus according to claim 1, wherein, for each pixel in the difference image, the changing unit obtains, based on information input via an operating unit, pixel value information about the range of pixel values in a region of interest.
10. The image processing apparatus according to claim 7, wherein the changing unit changes, based on the pixel value information, distribution information about the distribution of pixel values in the region of interest.
11. The image processing apparatus according to claim 10, further comprising:
a storage unit configured to store distribution information representing the distribution of pixel values in a region of interest,
wherein the storage unit stores a plurality of pieces of distribution information corresponding to regions of interest of the object, and
the changing unit sets the distribution information in accordance with distribution information obtained from the storage unit based on the pixel value information.
12. The image processing apparatus according to claim 1, wherein the changing unit changes the pixel value in the difference image using a weight coefficient obtained based on the likelihood.
13. An image processing apparatus comprising:
an obtaining unit configured to obtain a first image of an object and a second image of the object;
a difference unit configured to obtain a difference image after registration of the first image and the second image; and
a changing unit configured to perform processing of changing a pixel value in the difference image based on a comparison between the pixel value of each pixel in the first image and the pixel value of each pixel in the second image.
14. The image processing apparatus according to claim 13, wherein the changing unit changes the pixel value in the difference image using a weight coefficient obtained based on the comparison.
15. An image processing apparatus comprising:
an obtaining unit configured to obtain a first image of an object and a second image of the object;
a difference unit configured to obtain a difference image after registration of the first image and the second image;
a display processing unit configured to display the first image and the second image on a display unit; and
a changing unit configured to perform processing of changing a pixel value in the difference image based on a display condition of the display unit.
16. The image processing apparatus according to claim 15, wherein the changing unit changes the pixel value in the difference image using a weight coefficient obtained based on the display condition.
17. The image processing apparatus according to claim 16, wherein
the display condition includes a setting value representing the median of a pixel value range and a setting value representing the width of the pixel value range relative to the median, and
the changing unit performs the processing of changing the pixel value in the difference image using a weight coefficient obtained based on the setting value representing the median of the pixel value range and the setting value representing the width of the pixel value range.
18. The image processing apparatus according to claim 1, wherein the first image and the second image are images obtained at different times.
19. An image processing method comprising:
obtaining a first image of an object and a second image of the object;
obtaining a difference image after registration of the first image and the second image; and
performing processing of changing a pixel value in the difference image based on a likelihood calculated using a pixel value in the first image and a pixel value in the second image.
20. An image processing apparatus comprising:
an obtaining unit configured to obtain an image of an object; and
a feature amount obtaining unit configured to, for a pixel of interest in the image, set a plurality of predetermined paths passing through the pixel of interest, calculate, for each of the paths, an evaluation value representing the continuity of pixel values on the path based on the similarity between neighboring pixel values on the path, and obtain a feature amount of the pixel of interest based on the evaluation values obtained for the respective paths.
21. The image processing apparatus according to claim 20, wherein the paths are set to include each axial direction of the image.
22. The image processing apparatus according to claim 20, wherein the feature amount obtaining unit further calculates, based on the feature amount, a likelihood that the pixel of interest is in a mass region.
23. The image processing apparatus according to claim 20, further comprising:
an extraction unit configured to extract a region from the image based on the feature amount.
24. A computer-readable storage medium storing a computer program for causing a computer to execute the steps of the image processing method according to claim 19.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-080477 | 2016-04-13 | ||
JP2016080477 | 2016-04-13 | ||
JP2016170070A JP6877109B2 (en) | 2016-04-13 | 2016-08-31 | Image processing equipment, image processing methods, and programs |
JP2016-170070 | 2016-08-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107292857A true CN107292857A (en) | 2017-10-24 |
CN107292857B CN107292857B (en) | 2021-09-21 |
Family
ID=58644796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710223092.3A Active CN107292857B (en) | 2016-04-13 | 2017-04-07 | Image processing apparatus and method, and computer-readable storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US10388018B2 (en) |
EP (1) | EP3236418B1 (en) |
CN (1) | CN107292857B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111954494A (en) * | 2018-04-09 | 2020-11-17 | 东芝能源系统株式会社 | Medical image processing apparatus, medical image processing method, and program |
US10854419B2 (en) | 2018-09-07 | 2020-12-01 | Toshiba Memory Corporation | Contour extraction method, contour extraction device, and non-volatile recording medium |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018061067A1 (en) * | 2016-09-27 | 2018-04-05 | 株式会社日立ハイテクノロジーズ | Defect inspection device and defect inspection method |
DE112018004148T5 (en) * | 2017-11-02 | 2020-04-30 | Hoya Corporation | PROCESSING DEVICE FOR ELECTRONIC ENDOSCOPE AND ELECTRONIC ENDOSCOPE SYSTEM |
GB2569547B (en) * | 2017-12-19 | 2021-05-12 | Samsung Electronics Co Ltd | Reconstruction of original images from modified images |
US11151726B2 (en) * | 2018-01-10 | 2021-10-19 | Canon Medical Systems Corporation | Medical image processing apparatus, X-ray diagnostic apparatus, and medical image processing method |
JP7383371B2 (en) * | 2018-02-28 | 2023-11-20 | キヤノン株式会社 | Image processing device |
US11055532B2 (en) * | 2018-05-02 | 2021-07-06 | Faro Technologies, Inc. | System and method of representing and tracking time-based information in two-dimensional building documentation |
CN110874821B (en) * | 2018-08-31 | 2023-05-30 | 赛司医疗科技(北京)有限公司 | Image processing method for automatically filtering non-sperm components in semen |
DE112019005308T5 (en) * | 2018-10-25 | 2021-07-22 | Fujifilm Corporation | WEIGHTED IMAGE GENERATING DEVICE, METHOD AND PROGRAM, DETERMINING DEVICE, METHOD AND PROGRAM, AREA EXTRACTION DEVICE, METHOD AND PROGRAM AND DETERMINATOR |
EP3726318B1 (en) * | 2019-04-17 | 2022-07-13 | ABB Schweiz AG | Computer-implemented determination of a quality indicator of a production batch-run that is ongoing |
US11080833B2 (en) * | 2019-11-22 | 2021-08-03 | Adobe Inc. | Image manipulation using deep learning techniques in a patch matching operation |
US11501478B2 (en) | 2020-08-17 | 2022-11-15 | Faro Technologies, Inc. | System and method of automatic room segmentation for two-dimensional laser floorplans |
CN112465886A (en) * | 2020-12-09 | 2021-03-09 | 苍穹数码技术股份有限公司 | Model generation method, device, equipment and readable storage medium |
JP2023128704A (en) * | 2022-03-04 | 2023-09-14 | Canon Inc. | Image processing device, method, program, and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5611000A (en) * | 1994-02-22 | 1997-03-11 | Digital Equipment Corporation | Spline-based image registration |
WO2002050771A1 (en) * | 2000-04-19 | 2002-06-27 | The Victoria University Of Manchester | Image subtraction |
CN1879553A (en) * | 2005-06-15 | 2006-12-20 | Canon Inc. | Method for detecting boundary of heart, thorax and diaphragm, device and storage medium thereof |
CN101447080A (en) * | 2008-11-19 | 2009-06-03 | Xidian University | Method for segmenting HMT image on the basis of nonsubsampled Contourlet transformation |
US20090310843A1 (en) * | 2006-06-26 | 2009-12-17 | Fujifilm Corporation | Image display device |
CN101826159A (en) * | 2009-03-07 | 2010-09-08 | Hon Hai Precision Industry (Shenzhen) Co., Ltd. | Method for realizing partitioned binarization of gray scale image and data processing equipment |
CN101930611A (en) * | 2009-06-10 | 2010-12-29 | Honeywell International Inc. | Multiple view face tracking |
CN102596035A (en) * | 2009-10-09 | 2012-07-18 | Hitachi Medical Corporation | Medical image processing device, x-ray image capturing device, medical image processing program, and medical image processing method |
CN103168462A (en) * | 2011-10-14 | 2013-06-19 | Morpho, Inc. | Image compositing device, image compositing method, image compositing program, and recording medium |
US8737740B2 (en) * | 2011-05-11 | 2014-05-27 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and non-transitory computer-readable storage medium |
CN105027161A (en) * | 2013-02-28 | 2015-11-04 | NEC Corporation | Image processing method and image processing device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IL191615A (en) * | 2007-10-23 | 2015-05-31 | Israel Aerospace Ind Ltd | Method and system for producing tie points for use in stereo-matching of stereoscopic images and method for detecting differences in a photographed scenery between two time points |
JP6071444B2 (en) | 2012-11-07 | 2017-02-01 | Canon Inc. | Image processing apparatus, operation method thereof, and program |
2017
- 2017-04-05 EP EP17164941.1A patent/EP3236418B1/en active Active
- 2017-04-07 CN CN201710223092.3A patent/CN107292857B/en active Active
- 2017-04-11 US US15/484,400 patent/US10388018B2/en active Active
Non-Patent Citations (6)
Title |
---|
E-LIANG CHEN et al.: "An automatic diagnostic system for CT liver image classification", IEEE Transactions on Biomedical Engineering * |
XIN LIU et al.: "A maximum likelihood classification method for image segmentation considering subject variability", 2010 IEEE Southwest Symposium on Image Analysis & Interpretation * |
YAN KANG et al.: "A New Accurate and Precise 3-D Segmentation Method for Skeletal Structures in Volumetric CT Data", IEEE Transactions on Medical Imaging * |
WANG Shiwei: "A Practical Course in Medical Imaging Technology (《医学影像实用技术教程》)", 31 August 2007, China Railway Publishing House * |
DONG Fang: "Texture feature study of liver CT images based on fractal dimension", Medical Journal of the Chinese People's Armed Police Force (《武警医学》) * |
CHEN Lingna: "Liver CT image recognition based on fractal dimension features", Journal of University of South China (Science and Technology) (《南华大学学报(自然科学版)》) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111954494A (en) * | 2018-04-09 | 2020-11-17 | 东芝能源系统株式会社 | Medical image processing apparatus, medical image processing method, and program |
CN111954494B (en) * | 2018-04-09 | 2023-12-01 | 东芝能源系统株式会社 | Medical image processing device, medical image processing method, and recording medium |
US10854419B2 (en) | 2018-09-07 | 2020-12-01 | Toshiba Memory Corporation | Contour extraction method, contour extraction device, and non-volatile recording medium |
TWI736843B (en) * | 2018-09-07 | 2021-08-21 | 日商東芝記憶體股份有限公司 | Contour extraction method, contour extraction device and non-volatile recording medium |
Also Published As
Publication number | Publication date |
---|---|
CN107292857B (en) | 2021-09-21 |
US10388018B2 (en) | 2019-08-20 |
EP3236418B1 (en) | 2020-10-28 |
US20170301093A1 (en) | 2017-10-19 |
EP3236418A3 (en) | 2018-03-21 |
EP3236418A2 (en) | 2017-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107292857A (en) | Image processing apparatus and method and computer-readable recording medium | |
US11514573B2 (en) | Estimating object thickness with neural networks | |
US9792703B2 (en) | Generating a synthetic two-dimensional mammogram | |
US20200074634A1 (en) | Recist assessment of tumour progression | |
El-Baz et al. | Automatic analysis of 3D low dose CT images for early diagnosis of lung cancer | |
JP4311598B2 (en) | Abnormal shadow detection method and apparatus | |
JP6570145B2 (en) | Method, program, and method and apparatus for constructing alternative projections for processing images | |
Pluim et al. | The truth is hard to make: Validation of medical image registration | |
Loog et al. | Filter learning: application to suppression of bony structures from chest radiographs | |
US10692215B2 (en) | Image processing apparatus, image processing method, and storage medium | |
Li et al. | Low-dose CT image denoising with improving WGAN and hybrid loss function | |
US20230007835A1 (en) | Composition-guided post processing for x-ray images | |
US20160275357A1 (en) | Method and system for tracking a region in a video image | |
US20090060332A1 (en) | Object segmentation using dynamic programming | |
US9672600B2 (en) | Clavicle suppression in radiographic images | |
Kurugol et al. | Centerline extraction with principal curve tracing to improve 3D level set esophagus segmentation in CT images | |
EP3896649A1 (en) | Medical image synthesis of abnormality patterns associated with covid-19 | |
Hibbard | Region segmentation using information divergence measures | |
EP4302268A1 (en) | System and methods for inferring thickness of object classes of interest in two-dimensional medical images using deep neural networks | |
Pandey et al. | Recognition of X-rays bones: challenges in the past, present and future | |
CN113538419A (en) | Image processing method and system | |
JP5954846B2 (en) | Shape data generation program, shape data generation method, and shape data generation apparatus | |
JP2016171961A (en) | Image processing device, image processing method, and program | |
Naseem | Cross-modality guided Image Enhancement | |
Nikolikos et al. | Multi-contrast MR Image/Volume Alignment via ECC Maximization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||