CN109559285B - Image enhancement display method and related device

Info

Publication number: CN109559285B (application CN201811259583.4A; also published as CN109559285A)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: target object, image, object region, region, enhanced
Inventor: 徐燕 (Xu Yan)
Assignee: Beijing Neusoft Medical Equipment Co Ltd
Legal status: Active (granted)
Events: application filed by Beijing Neusoft Medical Equipment Co Ltd; priority to CN201811259583.4A; publication of CN109559285A; application granted; publication of CN109559285B


Classifications

    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10116: X-ray image
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular


Abstract

The application provides an image enhancement display method and a related apparatus. The method comprises: acquiring a plurality of collected original images, each of which comprises a target object region; registering the plurality of original images according to the positions of the target object regions in the original images to obtain a plurality of registered images; obtaining an enhanced target object region from the plurality of registered images; taking one of the plurality of original images as a reference image and determining the non-target object region of the reference image; and synthesizing the enhanced target object region with the non-target object region of the reference image to obtain an enhanced image. In the resulting image, the target object region is enhanced for display while the image clarity of the non-target object region is preserved, so that the positional relationship between the target object in the target object region and objects in the non-target object region remains visible.

Description

Image enhancement display method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image enhancement display method and a related apparatus.
Background
In some medical treatment procedures, it may be desirable to implant an implant, such as a stent, into an organ, such as a blood vessel. For example, the implantation of coronary stents into coronary arteries is an important means for interventional therapy of cardiovascular diseases, and the accurate implantation of the implant has an important influence on the therapeutic effect.
To ensure that the implant can be implanted accurately into the organ, imaging is usually performed by means of digital imaging technology. For example, during implantation, X-ray imaging is used to obtain images that display the implant, so that a doctor can observe the state of the implant in the organ and promptly detect abnormal conditions such as implant fracture or an inaccurate implantation position. However, the sharpness of the acquired images is poor due to factors such as image noise.
In order to solve the problem of poor image definition, image enhancement processing needs to be performed on multiple images acquired at different moments. Because the coordinates at which the same implant is displayed may differ among images generated at different times, the multiple images must first be registered according to the displayed coordinates of the implant, and the pixel values of the registered images are then averaged to obtain an enhanced image, in which the image region corresponding to the implant is enhanced.
However, in such an enhanced image, although the image region corresponding to the implant is enhanced, the other regions of the image are blurred because they are not registered, so the positional relationship between the implant and objects in those regions cannot be determined from the enhanced image. Therefore, in the above scenario and other similar scenarios, how to enhance the image region of a target object such as an implant while simultaneously improving the image sharpness of the non-target object region is a technical problem that remains to be solved.
Disclosure of Invention
The technical problem to be solved by the present application is to provide an image enhancement display method, which can enhance a target object region in an image and improve the image definition of a non-target object region.
Therefore, the technical scheme for solving the technical problem is as follows:
the embodiment of the application provides an image enhancement display method, which comprises the following steps:
acquiring a plurality of acquired original images, wherein the plurality of original images respectively comprise target object areas;
registering the plurality of original images according to the positions corresponding to the target object areas in the plurality of original images to obtain a plurality of registered images;
obtaining an enhanced target object region according to the plurality of registration images;
taking one of the plurality of original images as a reference image, and determining a non-target object area of the reference image;
and synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image.
Optionally, obtaining an enhanced target object region according to the plurality of registration images includes:
carrying out weighted average processing on pixel values of the multiple registration images to obtain an initial synthetic image, wherein the initial synthetic image comprises the enhanced target object area;
synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image, comprising:
and synthesizing the non-target object area of the initial synthesized image and the reference image to obtain an enhanced image.
Optionally, synthesizing the initial synthesized image and the non-target object region of the reference image to obtain an enhanced image, including:
determining a target object region and a non-target object region of the initial composite image;
obtaining an enhanced image according to the target object region and the non-target object region of the initial composite image and the non-target object region of the reference image;
wherein the target object region in the enhanced image is obtained from the target object region of the initial composite image, the non-target object region in the enhanced image comprises a transition region adjoining the target object region in the enhanced image and a non-transition region other than the transition region, the transition region is obtained from the non-target object region of the reference image and the non-target object region of the initial composite image, and the non-transition region is obtained from the non-target object region of the reference image.
Optionally, determining the target object region and the non-target object region of the initial synthesized image includes:
generating an image segmentation map according to the initial synthetic image; the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, and the pixel value corresponding to the non-target object region in the image segmentation map is a second pixel value;
obtaining an enhanced image according to the target object region and the non-target object region of the initial synthesized image and the non-target object region of the reference image, including:
obtaining the distance value of each pixel point from the target object region in the image segmentation map according to the image segmentation map;
obtaining a first weight value set and a second weight value set according to the distance value of each pixel point from the target object region in the image segmentation map and a preset maximum distance; the first weight value set comprises weight values corresponding to all pixel points in the initial synthetic image, and the second weight value set comprises weight values corresponding to all pixel points in the reference image;
obtaining an enhanced image from the initial composite image, the reference image, the first set of weight values, and the second set of weight values.
Optionally, obtaining an enhanced target object region according to the plurality of registration images includes:
extracting target object regions in the plurality of registered images respectively;
and carrying out weighted average processing on pixel values of the target object areas of the plurality of registration images to obtain the enhanced target object area.
An embodiment of the present application provides an image enhancement display device, including:
a first obtaining unit configured to obtain a plurality of acquired original images, each of the plurality of original images including a target object region;
a registration unit, configured to register the multiple original images according to positions corresponding to target object regions in the multiple original images, so as to obtain multiple registered images;
a second obtaining unit, configured to obtain an enhanced target object region according to the multiple registration images;
a determination unit configured to determine a non-target object region of a reference image, using one of the plurality of original images as the reference image;
and the synthesis unit is used for synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image.
Optionally, the second obtaining unit is specifically configured to perform weighted average processing on pixel values of the multiple registration images to obtain an initial synthesized image, where the initial synthesized image includes the enhanced target object region;
the synthesis unit is specifically configured to synthesize the non-target object region of the initial synthesized image and the reference image to obtain an enhanced image.
Optionally, the synthesizing unit includes:
a determining subunit, configured to determine a target object region and a non-target object region of the initial synthesized image;
an obtaining subunit, configured to obtain an enhanced image according to a target object region and a non-target object region of the initial synthesized image and a non-target object region of the reference image;
wherein the target object region in the enhanced image is obtained from the target object region of the initial composite image, the non-target object region in the enhanced image comprises a transition region adjoining the target object region in the enhanced image and a non-transition region other than the transition region, the transition region is obtained from the non-target object region of the reference image and the non-target object region of the initial composite image, and the non-transition region is obtained from the non-target object region of the reference image.
Optionally, the determining subunit is specifically configured to generate an image segmentation map according to the initial composite image; the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, and the pixel value corresponding to the non-target object region in the image segmentation map is a second pixel value;
the obtaining subunit includes:
the distance obtaining subunit is used for obtaining the distance value of each pixel point from the target object region in the image segmentation map according to the image segmentation map;
the weight obtaining subunit is configured to obtain a first weight value set and a second weight value set according to the distance value between each pixel point and the target object region in the image segmentation map and a preset maximum distance; the first weight value set comprises weight values corresponding to all pixel points in the initial synthetic image, and the second weight value set comprises weight values corresponding to all pixel points in the reference image;
an image obtaining subunit, configured to obtain an enhanced image according to the initial synthesized image, the reference image, the first set of weight values, and the second set of weight values.
Optionally, the second obtaining unit includes:
an extraction subunit, configured to extract target object regions in the plurality of registration images, respectively;
and the region obtaining subunit is configured to perform weighted average processing on pixel values of the target object regions of the multiple registration images to obtain the enhanced target object region.
According to the technical scheme, the enhanced target object region is obtained from the multiple registered images, the non-target object region of the reference image is also determined, and the enhanced image is synthesized from the target object region and the non-target object region. Therefore, in the enhanced image, not only is the target object region enhanced for display, but the non-target object region of the reference image is also used, so that the image clarity of the non-target object region is improved and the positional relationship between the target object in the target object region and objects in the non-target object region can be shown.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an embodiment of a method provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a plurality of original images provided by an embodiment of the present application;
fig. 3 is an exemplary diagram of a plurality of original images after being registered according to an embodiment of the present application;
FIG. 4 is an exemplary diagram of a synthesized image after registration provided by an embodiment of the present application;
FIG. 5 is an exemplary diagram of a region of interest including a target object region extracted from an initial composite image according to an embodiment of the present application;
FIG. 6 is an exemplary diagram of texture features extracted from the image shown in FIG. 5 according to an embodiment of the present disclosure;
FIG. 7 is an exemplary diagram of the image shown in FIG. 4 after threshold segmentation according to an embodiment of the present disclosure;
FIG. 8 is an exemplary diagram of an enhanced image provided by an embodiment of the present application;
FIG. 9 is an exemplary diagram of an image corresponding to a second set of weight values provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of an embodiment of an apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, an embodiment of the image enhancement display method is provided. The method of this embodiment comprises the following steps:
s101: acquiring a plurality of acquired original images, wherein the plurality of original images respectively comprise target object areas.
The plurality of original images may be images obtained by imaging the same target object at different times, and may be acquired by an acquisition device such as a digital subtraction angiography (DSA) device. For example, a plurality of raw images are obtained by imaging an implanted stent such as a coronary stent at different times with a DSA apparatus.
In the embodiment of the present application, each original image includes a target object region, where the target object region refers to an image region where a target object is displayed, that is, an image region that needs to be enhanced. For example, each of the original images shows an image region in which a stent such as a coronary stent is implanted.
S102: and registering the plurality of original images according to the positions corresponding to the target object areas in the plurality of original images to obtain a plurality of registered images.
Since the position of the target object region may differ from one raw image to another (for example, as shown in fig. 2, in the four raw images acquired at different times, image a, image b, image c, and image d, the coordinates of an implanted stent such as a coronary stent differ because of the motion of the heart), it is necessary to register the multiple original images according to the positions corresponding to the target object regions in them to obtain multiple registered images, in which the positions of the target object regions are aligned.
The target object regions in the plurality of raw images may be detected before registration. The target object region may be detected by providing marker points near the target object. For example, when implanting a stent such as a coronary stent, the stent is mounted on a guide wire, and one marker point is provided on the guide wire at the position corresponding to each end of the stent. Because the X-ray attenuation rate of the marker material is higher than that of the other materials, the image regions corresponding to the marker points appear as black dots in the acquired original image, and the target object region can be identified by detecting these black dots.
When the plurality of original images are registered, one reference image can be selected from them. Coordinate transformation parameters for each of the other original images are calculated from the coordinates of the target object region in the reference image and the coordinates of the target object region in that image, and displacement operations such as rotation are applied to the other images according to these parameters, so that their target object regions are aligned with the target object region of the reference image, yielding the registered images. For example, with image a in fig. 2 as the reference image, the coordinates of the coronary stent in images a, b, c, and d are obtained, the coordinate transformation parameters for images b, c, and d are calculated from these coordinates, and displacement operations such as rotation are applied to images b, c, and d so that the position of the coronary stent in the displaced images is aligned with its position in image a, as shown, for example, in fig. 3. A minimal sketch of this step follows.
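The sketch below assumes a pure translation estimated from the two marker points, using NumPy and SciPy; the embodiment also allows displacement operations such as rotation, which are omitted here, and the function and argument names are illustrative rather than taken from the patent.

    import numpy as np
    from scipy.ndimage import shift

    def register_to_reference(ref_markers, img_markers, img):
        # ref_markers, img_markers: arrays of shape (2, 2) holding the
        # (row, col) coordinates of the two marker points in the
        # reference image and in the image to be registered.
        # Estimate a translation from the marker centroids and shift
        # the image so that its markers align with the reference's.
        t = ref_markers.mean(axis=0) - img_markers.mean(axis=0)
        return shift(img, t, order=1, mode='nearest')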
S103: and obtaining the enhanced target object region according to the plurality of registration images.
In this embodiment, the enhanced target object region may be obtained in various ways, which will be exemplarily described below.
In an alternative embodiment, weighted averaging of pixel values is applied to the plurality of registered images, yielding an initial composite image that includes the enhanced target object region. For example, images a, b, c, and d in fig. 3 are subjected to weighted averaging of pixel values to obtain an initial composite image containing an enhanced image region of the implanted stent such as a coronary stent. It should be noted that, when the original images are registered, only the target object region is registered, based on its position in the original images; the non-target object regions are not registered. Consequently, if the registered images are combined into an initial composite image, the target object region in that image is enhanced while the non-target object regions are blurred. A sketch of the averaging step follows.
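A minimal sketch of the weighted averaging, assuming the registered images are equal-shape NumPy arrays; with no weights given it reduces to a plain per-pixel average. The function name is illustrative.

    import numpy as np

    def initial_composite(registered, weights=None):
        # registered: list of registered images (equal-shape arrays).
        stack = np.stack([im.astype(np.float64) for im in registered])
        # weights=None gives a plain per-pixel average; otherwise
        # weights is a 1-D sequence, one weight per registered image.
        return np.average(stack, axis=0, weights=weights)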
In another optional embodiment, target object regions of the multiple registration images are respectively extracted, and the extracted target object regions of the multiple registration images are subjected to weighted average processing of pixel values, so as to obtain the enhanced target object region. For example, image regions of the coronary artery stent are extracted from the image a, the image b, the image c, and the image d in fig. 3, and weighted average processing of pixel values is performed on the extracted image regions, thereby obtaining an enhanced image region of the stent-implanted such as the coronary artery stent.
S104: and taking one of the plurality of original images as a reference image, and determining a non-target object area of the reference image.
Any one of the plurality of original images may be used as the reference image, and the non-target object region of that reference image is determined. For example, the reference image used during registration may be reused here. If an image other than the registration reference image is used as the reference image, it may be shifted according to the position of the enhanced target object region so that its target object region coincides with the position of the enhanced target object region.
The non-target object region refers to an image region where the target object is not displayed, and corresponds to the target object region. When the non-target object region is specifically determined, the target object region in the reference image may be identified first, and the other image regions except the target object region may be used as the determined non-target object region. For example, an image region of the coronary artery stent is recognized from the image a in fig. 2, and the image region other than the image region in the image a is regarded as a non-target object region.
S105: and synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image.
If the enhanced target object region was obtained directly in S103, it is synthesized directly with the non-target object region of the reference image, and the resulting secondary composite image is used as the enhanced image. If an initial composite image including the enhanced target object region was obtained in S103, the initial composite image may be synthesized with the non-target object region of the reference image, and the resulting secondary composite image used as the enhanced image.
As can be seen, in the embodiment of the present application, not only the enhanced target object region is obtained from the plurality of registered images, but also the non-target object region of the reference image is determined, so that the enhanced image is obtained by synthesizing the target object region and the non-target object region. Therefore, in the enhanced image, not only the target object region is subjected to the enhancement processing, but also the non-target object region of the reference image is used, so that the image clarity of the non-target object region can be improved, and the positional relationship between the target object in the target object region and the object in the non-target object region can be expressed.
Since the target object region of the initial synthesized image is the synthesized image region and the non-target object region of the reference image is the non-synthesized image region, if the target object region in the initial synthesized image and the non-target object region of the reference image are directly synthesized, the transition between the target object region and the non-target object region may be abrupt and unnatural. Therefore, it is possible to obtain a transition region from the non-target object region of the initial synthesized image and the non-target object region of the reference image, thereby solving the above-described problem. This will be explained in detail below.
S105 includes S1051 and S1052.
S1051: a target object region and a non-target object region of the initial composite image are determined.
The target object area of the initial synthesized image can be identified, and the other image areas except the target object area are used as the determined non-target object area.
In an alternative embodiment, an image segmentation map may be generated from the initial composite image, in which the pixel values of the target object region and the non-target object region differ so that the two regions can be distinguished. For example, the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, the pixel value corresponding to the non-target object region is a second pixel value, and the two values are different. The process of generating the image segmentation map is described in detail below.
In order to reduce the workload of the threshold segmentation process, a region of interest (ROI) may be extracted from the initial composite image, where the ROI is the partial image region of the initial composite image that contains the target object region, and threshold segmentation is performed on the ROI to extract the target object region. For example, when marker points are provided near an implanted stent such as a coronary stent, the ROI can be identified by detecting the image regions corresponding to the marker points. As shown in fig. 4, image regions M and N corresponding to the two marker points are identified in the initial composite image, and a rectangular ROI is selected from the coordinates of M and N; the selected ROI is shown in fig. 5. If the coordinates of the image regions M and N corresponding to the two marker points are (X1, Y1) and (X2, Y2) respectively, the coordinates of the four vertices A, B, C, D of the rectangular ROI may be:

A: (min(X1, X2) - offset, min(Y1, Y2) - offset)
B: (min(X1, X2) - offset, max(Y1, Y2) + offset)
C: (max(X1, X2) + offset, max(Y1, Y2) + offset)
D: (max(X1, X2) + offset, min(Y1, Y2) - offset)

where min(X1, X2) and max(X1, X2) are the minimum and maximum of X1 and X2, min(Y1, Y2) and max(Y1, Y2) are the minimum and maximum of Y1 and Y2, and offset is a positive number indicating the distance from the image region corresponding to either marker point to the nearest edge of the ROI.
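A direct transcription of the vertex formulas above; the function and argument names are illustrative.

    def roi_bounds(m, n, offset):
        # m, n: (x, y) coordinates of the image regions of the two
        # marker points; offset: positive margin around the markers.
        (x1, y1), (x2, y2) = m, n
        x_lo, x_hi = min(x1, x2) - offset, max(x1, x2) + offset
        y_lo, y_hi = min(y1, y2) - offset, max(y1, y2) + offset
        # Vertices A, B, C, D of the rectangular ROI.
        return (x_lo, y_lo), (x_lo, y_hi), (x_hi, y_hi), (x_hi, y_lo)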
Texture features of the image are then extracted from the ROI in fig. 5. A texture feature is an image feature that characterizes the spatial distribution of pixel values and may be obtained from statistics of the pixel values in a region block associated with each pixel point. For example, a region block centered on each pixel point may be taken in the ROI, and the standard deviation of the pixel values in that block used as the texture feature of the pixel point. Specifically, the standard deviation σ of the pixel values in the region block centered on a pixel point Q is:

σ = sqrt( (1/N) · Σ_{i=1..N} (x_i - μ)² )

where N is the total number of pixel points in the region block, x_i is the pixel value of the i-th pixel point in the block, and μ is the average pixel value over all pixel points in the block. The calculated standard deviation σ is taken as the texture feature of the pixel point Q.
The size of the region block may be set according to the image size; for example, it may be set to 11 pixels × 11 pixels. The texture features extracted from the image are shown in fig. 6. In the target object region of fig. 5, objects such as the stent and the guide wire cause large variations in pixel value, so the calculated standard deviation is higher there than in the non-target object region. The image is therefore threshold-segmented on the standard deviation, the target object region is identified within the ROI, and an image segmentation map is generated from the identified target object region. For example, if the standard deviation corresponding to a pixel point is greater than a preset threshold, the pixel value of that point is set to 1; otherwise it is set to 0. All pixel values corresponding to image regions outside the ROI in the initial composite image are likewise set to 0, producing an image segmentation map such as the binary map shown in fig. 7, in which the black portion has pixel value 0, the white portion has pixel value 1, and the white portion indicates the position of the target object region. The result of the threshold segmentation may be stored in a memory. A sketch of this texture-based segmentation follows.
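A sketch of the texture-based segmentation with SciPy, computing the local standard deviation from local means of x and x². The fallback threshold heuristic is an assumption; the embodiment only requires some preset threshold.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def segment_target(roi, block=11, threshold=None):
        roi = roi.astype(np.float64)
        mean = uniform_filter(roi, size=block)           # local mean
        mean_sq = uniform_filter(roi * roi, size=block)  # local mean of squares
        # Local standard deviation; clamp to avoid tiny negative values
        # caused by floating-point rounding.
        sigma = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        if threshold is None:
            # Assumed heuristic, not from the patent.
            threshold = sigma.mean() + 2.0 * sigma.std()
        return (sigma > threshold).astype(np.uint8)      # 1 = target object region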
In the embodiment of the present application, the target object region may also be identified in other ways. For example, besides the standard deviation, texture features such as the local entropy of the image, the local maximum pixel value, or the local minimum pixel value may be used; and besides threshold segmentation, other image segmentation methods may be used, such as segmentation based on active contours or segmentation based on the watershed algorithm.
S1052: after the target object region and the non-target object region of the initial synthesized image are determined, an enhanced image may be obtained according to the target object region of the initial synthesized image, the non-target object region of the initial synthesized image, and the non-target object region of the reference image.
The enhanced image may be as shown in fig. 8. The target object region of the enhanced image may be obtained from the target object region of the initial synthesized image, for example, the target object region of the initial synthesized image may be directly used as the target object region in the enhanced image. And the non-target object region in the enhanced image may include a transition region with the target object region in the enhanced image and a non-transition region other than the transition region.
The transition region is the image region that transitions from the target object region to the non-target object region, that is, the part of the non-target object region adjacent to the target object region. In other words, in the enhanced image, the target object region of the initial composite image does not pass directly into the non-target object region of the reference image, but passes into it through the transition region. The transition region may be obtained from the non-target object region of the reference image and the non-target object region of the initial composite image; for example, the pixel value of a pixel point in the transition region may be a weighted sum of the corresponding pixel values in the non-target object region of the reference image and in the non-target object region of the initial composite image. Because the transition region thus incorporates the non-target object region of the initial composite image, the problem of abrupt and unnatural transitions is solved.
In order to realize the synthesis of the enhanced image in the above manner, the weight value sets of the initial synthesized image and the reference image may be set, and the synthesis may be performed according to the weight value sets, which will be described in detail below.
S1052 may include S1052A, S1052B, and S1052C.
S1052A: obtaining, according to the image segmentation map, the distance value of each pixel point from the target object region in the image segmentation map.
For example, for a pixel point in fig. 7: if its pixel value is 1, the pixel point is located in the target object region, so its distance value from the target object region may be set to 0; if its pixel value is 0, its distance value from the target object region may be the distance from this pixel point to the nearest pixel point whose value is 1.
Here the distance between two points with coordinates (x1, y1) and (x2, y2) may be defined as the Euclidean distance:

d = sqrt( (x1 - x2)² + (y1 - y2)² )
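This per-pixel distance to the nearest target pixel is exactly a Euclidean distance transform; a sketch with SciPy, where the function name is illustrative:

    from scipy.ndimage import distance_transform_edt

    def distance_map(seg):
        # seg: binary segmentation map, 1 inside the target object region.
        # distance_transform_edt measures, for each nonzero pixel of its
        # input, the distance to the nearest zero pixel, so the mask is
        # inverted: target pixels get distance 0, every other pixel gets
        # the distance to the nearest pixel whose value in seg is 1.
        return distance_transform_edt(seg == 0)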
S1052B: obtaining a first weight value set and a second weight value set according to the distance value of each pixel point from the target object region in the image segmentation map and a preset maximum distance.
The preset maximum distance is used to indicate a distance from the outer boundary of the target object region to the outer boundary of the transition region, that is, the preset maximum distance is related to the area size of the transition region. Specifically, the larger the preset maximum distance is, the larger the area of the transition region is; conversely, the smaller the preset maximum distance, the smaller the area of the transition region.
The first weight value set comprises the weight values corresponding to the pixel points of the initial composite image. Specifically, when the distance value from a pixel point to the target object region in the image segmentation map is 0, the pixel point lies inside the target object region, and its weight value in the first weight value set may be 1, so that the target object region of the enhanced image is obtained entirely from the target object region of the initial composite image; that is, if the distance value E(i, j) of a pixel point is 0, its corresponding weight value W_enhance(i, j) is 1. When the distance value is greater than 0 and less than the preset maximum distance, the pixel point lies in the transition region of the non-target object region, and its weight value in the first weight value set lies between 0 and 1 and depends on the distance to the target object region: the larger the distance value, the smaller the weight value, and vice versa. In this way the transition region of the enhanced image is obtained jointly from the non-target object region of the initial composite image and the non-target object region of the reference image. Specifically, when 0 < E(i, j) < max_distance, the weight value is W_enhance(i, j) = 1 - E(i, j)/max_distance, where E(i, j) is the distance value from the pixel point to the target object region and max_distance is the preset maximum distance. When the distance value is greater than or equal to the preset maximum distance, the pixel point lies in the non-transition region of the non-target object region, and its weight value in the first weight value set is 0, so that the non-transition region of the enhanced image is unrelated to the non-target object region of the initial composite image; that is, if E(i, j) ≥ max_distance, then W_enhance(i, j) = 0. Throughout, i > 0 and j > 0.
The second weight value set comprises the weight values corresponding to the pixel points of the reference image. Specifically, when the distance value from a pixel point to the target object region in the image segmentation map is 0, the pixel point lies inside the target object region, and its weight value in the second weight value set may be 0, so that the target object region of the enhanced image is unrelated to the target object region of the reference image; that is, if E(i, j) = 0, then W_ori(i, j) = 0. When the distance value is greater than 0 and less than the preset maximum distance, the pixel point lies in the transition region of the non-target object region, and its weight value in the second weight value set lies between 0 and 1 and depends on the distance to the target object region: the larger the distance value, the larger the weight value, and vice versa, so that the transition region of the enhanced image is obtained jointly from the non-target object region of the initial composite image and the non-target object region of the reference image. Specifically, when 0 < E(i, j) < max_distance, the weight value is W_ori(i, j) = E(i, j)/max_distance. When the distance value is greater than or equal to the preset maximum distance, the pixel point lies in the non-transition region of the non-target object region, and its weight value in the second weight value set is 1, so that the non-transition region of the enhanced image is obtained entirely from the non-target object region of the reference image; that is, if E(i, j) ≥ max_distance, then W_ori(i, j) = 1. The second weight value set W_ori may be represented as shown in fig. 9, in which the white image portion has value 1, the black image portion has value 0, and the transition region between them takes values strictly between 0 and 1.
After the first weight value set is calculated, the second weight value set may be obtained from it, or vice versa. For example, after computing the first weight value set W_enhance, a second weight value set W_ori of the same size is obtained by setting each element W_ori(i, j) to 1 - W_enhance(i, j), where W_enhance(i, j) is the corresponding element of W_enhance. A sketch of the weight computation follows.
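A sketch of both weight value sets from the distance map E and max_distance; the clipping reproduces all three cases (target region, transition region, non-transition region) in one expression. The function name is illustrative.

    import numpy as np

    def weight_sets(E, max_distance):
        # E == 0 inside the target region   -> W_enhance = 1, W_ori = 0
        # 0 < E < max_distance (transition) -> linear ramp between the two
        # E >= max_distance (non-transition) -> W_enhance = 0, W_ori = 1
        W_enhance = np.clip(1.0 - E / max_distance, 0.0, 1.0)
        W_ori = 1.0 - W_enhance
        return W_enhance, W_ori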
It should be noted that, besides computing distance transform values, the first weight value set and the second weight value set may also be obtained from the image segmentation map shown in fig. 7, for example by applying Gaussian smoothing to the image segmentation map; the embodiment of the present application is not limited in this respect.
S1052C: obtaining an enhanced image from the initial composite image, the reference image, the first set of weight values, and the second set of weight values.
Optionally, a first matrix is obtained by multiplying each weight value in the first weight value set by the corresponding pixel value of the initial composite image, a second matrix is obtained by multiplying each weight value in the second weight value set by the corresponding pixel value of the reference image, and the sum of the first matrix and the second matrix is taken as the image matrix of the enhanced image.
For example, the enhanced image I_comb is:

I_comb = I_ori .* W_ori + I_enhance .* W_enhance
where I_ori is the image matrix of the reference image, I_enhance is the image matrix of the initial composite image, and .* denotes element-wise multiplication of two matrices, i.e., multiplication of corresponding elements. I_comb may be as shown in fig. 8: the coronary stent region of the image retains the enhanced effect, while the other regions retain the appearance of the original image. A minimal sketch of this composition follows.
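In NumPy the .* of the formula is ordinary * on arrays, so the composition is a one-liner; a sketch assuming the pieces above, with an illustrative name:

    def compose_enhanced(I_enhance, I_ori, W_enhance, W_ori):
        # Per-pixel blend of the initial composite and the reference image.
        return I_enhance * W_enhance + I_ori * W_ori

    # Illustrative end-to-end use of the sketches in this description:
    # seg = segment_target(roi)
    # W_enhance, W_ori = weight_sets(distance_map(seg), max_distance)
    # I_comb = compose_enhanced(I_enhance, I_ori, W_enhance, W_ori)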
Corresponding to the above method embodiments, the present application further provides corresponding apparatus embodiments, which are specifically described below.
Referring to fig. 10, an embodiment of the image enhancement display apparatus is provided. This embodiment comprises: a first obtaining unit 1001, a registration unit 1002, a second obtaining unit 1003, a determination unit 1004, and a synthesis unit 1005.
A first obtaining unit 1001 configured to obtain a plurality of acquired original images, where each of the plurality of original images includes a target object region;
a registration unit 1002, configured to register the multiple original images according to positions corresponding to target object regions in the multiple original images, so as to obtain multiple registered images;
a second obtaining unit 1003, configured to obtain an enhanced target object region according to the multiple registration images;
a determination unit 1004 for determining a non-target object region of a reference image, using one of the plurality of original images as the reference image;
a synthesizing unit 1005, configured to synthesize the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image.
Optionally, the second obtaining unit 1003 is specifically configured to perform weighted average processing on pixel values of the multiple registration images to obtain an initial composite image, where the initial composite image includes the enhanced target object region;
the synthesizing unit 1005 is specifically configured to synthesize the non-target object region of the initial synthesized image and the reference image to obtain an enhanced image.
Optionally, the synthesizing unit 1005 includes:
a determining subunit, configured to determine a target object region and a non-target object region of the initial synthesized image;
an obtaining subunit, configured to obtain an enhanced image according to a target object region and a non-target object region of the initial synthesized image and a non-target object region of the reference image;
wherein the target object region in the enhanced image is obtained from the target object region of the initial composite image, the non-target object region in the enhanced image comprises a transition region adjoining the target object region in the enhanced image and a non-transition region other than the transition region, the transition region is obtained from the non-target object region of the reference image and the non-target object region of the initial composite image, and the non-transition region is obtained from the non-target object region of the reference image.
Optionally, the determining subunit is specifically configured to generate an image segmentation map according to the initial composite image; the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, and the pixel value corresponding to the non-target object region in the image segmentation map is a second pixel value;
the obtaining subunit includes:
the distance obtaining subunit is used for obtaining the distance value of each pixel point from the target object region in the image segmentation map according to the image segmentation map;
the weight obtaining subunit is configured to obtain a first weight value set and a second weight value set according to the distance value between each pixel point and the target object region in the image segmentation map and a preset maximum distance; the first weight value set comprises weight values corresponding to all pixel points in the initial synthetic image, and the second weight value set comprises weight values corresponding to all pixel points in the reference image;
an image obtaining subunit, configured to obtain an enhanced image according to the initial synthesized image, the reference image, the first set of weight values, and the second set of weight values.
Optionally, the second obtaining unit 1003 includes:
an extraction subunit, configured to extract target object regions in the plurality of registration images, respectively;
and the region obtaining subunit is configured to perform weighted average processing on pixel values of the target object regions of the multiple registration images to obtain the enhanced target object region.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, or the part of it contributing over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (4)

1. An image enhancement display method, comprising:
acquiring a plurality of acquired original images, wherein the plurality of original images respectively comprise target object areas; the target object area is an image area where a target object is displayed, and the target object comprises an implant;
registering the plurality of original images according to the positions corresponding to the target object areas in the plurality of original images to obtain a plurality of registered images;
obtaining an enhanced target object region according to the plurality of registration images;
taking one of the plurality of original images as a reference image, and determining a non-target object area of the reference image;
synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image;
obtaining an enhanced target object region from the plurality of registered images, comprising:
carrying out weighted average processing on pixel values of the multiple registration images to obtain an initial synthetic image, wherein the initial synthetic image comprises the enhanced target object area;
synthesizing the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image, comprising:
synthesizing the initial synthesized image and the non-target object area of the reference image to obtain an enhanced image;
synthesizing the initial synthesized image and the non-target object region of the reference image to obtain an enhanced image, including:
determining a target object region and a non-target object region of the initial composite image;
obtaining an enhanced image according to the target object region and the non-target object region of the initial composite image and the non-target object region of the reference image;
wherein the target object region in the enhanced image is obtained from the target object region of the initial composite image, the non-target object region in the enhanced image comprises a transition region adjoining the target object region in the enhanced image and a non-transition region other than the transition region, the transition region is obtained from the non-target object region of the reference image and the non-target object region of the initial composite image, and the non-transition region is obtained from the non-target object region of the reference image;
determining a target object region and a non-target object region of the initial composite image, comprising:
generating an image segmentation map according to the initial composite image; the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, and the pixel value corresponding to the non-target object region in the image segmentation map is a second pixel value;
obtaining an enhanced image according to the target object region and the non-target object region of the initial composite image and the non-target object region of the reference image, comprising:
obtaining, according to the image segmentation map, the distance value of each pixel point from the target object region in the image segmentation map;
obtaining a first weight value set and a second weight value set according to the distance value of each pixel point from the target object region in the image segmentation map and a preset maximum distance; the first weight value set comprises weight values corresponding to all pixel points in the initial composite image, and the second weight value set comprises weight values corresponding to all pixel points in the reference image;
obtaining an enhanced image according to the initial composite image, the reference image, the first weight value set, and the second weight value set;
in the transition region, the weight value W_enhance(i, j) in the first weight value set corresponding to the pixel point (i, j) is 1 - E(i, j)/max_distance, and the weight value W_ori(i, j) in the second weight value set corresponding to the pixel point (i, j) is E(i, j)/max_distance;
wherein E(i, j) is the distance value between the pixel point (i, j) and the target object region, and max_distance is the preset maximum distance;
in the non-transition region, the weight value corresponding to the pixel point in the first weight value set is 0, and the weight value corresponding to the pixel point in the second weight value set is 1;
in the target object region, the weight value corresponding to the pixel point in the first weight value set is 1, and the weight value corresponding to the pixel point in the second weight value set is 0.
2. The method of claim 1, wherein obtaining an enhanced target object region according to the plurality of registered images comprises:
extracting target object regions in the plurality of registered images respectively;
and performing weighted average processing on pixel values of the target object regions of the plurality of registered images to obtain the enhanced target object region.
3. An image enhancement display device, comprising:
a first obtaining unit, configured to acquire a plurality of collected original images, wherein the plurality of original images each comprise a target object region; the target object region is an image region in which a target object is displayed, and the target object comprises an implant;
a registration unit, configured to register the plurality of original images according to the positions corresponding to the target object regions in the plurality of original images to obtain a plurality of registered images;
a second obtaining unit, configured to obtain an enhanced target object region according to the plurality of registered images;
a determination unit, configured to take one of the plurality of original images as a reference image and determine a non-target object region of the reference image;
a synthesizing unit, configured to synthesize the enhanced target object region and the non-target object region of the reference image to obtain an enhanced image;
the second obtaining unit is specifically configured to perform weighted average processing on pixel values of the plurality of registered images to obtain an initial composite image, wherein the initial composite image comprises the enhanced target object region;
the synthesizing unit is specifically configured to synthesize the initial composite image and the non-target object region of the reference image to obtain an enhanced image;
the synthesizing unit comprises:
a determining subunit, configured to determine a target object region and a non-target object region of the initial composite image;
an obtaining subunit, configured to obtain an enhanced image according to the target object region and the non-target object region of the initial composite image and the non-target object region of the reference image;
wherein the target object region in the enhanced image is obtained from the target object region of the initial composite image, the non-target object region in the enhanced image comprises a transition region adjoining the target object region in the enhanced image and a non-transition region other than the transition region, the transition region is obtained from the non-target object region of the reference image and the non-target object region of the initial composite image, and the non-transition region is obtained from the non-target object region of the reference image;
the determining subunit is specifically configured to generate an image segmentation map according to the initial composite image; the pixel value corresponding to the target object region in the image segmentation map is a first pixel value, and the pixel value corresponding to the non-target object region in the image segmentation map is a second pixel value;
the obtaining subunit comprises:
a distance obtaining subunit, configured to obtain, according to the image segmentation map, the distance value of each pixel point from the target object region in the image segmentation map;
a weight obtaining subunit, configured to obtain a first weight value set and a second weight value set according to the distance value of each pixel point from the target object region in the image segmentation map and a preset maximum distance; the first weight value set comprises weight values corresponding to all pixel points in the initial composite image, and the second weight value set comprises weight values corresponding to all pixel points in the reference image;
an image obtaining subunit, configured to obtain an enhanced image according to the initial composite image, the reference image, the first weight value set, and the second weight value set;
in the transition region, the weight value W_enhance(i, j) in the first weight value set corresponding to the pixel point (i, j) is 1 - E(i, j)/max_distance, and the weight value W_ori(i, j) in the second weight value set corresponding to the pixel point (i, j) is E(i, j)/max_distance;
wherein E(i, j) is the distance value between the pixel point (i, j) and the target object region, and max_distance is the preset maximum distance;
in the non-transition region, the weight value corresponding to the pixel point in the first weight value set is 0, and the weight value corresponding to the pixel point in the second weight value set is 1;
in the target object region, the weight value corresponding to the pixel point in the first weight value set is 1, and the weight value corresponding to the pixel point in the second weight value set is 0.
4. The device of claim 3, wherein the second obtaining unit comprises:
an extraction subunit, configured to extract target object regions in the plurality of registered images respectively;
and a region obtaining subunit, configured to perform weighted average processing on pixel values of the target object regions of the plurality of registered images to obtain the enhanced target object region.
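
To make the weighting scheme of claims 1 and 2 concrete, here is a minimal Python sketch of the synthesis step. It is an illustration under stated assumptions, not the patented implementation: the stack of already-registered frames, the binary segmentation map, and the uniform frame weights are assumed inputs, and SciPy's Euclidean distance transform stands in for the distance computation E(i, j).

```python
# A minimal sketch of the synthesis in claim 1, under stated assumptions:
# `registered` is a stack of already-registered frames, `seg_map` is a
# hypothetical binary segmentation of the target object region, and the
# Euclidean distance transform plays the role of E(i, j).
import numpy as np
from scipy.ndimage import distance_transform_edt

def enhance(registered: np.ndarray, reference: np.ndarray,
            seg_map: np.ndarray, max_distance: float = 20.0) -> np.ndarray:
    """Blend an averaged composite with a reference frame.

    registered   -- (N, H, W) stack of registered original images
    reference    -- (H, W) original image chosen as the reference
    seg_map      -- (H, W) binary map, 1 inside the target object region
    max_distance -- preset maximum distance bounding the transition region
    """
    # Initial composite image: weighted average over the stack
    # (uniform weights here, for simplicity).
    composite = registered.mean(axis=0)

    # E(i, j): distance of every pixel to the target object region.
    # distance_transform_edt measures the distance to the nearest zero
    # pixel, so it is fed the complement of the target mask; pixels
    # inside the target region get distance 0.
    dist = distance_transform_edt(seg_map == 0)

    # W_enhance = 1 - E/max_distance in the transition region, 1 inside
    # the target region (dist == 0), and 0 in the non-transition region
    # (dist >= max_distance); one clip covers all three cases.
    w_enhance = np.clip(1.0 - dist / max_distance, 0.0, 1.0)
    w_ori = 1.0 - w_enhance

    return w_enhance * composite + w_ori * reference
```

Note that the single clip expression reproduces all three weight cases in claim 1: E(i, j) = 0 inside the target object region gives weight 1, the weight falls linearly across the transition region, and it clips to 0 beyond max_distance. The claim 2 variant would simply restrict the weighted average to the extracted target object regions instead of averaging whole frames.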

Priority Applications (1)

Application Number: CN201811259583.4A
Priority Date: 2018-10-26
Filing Date: 2018-10-26
Title: Image enhancement display method and related device


Publications (2)

Publication Number    Publication Date
CN109559285A (en)     2019-04-02
CN109559285B (en)     2021-08-06

Family

ID: 65865361

Family Applications (1)

Application Number: CN201811259583.4A (Active)
Priority Date: 2018-10-26
Filing Date: 2018-10-26
Title: Image enhancement display method and related device

Country Status (1)

CN (1): CN109559285B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445408A (en) * 2020-03-25 2020-07-24 浙江大华技术股份有限公司 Method, device and storage medium for performing differentiation processing on image
CN111441541A (en) * 2020-04-13 2020-07-24 湖南文理学院 Roof garden rainwater utilization system and method
CN111901525A (en) * 2020-07-29 2020-11-06 西安欧亚学院 Multi-camera artificial intelligence image processing method
CN112200022A (en) * 2020-09-23 2021-01-08 上海联影医疗科技股份有限公司 Image processing method, medical imaging apparatus, and storage medium
CN112949767B (en) * 2021-04-07 2023-08-11 北京百度网讯科技有限公司 Sample image increment, image detection model training and image detection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3061063A4 (en) * 2013-10-22 2017-10-11 Eyenuk, Inc. Systems and methods for automated analysis of retinal images
CN105046676A (en) * 2015-08-27 2015-11-11 上海斐讯数据通信技术有限公司 Image fusion method and equipment based on intelligent terminal
CN107481272A (en) * 2016-06-08 2017-12-15 瑞地玛医学科技有限公司 A kind of radiotherapy treatment planning image registration and the method and system merged

Also Published As

Publication number Publication date
CN109559285A (en) 2019-04-02

Similar Documents

Publication Publication Date Title
CN109559285B (en) Image enhancement display method and related device
JP5643304B2 (en) Computer-aided lung nodule detection system and method and chest image segmentation system and method in chest tomosynthesis imaging
US9547894B2 (en) Apparatus for, and method of, processing volumetric medical image data
CN110766735B (en) Image matching method, device, equipment and storage medium
EP2100267A1 (en) Texture-based multi-dimensional medical image registration
EP3201876B1 (en) Medical image processing apparatus, medical image processing method
US20010007593A1 (en) Method and unit for displaying images
JP2019028887A (en) Image processing method
CN105374023B (en) Target area segmentation method, and image reconstruction method and device thereof
CN111723836A (en) Image similarity calculation method and device, electronic equipment and storage medium
JP4668289B2 (en) Image processing apparatus and method, and program
CN116844734B (en) Method and device for generating dose prediction model, electronic equipment and storage medium
CN117911432A (en) Image segmentation method, device and storage medium
US20180268547A1 (en) Image processing apparatus, control method of image processing apparatus, and storage medium
Kim et al. Automatic localization of anatomical landmarks in cardiac MR perfusion using random forests
JP2019164071A (en) Computer program and image processor
WO2024002110A1 (en) Methods and systems for determining image control point
CN112767314B (en) Medical image processing method, device, equipment and storage medium
CN114708283A (en) Image object segmentation method and device, electronic equipment and storage medium
JP2022052210A (en) Information processing device, information processing method, and program
CN108305245B (en) Image data analysis method
KR20170098388A (en) The apparatus and method for correcting error be caused by overlap of object in spatial augmented reality
CN115131388B (en) Extraction method, device and equipment for bone quantity directional superposition calculation
CN112017146B (en) Bone segmentation method and device
CN117690551B (en) Isodose line-based complication occurrence probability determining device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant