CN116740182B - Ghost area determining method and device, storage medium and electronic equipment


Info

Publication number: CN116740182B
Application number: CN202311013549.XA
Authority: CN (China)
Prior art keywords: segmented image, image, pair, difference
Legal status: Active (granted)
Other versions: CN116740182A (application publication)
Other languages: Chinese (zh)
Inventor: name withheld at the inventor's request
Current Assignee: Moore Threads Technology Co Ltd
Original Assignee: Moore Threads Technology Co Ltd
Application filed by: Moore Threads Technology Co Ltd
Priority to: CN202311013549.XA


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application provides a ghost area determining method and device, a storage medium, and an electronic device. In the ghost area determining method provided by the application, a first image and a second image to be fused are acquired; the first image and the second image are divided in the same manner to obtain the first segmented images of the first image and the second segmented images of the second image; the first segmented image and the second segmented image located at the same position are determined as a segmented image pair; for each segmented image pair, a bidirectional difference of the pair is determined, the bidirectional difference characterizing the degree of matching between the first segmented image and the second segmented image in the pair; and whether a ghost area will be generated after the first segmented image and the second segmented image in the pair are fused is judged according to the bidirectional difference.

Description

Ghost area determining method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for determining a ghost area, a storage medium, and an electronic device.
Background
In the field of image processing, image fusion is a widely applied technique, and image registration is a critical step in its preprocessing stage. At the current state of the art, regardless of how image registration is performed, local misalignment may remain after registration; a misaligned region is generally called a ghost region. Eliminating the ghost regions produced during image registration is therefore a critical step in image fusion.
In the prior art, ghost regions produced during image registration are generally determined by a frame-difference method: the two images to be fused are differenced pixel by pixel, and pixels whose pixel-value difference exceeds a set threshold are taken as the ghost region. However, if the brightness of the two images to be fused is inconsistent, pixels covering the same texture may well differ by more than the set threshold, so that well-aligned regions are misjudged as ghost regions and the prediction of the ghost region is biased.
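For concreteness, the prior-art baseline described above can be written in a few lines. The following Python sketch is illustrative only; the function name, the numpy representation, and the example threshold of 25 are assumptions, not part of the patent.

```python
import numpy as np

def frame_difference_ghost_mask(img_a: np.ndarray, img_b: np.ndarray,
                                threshold: float = 25.0) -> np.ndarray:
    """Naive prior-art baseline: mark pixels whose value difference
    exceeds a set threshold as belonging to the ghost area."""
    diff = np.abs(img_a.astype(np.int32) - img_b.astype(np.int32))
    return diff > threshold  # boolean mask; True marks suspected ghost pixels
```

As the paragraph above notes, a global brightness offset between the two images inflates `diff` everywhere, which is exactly why this baseline misjudges aligned regions.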
Therefore, how to accurately predict the ghost areas occurring during image registration is an urgent problem to be solved.
Disclosure of Invention
The present application provides a ghost area determining method, apparatus, storage medium and electronic device to at least partially solve the above-mentioned problems of the prior art.
The application adopts the following technical scheme:
the application provides a ghost area determining method, which comprises the following steps:
acquiring a first image and a second image to be fused;
dividing the first image and the second image in the same manner to obtain the first segmented images of the first image and the second segmented images of the second image;
determining a first segmented image and a second segmented image which are positioned at the same position as a segmented image pair;
determining, for each segmented image pair, a bi-directional difference for the segmented image pair, the bi-directional difference being used to characterize a degree of matching of a first segmented image with a second segmented image in the segmented image pair;
and judging, according to the bidirectional difference, whether a ghost area will be generated after the first segmented image and the second segmented image in the segmented image pair are fused.
Optionally, determining the bidirectional difference of the segmented image pair specifically includes:
a backward difference and a forward difference of the segmented image pair are determined.
Optionally, before determining the first segmented image and the second segmented image located at the same position as the segmented image pair, the method further includes:
inputting the first block images into a pre-trained matching model, determining a backward matching area of each first block image in the second image, inputting the second block images into the matching model, and determining a forward matching area of each second block image in the first image;
determining the backward difference and the forward difference of the segmented image pair specifically includes:
for each segmented image pair, determining a backward difference of the segmented image pair according to a first segmented image in the segmented image pair and a backward matching area of the first segmented image in the segmented image pair, and determining a forward difference of the segmented image pair according to a second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair.
Optionally, determining the backward difference of the segmented image pair according to the first segmented image in the segmented image pair and the backward matching area of the first segmented image in the segmented image pair specifically includes:
determining, for each pixel point of the first segmented image in the segmented image pair, the absolute value of the difference between the pixel value of that pixel point and the pixel value of the pixel point at the corresponding position in the backward matching region of the first segmented image, as a first residual of the pixel point, and determining the sum of the first residuals of the pixel points as the backward difference of the segmented image pair;
determining a forward difference of the segmented image pair according to the second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair specifically comprises:
for each pixel point of the second segmented image in the segmented image pair, determining the absolute value of the difference between the pixel value of that pixel point and the pixel value of the pixel point at the corresponding position in the forward matching region of the second segmented image, as a second residual of the pixel point, and determining the sum of the second residuals of the pixel points as the forward difference of the segmented image pair.
Optionally, judging, according to the bidirectional difference, whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion specifically includes:
determining that the first segmented image and the second segmented image in the segmented image pair will not generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is not greater than a first specified threshold, or at least one of the backward difference and the forward difference of the segmented image pair is not greater than a second specified threshold;
and determining that the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is greater than the first specified threshold and both the backward difference and the forward difference of the segmented image pair are greater than the second specified threshold.
Optionally, the method further comprises:
when the first segmented image and the second segmented image in the segmented image pair will not generate a ghost area after fusion, fusing the first segmented image and the second segmented image in the segmented image pair in a first preset manner according to the pixel values of the pixel points in the two segmented images, to obtain the target segmented image at the same position in the target image as the segmented image pair.
Optionally, the method further comprises:
when the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion, fusing the first segmented image and the second segmented image in the segmented image pair in a second preset manner, to obtain the target segmented image at the same position in the target image as the segmented image pair.
Optionally, before fusing the first segmented image and the second segmented image in the segmented image pair in the second preset manner, the method further includes:
determining, for each segmented image pair that will generate a ghost area after fusion, a backward direction of the segmented image pair according to the positional offset direction of the backward matching region of the first segmented image of the segmented image pair relative to that first segmented image; and,
determining the forward direction of the segmented image pair according to the positional offset direction of the forward matching region of the second segmented image of the segmented image pair relative to that second segmented image;
and re-judging whether the first segmented image and the second segmented image in the segmented image pair generate a ghost area after fusion according to the backward direction and the forward direction of the segmented image pair.
Optionally, the determining, according to the backward direction and the forward direction of the segmented image pair, whether the first segmented image and the second segmented image in the segmented image pair generate a ghost area after fusion specifically includes:
if the included angle between the backward direction and the forward direction of the segmented image pair is within a specified range, determining that a ghost area will not be generated after the first segmented image and the second segmented image of the segmented image pair are fused;
otherwise, determining that the first segmented image and the second segmented image of the segmented image pair will generate a ghost area after fusion.
Optionally, fusing the first segmented image and the second segmented image in the segmented image pair in a second preset manner specifically includes:
fusing, in the second preset manner, the first segmented image and the second segmented image in the segmented image pair, together with the first segmented images and second segmented images of the segmented image pairs whose positions fall within a preset range of the position of the segmented image pair.
The application provides a ghost area determining device, which comprises:
the acquisition module is used for acquiring a first image and a second image to be fused;
the dividing module is used for dividing the first image and the second image in the same manner to obtain the first segmented images of the first image and the second segmented images of the second image;
a combination module for determining the first segmented image and the second segmented image located at the same position as a segmented image pair;
a determining module, configured to determine, for each segmented image pair, a bidirectional difference of the segmented image pair, where the bidirectional difference is used to characterize a matching degree of a first segmented image and a second segmented image in the segmented image pair;
and the judging module is used for judging, according to the bidirectional difference, whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion.
The present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described ghost area determination method.
The application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the ghost area determination method described above when executing the program.
The at least one technical scheme adopted by the application can achieve the following beneficial effects:
in the ghost area determining method provided by the application, a first image and a second image to be fused are acquired; the first image and the second image are divided in the same manner to obtain the first segmented images of the first image and the second segmented images of the second image; the first segmented image and the second segmented image located at the same position are determined as a segmented image pair; for each segmented image pair, a bidirectional difference of the pair is determined, the bidirectional difference characterizing the degree of matching between the first segmented image and the second segmented image in the pair; and whether a ghost area will be generated after the first segmented image and the second segmented image in the pair are fused is judged according to the bidirectional difference.
When the ghost area determining method provided by the application is used to judge the ghost areas produced when the first image and the second image are fused, dividing the images converts whole-image fusion into per-segment fusion, which is easier to predict; meanwhile, bidirectional prediction allows a more reasonable judgment of whether the segmented images at a given position will produce a ghost area after fusion. The method can thus locate potential ghost areas more accurately, reduce the computation required for ghost-area prediction, and enable better ghost elimination in the subsequent fusion.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method for determining a ghost area according to the present application;
FIG. 2 is a schematic illustration of the same way of dividing a first image and a second image in accordance with the present application;
FIG. 3 is a schematic view of a first segmented image and a second segmented image of the same location in a first image and a second image, and a backward matching region and a forward matching region in the present application;
FIG. 4 is a schematic view of a backward direction and a forward direction in the present application;
FIG. 5 is a schematic view of a ghost area determining apparatus provided by the present application;
fig. 6 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for determining a ghost area according to the present application, which specifically includes the following steps:
s100: and acquiring a first image and a second image to be fused.
All steps in the ghost area determining method provided by the application can be realized by any electronic device with a computing function, such as a terminal, a server and the like.
The ghost area determining method provided by the application is mainly applied to determining the ghost areas that may be produced when two related images are fused. Application scenarios may include, but are not limited to, video frame interpolation, high dynamic range (HDR) synthesis of images or video, image or video noise reduction, and the like. The first image and the second image may be consecutive or non-consecutive frames of the same video, or images captured in succession by an image acquisition device; the application is not particularly limited in this respect. Naturally, the first image and the second image should have the same size, and the origin defined in each should be the same; in other words, positions in the first image and the second image correspond to each other one-to-one.
In this step, a first image and a second image for fusion may be acquired for use in a subsequent step.
S102: dividing the first image and the second image in the same manner to obtain the first segmented images of the first image and the second segmented images of the second image.
Before the first image and the second image are fused, they may first be preprocessed, the preprocessing including at least image registration. In the ghost area determining method provided by the application, in order to achieve a better fusion effect, block-wise fusion can be realized by dividing the first image and the second image.
In general, what must be fused are the images located at the same position in the first image and the second image. Based on this, in this step the first image and the second image may be divided in the same manner, so that the obtained first segmented images of the first image and second segmented images of the second image have the same distribution. That is, the number of first segmented images equals the number of second segmented images, and the first and second segmented images correspond one-to-one by position. The specific division manner and the number of segmented images can be set as required; the application is not particularly limited in this respect.
Fig. 2 is a schematic diagram of dividing the first image and the second image in the same manner. As shown in fig. 2, the first image and the second image have the same size of 32×48 pixels. In the division shown in fig. 2, both images are divided with 8×8 pixels as the block size, giving the division indicated by the broken lines, where each small square formed by the broken lines is one segmented image. Of course, fig. 2 shows only one possible case; in practical applications, the sizes of the first and second images and of the divided segments can be determined according to specific requirements, and the segments may even be given different sizes.
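As an illustration of this step, a minimal Python sketch of the division follows. The helper name and the dictionary-of-segments representation are assumptions for exposition, and the image dimensions are assumed divisible by the block size, as in the figure.

```python
import numpy as np

def divide_into_segments(image: np.ndarray, block: int = 8) -> dict:
    """Divide an image into non-overlapping block x block segmented images,
    keyed by the top-left (y, x) position of each segment.
    Assumes the image dimensions are divisible by `block`."""
    h, w = image.shape[:2]
    return {(y, x): image[y:y + block, x:x + block]
            for y in range(0, h, block)
            for x in range(0, w, block)}

# Applying the same division to both images pairs segments by position:
# for the 32x48 example above, each image yields (32/8) * (48/8) = 24 segments,
# and segments_a[(y, x)] with segments_b[(y, x)] form one segmented image pair.
```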
S104: the first and second block images located at the same position are determined as a pair of block images.
After the division, fusing the first image with the second image can be recast as fusing, at each position, the first segmented image with the second segmented image at that position to obtain a partial image of the target image, and then stitching these partial images into the final target image. Thus, in this step, the first segmented image and the second segmented image located at the same position can be determined as one segmented image pair for use in the subsequent steps.
S106: for each segmented image pair, a bi-directional difference for the segmented image pair is determined, the bi-directional difference being used to characterize a degree of matching of a first segmented image with a second segmented image in the segmented image pair.
In the ghost area determining method provided by the application, during the image-registration preprocessing, a bidirectional prediction manner is adopted to determine whether the fused image will produce a ghost area. In this step, a bidirectional difference characterizing the degree of matching between the first segmented image and the second segmented image may be determined. Specifically, bidirectional prediction comprises a backward difference obtained by backward prediction and a forward difference obtained by forward prediction; each segmented image pair has one backward difference and one forward difference.
There are many different ways in which the bi-directional difference for each segmented image pair may be determined, and a specific embodiment is provided herein for reference. Specifically, before determining the first segmented image and the second segmented image located at the same position as the segmented image pair, the first segmented images are input into a pre-trained matching model to determine a backward matching area of each first segmented image in the second image, and the second segmented images are input into the matching model to determine a forward matching area of each second segmented image in the first image. Then, in determining the bi-directional difference, for each segmented image pair, a backward difference for the segmented image pair may be determined from a first segmented image of the segmented image pair and a backward matching region of the first segmented image of the segmented image pair, and a forward difference for the segmented image pair may be determined from a second segmented image of the segmented image pair and a forward matching region of the second segmented image of the segmented image pair.
In the ghost area determining method provided by the application, a matching model can be trained in advance; the matching model is used to determine, within a relatively large image, the region that matches a relatively small image. Whether images match can be judged by their degree of similarity: within the larger image, the region most similar to the smaller image is the region that matches it. The input of the matching model is two images of different sizes, and the output is a partial region of the larger image. During training, any two images of different sizes can be used as a training sample, with the region of the larger image that human annotators judge most similar to the smaller image serving as the label, and the matching model is trained accordingly.
The matching model trained in the above manner can be applied in this step. For each first segmented image, the first segmented image can be taken as the relatively smaller image and the second image as the relatively larger image, and both can be input into the matching model, which gives the backward matching region in the second image that matches the first segmented image. Likewise, for each second segmented image, the second segmented image can be taken as the relatively smaller image and the first image as the relatively larger image, and the matching model gives the forward matching region in the first image that matches the second segmented image.
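The patent leaves the matching model's architecture open. Purely for illustration, the sketch below replaces the trained model with an exhaustive sum-of-absolute-differences (SAD) search over a local window; the function name, the ±8-pixel search radius, and the SAD criterion are all assumptions standing in for the model's learned similarity.

```python
import numpy as np

def find_matching_region(segment: np.ndarray, other: np.ndarray,
                         y0: int, x0: int, search: int = 8) -> tuple:
    """Return the top-left corner of the region of `other` that best matches
    `segment`, searched within +/- `search` pixels of its position (y0, x0).
    Best match = lowest sum of absolute differences (SAD)."""
    sh, sw = segment.shape[:2]
    h, w = other.shape[:2]
    best, best_pos = np.inf, (y0, x0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= h - sh and 0 <= x <= w - sw:
                cand = other[y:y + sh, x:x + sw]
                sad = np.abs(segment.astype(np.int32) - cand.astype(np.int32)).sum()
                if sad < best:
                    best, best_pos = sad, (y, x)
    return best_pos

# Backward matching region: find_matching_region(first_segment, second_image, y, x)
# Forward matching region:  find_matching_region(second_segment, first_image, y, x)
```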
For ease of explanation, in the embodiments provided herein the acquisition time of the first image precedes that of the second image. In practical applications this restriction does not exist, and the acquisition times of the first image and the second image may be arbitrary.
In many cases, the textures of the pixels at the same position in the first image and the second image used for fusion are not necessarily matched in the process of image fusion. Taking video interpolation as an example, assuming that the first image and the second image are two adjacent frames of images in a video, a new image needs to be obtained by image fusion and is inserted between the first image and the second image. Assuming a continuously moving object is present in the video, the position of the object changes in the first image and in the second image. At this time, the texture of the pixel point at the position of the target object in the first image is different from the texture of the pixel point at the same position in the second image, and the textures are not matched. Thus, it is necessary to determine the region in the second image that matches the first segmented image of the first image and the region in the first image that matches the second segmented image of the second image by means of, for example, a matching model.
Since in the embodiments provided by the application the acquisition time of the first image precedes that of the second image, the second image is later on the timeline relative to the first image, and the first image is earlier on the timeline relative to the second image. Therefore, for ease of distinction, the region in the second image that matches a first segmented image is referred to as a backward matching region, and the region in the first image that matches a second segmented image is referred to as a forward matching region.
Fig. 3 is a schematic diagram of a first segmented image and a second segmented image at the same position in the first image and the second image, together with the backward matching region and the forward matching region. A is a first segmented image in the first image, and A' is the backward matching region in the second image that matches A; B is a second segmented image in the second image, and B' is the forward matching region in the first image that matches B. It can be seen that the positional relationship between A and A' and that between B and B' do not necessarily correspond, and there may be no correlation between them; for example, the segmented images A and B may represent two different objects with different motions.
It should be noted that, since the motion trajectories of objects in the first image and the second image are generally uncertain, it cannot be guaranteed that a first segmented image matches some second segmented image, nor that a second segmented image matches some first segmented image. In other words, in the second image, the backward matching region that matches a first segmented image does not necessarily coincide with any second segmented image; and in the first image, the forward matching region that matches a second segmented image does not necessarily coincide with any first segmented image. That is, in the present method, the segmented images and the matching regions are distinct concepts and need not coincide.
In determining the bi-directional difference of a segmented image pair, the backward difference is derived from a first segmented image of the segmented image pair and a matching backward matching region, and the forward difference is derived from a second segmented image of the segmented image pair and a matching forward matching region.
Specifically, when determining the forward difference and the backward difference of one segmented image pair, for each pixel point of the first segmented image in the pair, the absolute value of the difference between the pixel value of that pixel point and the pixel value of the pixel point at the corresponding position in the backward matching region of the first segmented image is determined as the first residual of the pixel point, and the sum of the first residuals of all pixel points is determined as the backward difference of the pair; likewise, for each pixel point of the second segmented image in the pair, the absolute value of the difference between the pixel value of that pixel point and the pixel value of the pixel point at the corresponding position in the forward matching region of the second segmented image is determined as the second residual of the pixel point, and the sum of the second residuals of all pixel points is determined as the forward difference of the pair.
Taking the calculation of the backward difference as an example, suppose one first segmented image in the first image is 8×8 pixels; the matching backward matching region in the second image is then also 8×8 pixels. When calculating the backward difference, for each pixel point in the first segmented image, the pixel point at the same position in the matching region is determined to form a pixel point pair with it; the difference of the pixel values of the two pixel points in the pair is calculated, and its absolute value is taken as the first residual of the pixel point. The 8×8 first segmented image contains 64 pixel points, so 64 first residuals are obtained, and their sum is the backward difference of the segmented image pair.
The forward difference is calculated in the same way as the backward difference, with the first segmented image replaced by the second segmented image and the backward matching region replaced by the forward matching region; the application does not repeat the details here.
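Under the representation assumed in the earlier sketches, the backward or forward difference of a pair is simply the SAD between a segmented image and its matching region; a minimal sketch:

```python
import numpy as np

def directional_difference(segment: np.ndarray, matched: np.ndarray) -> int:
    """Sum of the per-pixel absolute residuals between a segmented image and
    its matching region: the backward difference when called with a first
    segmented image and its backward matching region, the forward difference
    when called with a second segmented image and its forward matching region."""
    return int(np.abs(segment.astype(np.int64) - matched.astype(np.int64)).sum())
```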
S108: judging, according to the bidirectional difference, whether a ghost area will be generated after the first segmented image and the second segmented image in the segmented image pair are fused.
In the subsequent image fusion process, the first segmented image and the second segmented image at the same position, that is, the two segmented images of one segmented image pair, need to be fused. Before fusion, whether a ghost area will be generated after fusion can be judged according to the bidirectional difference determined in step S106, so that a corresponding response can be made in the subsequent fusion.
In general, if a difference of one segmented image pair is large, a ghost area is likely to be produced after the first and second segmented images in the pair are fused. In the ghost area determining method provided by the application, the ghost area is determined by bidirectional prediction, so a finer judgment can be made.
Specifically, when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is not greater than a first specified threshold, or at least one of the backward difference and the forward difference of the segmented image pair is not greater than a second specified threshold, it is determined that the first segmented image and the second segmented image in the segmented image pair do not generate a ghost area after fusion; and determining that the first segmented image and the second segmented image in the segmented image pair generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is greater than a first specified threshold and both the backward difference and the forward difference of the segmented image pair are greater than a second specified threshold.
When judging whether the first and second segmented images in one segmented image pair will generate a ghost area after fusion, it may first be judged whether the absolute value of the difference between the backward difference and the forward difference exceeds the first specified threshold. When this absolute value is not greater than the first specified threshold, the results of backward prediction and forward prediction are similar and can be judged to match each other; the segmented images and matching regions corresponding to both differences can then be used jointly in the subsequent fusion, and no ghost area will be generated after fusion.
When the above condition is not satisfied, that is, when the absolute value of the difference between the backward difference and the forward difference is greater than the first specified threshold, whether the segmented image pair will generate a ghost area after fusion may be judged from the backward difference alone or the forward difference alone. In this case, if either of the backward difference and the forward difference is not greater than the second specified threshold, the difference not greater than the second specified threshold is considered reliable, and the segmented image corresponding to that difference can later be fused with its matching region alone, without generating a ghost area after fusion. It should be noted that in some cases the absolute value of the difference between the backward and forward differences is greater than the first specified threshold while neither difference is greater than the second specified threshold; both differences may then be considered reliable, and the segmented images and matching regions of both directions can be used in the subsequent fusion, again without generating a ghost area after fusion.
If the backward difference and the forward difference of the segmented image pair satisfy none of the above conditions, that is, the absolute value of their difference is greater than the first specified threshold and both differences are greater than the second specified threshold, then the position of the segmented image pair can be considered to generate a ghost area after fusion.
The first specified threshold and the second specified threshold may be set as required; the application is not particularly limited in this respect. By the above method, the ghost areas that will arise during image fusion can be determined accurately.
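The decision rule of this step can be summarized in a short sketch. The values of `t1` and `t2` stand for the first and second specified thresholds, which the patent leaves to the implementer; the function name is an assumption.

```python
def will_ghost(backward_diff: float, forward_diff: float,
               t1: float, t2: float) -> bool:
    """Bidirectional decision: a ghost area is predicted only when the two
    differences disagree by more than t1 AND both exceed t2."""
    if abs(backward_diff - forward_diff) <= t1:
        return False  # backward and forward predictions match each other
    if backward_diff <= t2 or forward_diff <= t2:
        return False  # at least one direction is reliable on its own
    return True       # predictions disagree and neither is reliable
```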
When the ghost area determining method provided by the application is used to judge the ghost areas produced when the first image and the second image are fused, dividing the images converts whole-image fusion into per-segment fusion, which is easier to predict; meanwhile, bidirectional prediction allows a more reasonable judgment of whether the segmented images at a given position will produce a ghost area after fusion. The method can thus locate potential ghost areas more accurately, reduce the computation required for ghost-area prediction, and enable better ghost elimination in the subsequent fusion.
Additionally, after the ghost area determining method provided by the application is used to determine the possible ghost areas, better image fusion can be performed according to the determined ghost areas. Specifically, when the first segmented image and the second segmented image in a segmented image pair will not generate a ghost area after fusion, the two segmented images are fused in the first preset manner according to the pixel values of the pixel points in them, to obtain the target segmented image at the same position in the target image as the segmented image pair.
In image fusion, the finally fused target image has the same size as the first image and the second image. Therefore, after the first image and the second image are divided, fusing the target image likewise becomes fusing the first segmented image and the second segmented image in each segmented image pair to obtain target segmented images at the same positions, and stitching the target segmented images into the final target image.
For a segmented image pair determined in step S108 not to generate a ghost area after fusion, the first segmented image and the second segmented image may be fused in the first preset manner according to the pixel values of the pixel points in them. The first preset manner may be taking the mean of the pixel values of the pixel points at the same position, pixel by pixel. Exactly which images enter the mean can be determined from the backward difference and the forward difference used when judging that the pair will not generate a ghost area: since the backward difference is obtained from the first segmented image and the backward matching region, and the forward difference from the second segmented image and the forward matching region, the backward and forward differences determine which of these images to use in the fusion.
When the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is not greater than the first specified threshold, the backward and forward predictions match each other, so the first segmented image and backward matching region corresponding to the backward difference, together with the second segmented image and forward matching region corresponding to the forward difference, are used to fuse the target segmented image at the same position as the pair. In short, the pixel values of the pixel points at corresponding positions in the first segmented image, the second segmented image, the backward matching region, and the forward matching region are summed and divided by four to obtain the pixel value of the pixel point at the corresponding position of the target segmented image.
When the absolute value of the difference between the backward difference and the forward difference of the pair is greater than the first specified threshold but at least one of the two differences is not greater than the second specified threshold, the segmented image and matching region corresponding to the difference not greater than the second specified threshold are used for fusion. In other words, with the absolute difference greater than the first specified threshold: if only the backward difference is not greater than the second specified threshold, the pixel values at each corresponding position of the first segmented image and the backward matching region are summed and divided by two to obtain the pixel value at the corresponding position of the target segmented image; if only the forward difference is not greater than the second specified threshold, the pixel values at each corresponding position of the second segmented image and the forward matching region are summed and divided by two; and if neither difference is greater than the second specified threshold, the pixel values at corresponding positions of the first segmented image, the second segmented image, the backward matching region, and the forward matching region are summed and divided by four, the same as in the case where the absolute difference is not greater than the first specified threshold.
By the above method, the target segmented image at the position of every segmented image pair that does not generate a ghost area after fusion can be obtained.
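A sketch of the first preset fusion just described, under the assumption that a pixel-wise arithmetic mean is used. The two boolean flags encode which directional differences passed the threshold tests of step S108; the function name and flag encoding are assumptions.

```python
import numpy as np

def fuse_first_preset(first, second, back_match, fwd_match,
                      back_ok: bool, fwd_ok: bool) -> np.ndarray:
    """First preset fusion (pixel-wise mean), called only for pairs judged
    not to produce a ghost area, so at least one flag is True.
    back_ok / fwd_ok: whether the backward / forward prediction is usable."""
    if back_ok and fwd_ok:
        sources = [first, back_match, second, fwd_match]  # sum, divide by four
    elif back_ok:
        sources = [first, back_match]                     # sum, divide by two
    else:
        sources = [second, fwd_match]                     # sum, divide by two
    return np.mean(np.stack([s.astype(np.float32) for s in sources]), axis=0)
```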
For a segmented image pair that satisfies none of the above conditions, that is, one that will generate a ghost area during fusion: when the first segmented image and the second segmented image in the pair will generate a ghost area after fusion, they are fused in a second preset manner to obtain the target segmented image at the same position in the target image as the pair.
For a segmented image pair whose first and second segmented images are predicted to generate a ghost area after fusion, fusing in the first preset manner would inevitably produce a ghost area. To avoid this, a second preset manner, different from the first, can be used to fuse the segmented image pair. The second preset manner may be, for example, a dense optical flow algorithm; the application is not particularly limited in this respect.
When the target segmented images at all positions have been obtained by fusion, they can be directly stitched together by position to obtain the final target image fused from the first image and the second image.
Additionally, when judging whether the first and second segmented images in a segmented image pair will generate a ghost area after fusion, a pair judged to generate a ghost area because the absolute value of the difference between its backward difference and forward difference is greater than the first specified threshold and both differences are greater than the second specified threshold can be subjected to one further round of judgment, to determine again whether its first and second segmented images will generate a ghost area after fusion.
Specifically, for each segmented image pair determined to generate a ghost area after fusion, the backward direction of the pair is determined according to the positional offset direction of the backward matching region of its first segmented image relative to that first segmented image; the forward direction of the pair is determined according to the positional offset direction of the forward matching region of its second segmented image relative to that second segmented image; and whether the first and second segmented images in the pair will generate a ghost area after fusion is re-judged according to the backward direction and forward direction of the pair.
In steps S106-S108, bidirectional prediction is performed only from the forward and backward differences of the segmented image pair, which are determined from pixel values. Here the application additionally provides a bidirectional prediction implemented from the offset between a segmented image and its matching region.
For one segmented image pair, determining the offset direction between the first segmented image and the backward matching area according to the position difference between the first segmented image in the segmented image pair and the backward matching area of the first segmented image, and taking the offset direction as the backward direction of the segmented image pair; also, an offset direction between the second segmented image and the forward matching region may be determined as the forward direction of the segmented image pair based on a position difference between the second segmented image of the segmented image pair and the forward matching region of the second segmented image.
If a certain condition is satisfied between the backward direction and the forward direction of the segmented image pair, this bidirectional prediction can still be considered a match. Specifically, if the included angle between the backward direction and the forward direction of the pair is within a specified range, it is determined that the first and second segmented images of the pair will not generate a ghost area after fusion; otherwise, it is determined that they will generate a ghost area after fusion.
It is conceivable that when the backward direction and the forward direction point in roughly opposite directions, then for the position of the segmented image pair, the direction in which the first segmented image moves toward the backward matching region is opposite to the direction in which the second segmented image moves toward the forward matching region. This indicates that the forward and backward predictions match each other, and the first and second segmented images of the pair can be considered not to generate a ghost area after fusion.
The specified range may be set according to specific requirements, for example to (90°, 270°); the application is not particularly limited in this respect.
In addition, a backward direction angle and a forward direction angle can be determined according to the backward direction and the forward direction, and whether the bidirectional prediction is matched is judged according to whether the forward direction angle and the backward direction angle are symmetrical.
Fig. 4 is a schematic diagram of a backward direction and a forward direction in the present application. As shown in fig. 4, a planar rectangular coordinate system is established with the horizontal direction as x and the vertical direction as y, in which direction angle 1 is symmetric with direction angle 3, and direction angle 2 with direction angle 4. The backward and forward directions are as shown: they point in roughly opposite directions, the backward direction falling in direction angle 2 and the forward direction in direction angle 4, so the two are symmetric. The backward direction and the forward direction can therefore be considered opposite, that is, the offset directions of the first and second segmented images of the pair are opposite; the bidirectional prediction matches, and it can be determined that no ghost area will be generated after the first and second segmented images of the pair are fused.
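A hedged sketch of this direction test: the offset vectors are taken from segment position to matching-region position, and the (90°, 270°) range mentioned above serves as the default specified range. The vector representation and the signed-angle computation are illustrative assumptions.

```python
import math

def directions_opposite(back_vec, fwd_vec,
                        lo_deg: float = 90.0, hi_deg: float = 270.0) -> bool:
    """Return True when the angle between the backward offset direction and
    the forward offset direction falls inside the specified range, i.e. the
    two predictions point in roughly opposite directions and the pair is
    re-judged as producing no ghost area after fusion."""
    (bx, by), (fx, fy) = back_vec, fwd_vec
    angle = math.degrees(math.atan2(by, bx) - math.atan2(fy, fx)) % 360.0
    return lo_deg < angle < hi_deg
```

Exactly opposite offsets give an angle of 180°, the middle of the default range, matching the symmetric case in fig. 4.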
Similarly, the manner of fusing the first and second segmented images in such a pair is the same as the fusion used when the absolute value of the difference between the forward difference and the backward difference is not greater than the first specified threshold; the application does not repeat it here.
Additionally, after determining that the first and second segmented images in a segmented image pair will generate a ghost area after fusion, it must further be considered that this judgment may be in error; that is, segmented image pairs adjacent to the position of that pair may also generate ghost areas after fusion. Therefore, borrowing the idea of the erosion-dilation algorithm, the extent of the predicted ghost areas can be enlarged: the segmented image pairs adjacent to the pair are likewise judged to generate ghost areas after fusion.
Specifically, the second preset manner may be used to fuse both the first and second segmented images in the segmented image pair and the first and second segmented images of the segmented image pairs whose positions fall within a preset range of the position of that pair.
In practice, whether the first and second segmented images in a segmented image pair will generate a ghost area after fusion ultimately only changes the manner used in the subsequent fusion. Therefore, for a segmented image pair determined to generate a ghost area after fusion, the segmented image pairs at positions within a preset range around it can simply be fused in the second preset manner together with it during subsequent fusion, avoiding errors that may arise in the ghost-area judgment.
The preset range may be set according to specific requirements; only one specific embodiment is given here for reference. Taking the first image and the second image shown in fig. 2 as an example, the two images can be divided again with 16×16 pixels as the unit, giving newly divided regions each containing 4 segmented images. For any newly divided region, if it contains a segmented image determined to generate a ghost area after fusion, then all segmented images in that region are fused in the second preset manner during subsequent fusion. That is, all segmented images in the same newly divided region as a segmented image determined to generate a ghost area after fusion are fused in the second preset manner.
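A sketch of this dilation-style expansion under the 8×8-segment / 16×16-region sizes of the example. Positions are the top-left corners used as segment keys in the earlier sketches; the function name and set representation are assumptions.

```python
def expand_ghost_pairs(ghost_positions, block: int = 8, group: int = 16) -> set:
    """Erosion-dilation-style expansion: if any block x block segment inside a
    group x group region is predicted to ghost, mark every segment in that
    region. Assumes image dimensions divisible by `group`."""
    groups = {(y // group, x // group) for (y, x) in ghost_positions}
    expanded = set()
    for gy, gx in groups:
        for y in range(gy * group, (gy + 1) * group, block):
            for x in range(gx * group, (gx + 1) * group, block):
                expanded.add((y, x))  # every 8x8 segment in the 16x16 region
    return expanded
```

For example, a single ghost prediction at position (8, 16) expands to the four segments {(0, 16), (0, 24), (8, 16), (8, 24)} of its 16×16 region, all of which are then fused in the second preset manner.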
Based on the same idea as the ghost area determining method provided above, the application also provides a corresponding ghost area determining apparatus, as shown in fig. 5.
Fig. 5 is a schematic diagram of a ghost area determining apparatus provided by the present application, specifically including:
an acquisition module 200, configured to acquire a first image and a second image to be fused;
the dividing module 202 is configured to divide the first image and the second image in the same dividing manner, so as to obtain each first block image of the first image and each second block image of the second image;
a combining module 204, configured to determine the first segmented image and the second segmented image located at the same position as a segmented image pair;
a determining module 206, configured to determine, for each segmented image pair, a bidirectional difference of the segmented image pair, where the bidirectional difference is used to characterize a matching degree of a first segmented image and a second segmented image in the segmented image pair;
a judging module 208, configured to judge, according to the bidirectional difference, whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion.
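As an illustration of how the dividing module 202 and the combining module 204 might cooperate, here is a hedged Python sketch that splits two equally sized grayscale images into non-overlapping blocks and pairs the blocks by position. The 8-pixel block size and all function names are assumptions made for the example, not details fixed by the application.

```python
import numpy as np

def split_into_blocks(image: np.ndarray, block: int = 8) -> dict:
    # Split an (H, W) grayscale image into non-overlapping block x block
    # tiles, keyed by the (row, col) pixel position of each tile's
    # top-left corner. Assumes H and W are multiples of `block`; a real
    # implementation would pad or crop the border.
    h, w = image.shape
    return {(r, c): image[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)}

def make_block_pairs(first: np.ndarray, second: np.ndarray,
                     block: int = 8) -> dict:
    # Pair the first and second segmented images that occupy the same
    # position, mirroring the combining module.
    blocks1 = split_into_blocks(first, block)
    blocks2 = split_into_blocks(second, block)
    return {pos: (blocks1[pos], blocks2[pos]) for pos in blocks1}
```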
Optionally, the determining module 206 is specifically configured to determine a backward difference and a forward difference of the segmented image pair.
Optionally, the apparatus further includes a matching module 210, specifically configured to input the first segmented images into a pre-trained matching model, determine a backward matching area of each first segmented image in the second image, and input the second segmented images into the matching model, determine a forward matching area of each second segmented image in the first image;
the determining module 206 is specifically configured to determine, for each segmented image pair, a backward difference of the segmented image pair according to a first segmented image in the segmented image pair and a backward matching area of the first segmented image in the segmented image pair, and determine, for each segmented image pair, a forward difference of the segmented image pair according to a second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair.
Optionally, the determining module 206 is specifically configured to determine, for each pixel point of the first segmented image in the segmented image pair, the absolute value of the difference between the pixel value of the pixel point at the corresponding position in the backward matching area of the first segmented image in the segmented image pair and the pixel value of that pixel point, as a first residual of the pixel point, and determine the sum of the first residuals of the pixel points as the backward difference of the segmented image pair; and to determine, for each pixel point of the second segmented image in the segmented image pair, the absolute value of the difference between the pixel value of the pixel point at the corresponding position in the forward matching region of the second segmented image in the segmented image pair and the pixel value of that pixel point, as a second residual of the pixel point, and determine the sum of the second residuals of the pixel points as the forward difference of the segmented image pair.
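A minimal sketch of this residual computation follows, under the assumption that the matching area returned for a block has the same shape as the block itself; `match_model` is a hypothetical stand-in for the pre-trained matching model, not an API defined by the application.

```python
import numpy as np

def block_difference(block: np.ndarray, matched: np.ndarray) -> float:
    # Sum over all pixels of the absolute difference between the pixel in
    # the block and the pixel at the corresponding position in the matching
    # area. Cast to a signed type first so uint8 subtraction cannot wrap.
    return float(np.abs(block.astype(np.int32) - matched.astype(np.int32)).sum())

# Hypothetical usage, computing both halves of the bidirectional difference:
#   backward = block_difference(first_block,
#                               match_model.find(first_block, second_image))
#   forward  = block_difference(second_block,
#                               match_model.find(second_block, first_image))
```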
Optionally, the judging module 208 is specifically configured to determine that the first segmented image and the second segmented image in the segmented image pair do not generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is not greater than a first specified threshold, or when at least one of the backward difference and the forward difference of the segmented image pair is not greater than a second specified threshold; and to determine that the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is greater than the first specified threshold and both the backward difference and the forward difference of the segmented image pair are greater than the second specified threshold.
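Written out, the threshold rule of the judging module reduces to a few comparisons; the two thresholds are application-specific, and the values passed in are placeholders in this sketch.

```python
def will_ghost(backward: float, forward: float, t1: float, t2: float) -> bool:
    # No ghost area is predicted when the two differences roughly agree
    # (|backward - forward| <= t1) or when at least one of them is small
    # (<= t2); a ghost area is predicted only when both conditions fail.
    # t1 and t2 stand for the first and second specified thresholds.
    if abs(backward - forward) <= t1:
        return False
    if backward <= t2 or forward <= t2:
        return False
    return True
```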
Optionally, the apparatus further includes a first fusion module 212, specifically configured to, when the first segmented image and the second segmented image in the segmented image pair do not generate a ghost area after being fused, fuse the first segmented image and the second segmented image in the segmented image pair in a first preset manner according to pixel values of each pixel point in the first segmented image and the second segmented image in the segmented image pair, so as to obtain a target segmented image in the same position as the segmented image pair in the target image.
Optionally, the apparatus further includes a second fusion module 214, specifically configured to fuse, when the first segmented image and the second segmented image in the segmented image pair generate a ghost area after fusion, the first segmented image and the second segmented image in the segmented image pair in a second preset manner, so as to obtain a target segmented image in the same position as the segmented image pair in the target image.
Optionally, the second fusing module 214 is specifically configured to determine, for each segmented image pair determined to generate a ghost area after fusion, a backward direction of the segmented image pair according to the position offset direction of the backward matching area of the first segmented image of the segmented image pair relative to that first segmented image; determine the forward direction of the segmented image pair according to the position offset direction of the forward matching area of the second segmented image of the segmented image pair relative to that second segmented image; and re-judge, according to the backward direction and the forward direction of the segmented image pair, whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion.
Optionally, the second fusing module 214 is specifically configured to determine that the first segmented image and the second segmented image of the segmented image pair do not generate a ghost area after fusion if the included angle between the backward direction and the forward direction of the segmented image pair is within a specified range; otherwise, to determine that the first segmented image and the second segmented image of the segmented image pair will generate a ghost area after fusion.
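A sketch of this direction-based re-check is given below, assuming the "specified range" is an interval of included angles. The example interval of 150 to 180 degrees reflects the intuition that consistent motion yields roughly opposite backward and forward offsets; it is an assumed value, not one fixed by the application.

```python
import math

def recheck_by_direction(backward_offset, forward_offset,
                         angle_range=(150.0, 180.0)) -> bool:
    # backward_offset: (dx, dy) of the backward matching area relative to
    # the first segmented image; forward_offset: likewise for the second
    # segmented image. Returns True if the pair is still judged to
    # produce a ghost area after fusion.
    bx, by = backward_offset
    fx, fy = forward_offset
    norm = math.hypot(bx, by) * math.hypot(fx, fy)
    if norm == 0.0:
        return False  # a zero offset has no direction; keep the "no ghost" outcome
    cos_angle = max(-1.0, min(1.0, (bx * fx + by * fy) / norm))
    angle = math.degrees(math.acos(cos_angle))
    lo, hi = angle_range
    # An included angle inside the specified range means no ghost area.
    return not (lo <= angle <= hi)
```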
Optionally, the second fusing module 214 is specifically configured to fuse, in the second preset manner, the first segmented image and the second segmented image in the segmented image pair together with the first segmented image and the second segmented image in each segmented image pair whose position falls within the preset range around the position of that pair.
The present application also provides a computer-readable storage medium storing a computer program, the computer program being operable to execute the ghost area determining method shown in fig. 1 above.
The application also provides a schematic structural diagram of the electronic device, shown in fig. 6. At the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, as illustrated in fig. 6, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to implement the ghost area determining method described above with respect to fig. 1. Of course, besides a software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flows above is not limited to logic units, but may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without requiring a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for implementing various functions may also be regarded as structures within the hardware component. Indeed, the means for implementing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. A ghost area determination method, comprising:
acquiring a first image and a second image to be fused;
dividing the first image and the second image by adopting the same dividing mode to obtain each first block image of the first image and each second block image of the second image;
determining a first segmented image and a second segmented image which are positioned at the same position as a segmented image pair;
determining, for each segmented image pair, a bidirectional difference of the segmented image pair, the bidirectional difference being used to characterize a degree of matching of a first segmented image with a second segmented image in the segmented image pair;
judging whether a first segmented image and a second segmented image in the segmented image pair generate a ghost area after fusion according to the bidirectional difference;
the determining the bidirectional difference of the segmented image pair specifically includes:
determining a backward difference and a forward difference of the segmented image pair;
the method specifically comprises the following steps before the first segmented image and the second segmented image which are positioned at the same position are determined as the segmented image pair:
inputting the first block images into a pre-trained matching model, determining a backward matching area of each first block image in the second image, inputting the second block images into the matching model, and determining a forward matching area of each second block image in the first image;
the determining the bidirectional difference of the segmented image pair specifically includes:
for each segmented image pair, determining a backward difference of the segmented image pair according to a first segmented image in the segmented image pair and a backward matching area of the first segmented image in the segmented image pair, and determining a forward difference of the segmented image pair according to a second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair;
the determining the backward difference of the segmented image pair according to the first segmented image in the segmented image pair and the backward matching area of the first segmented image in the segmented image pair specifically comprises:
determining, for each pixel point of a first segmented image in the segmented image pair, an absolute value of a difference between a pixel value of the pixel point at a corresponding position in a backward matching region of the first segmented image in the segmented image pair and a pixel value of the pixel point, as a first residual error of the pixel point, and determining a sum of the first residual errors of the pixel points as a backward difference of the segmented image pair;
determining a forward difference of the segmented image pair according to the second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair specifically comprises:
for each pixel point of the second segmented image in the segmented image pair, determining an absolute value of a difference between a pixel value of the pixel point at a corresponding position in a forward matching region of the second segmented image in the segmented image pair and a pixel value of the pixel point, as a second residual of the pixel point, and determining a sum of the second residuals of the pixel points as a forward difference of the segmented image pair.
2. The method according to claim 1, wherein determining whether the first and second segmented images in the segmented image pair generate a ghosting region after fusion according to the bi-directional difference comprises:
determining that the first and second segmented images in the segmented image pair do not produce a ghosted region after fusing when an absolute value of a difference between a backward difference and a forward difference of the segmented image pair is not greater than a first specified threshold or at least one of the backward difference and the forward difference of the segmented image pair is not greater than a second specified threshold;
and determining that the first segmented image and the second segmented image in the segmented image pair generate a ghost area after fusion when the absolute value of the difference between the backward difference and the forward difference of the segmented image pair is greater than a first specified threshold and both the backward difference and the forward difference of the segmented image pair are greater than a second specified threshold.
3. The method of claim 1, wherein the method further comprises:
when the first segmented image and the second segmented image in the segmented image pair do not generate a ghost area after fusion, the first segmented image and the second segmented image in the segmented image pair are fused in a first preset mode according to the pixel value of each pixel point in the first segmented image and the second segmented image in the segmented image pair, and the target segmented image with the same position as the segmented image pair in the target image is obtained.
4. The method of claim 1, wherein the method further comprises:
when the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion, the first segmented image and the second segmented image in the segmented image pair are fused in a second preset mode to obtain the target segmented image at the same position as the segmented image pair in the target image.
5. The method of claim 4, wherein prior to fusing the first segmented image and the second segmented image in the pair of segmented images in the second predetermined manner, the method further comprises:
for each segmented image pair determined to generate a ghost area after fusion, determining a backward direction of the segmented image pair according to the position offset direction of the backward matching area of the first segmented image of the segmented image pair relative to the first segmented image of the segmented image pair; and,
determining the forward direction of the segmented image pair according to the position offset direction of the forward matching region of the second segmented image of the segmented image pair relative to the second segmented image of the segmented image pair;
and re-judging whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion according to the backward direction and the forward direction of the segmented image pair.
6. The method of claim 5, wherein the re-judging whether the first segmented image and the second segmented image in the segmented image pair generate ghost areas after fusion according to the backward direction and the forward direction of the segmented image pair, specifically comprises:
if the included angle between the backward direction and the forward direction of the segmented image pair is within a specified range, determining that the first segmented image and the second segmented image of the segmented image pair will not generate a ghost area after fusion;
otherwise, determining that the first segmented image and the second segmented image of the segmented image pair will generate a ghost area after fusion.
7. The method of claim 4, wherein fusing the first segmented image and the second segmented image in the segmented image pair in a second predetermined manner, specifically comprises:
and fusing the first segmented image and the second segmented image in the segmented image pair by adopting a second preset mode, and fusing the first segmented image and the second segmented image in the segmented image pair corresponding to the position in the preset range of the position of the segmented image pair.
8. A ghost area determining apparatus, comprising:
the acquisition module is used for acquiring a first image and a second image to be fused;
the dividing module is used for dividing the first image and the second image in the same dividing mode to obtain first block images of the first image and second block images of the second image;
a combination module for determining the first block image and the second block image which are positioned at the same position as a block image pair;
a determining module, configured to determine, for each segmented image pair, a bidirectional difference of the segmented image pair, where the bidirectional difference is used to characterize a matching degree of a first segmented image and a second segmented image in the segmented image pair;
the judging module is used for judging whether the first segmented image and the second segmented image in the segmented image pair will generate a ghost area after fusion according to the bidirectional difference;
the determining module is specifically configured to determine a backward difference and a forward difference of the segmented image pair;
a matching module, before the combining module determines the first segmented image and the second segmented image which are positioned at the same position as a segmented image pair, the matching module is used for inputting each first segmented image into a pre-trained matching model, determining a backward matching area of each first segmented image in the second image, inputting each second segmented image into the matching model, and determining a forward matching area of each second segmented image in the first image;
the determining module is specifically configured to determine, for each segmented image pair, a backward difference of the segmented image pair according to a first segmented image in the segmented image pair and a backward matching area of the first segmented image in the segmented image pair, and determine a forward difference of the segmented image pair according to a second segmented image in the segmented image pair and a forward matching area of the second segmented image in the segmented image pair;
the determining module is specifically configured to determine, for each pixel point of the first segmented image in each segmented image pair, the absolute value of the difference between the pixel value of the pixel point at the corresponding position in the backward matching area of the first segmented image in the segmented image pair and the pixel value of that pixel point, as a first residual of the pixel point, and determine the sum of the first residuals of the pixel points as the backward difference of the segmented image pair; and to determine, for each pixel point of the second segmented image in each segmented image pair, the absolute value of the difference between the pixel value of the pixel point at the corresponding position in the forward matching region of the second segmented image in the segmented image pair and the pixel value of that pixel point, as a second residual of the pixel point, and determine the sum of the second residuals of the pixel points as the forward difference of the segmented image pair.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-7 when executing the program.
CN202311013549.XA 2023-08-11 2023-08-11 Ghost area determining method and device, storage medium and electronic equipment Active CN116740182B (en)

Priority Applications (1)

Application Number: CN202311013549.XA (CN116740182B); Priority Date: 2023-08-11; Filing Date: 2023-08-11; Title: Ghost area determining method and device, storage medium and electronic equipment

Publications (2)

CN116740182A (en): 2023-09-12
CN116740182B (en): 2023-11-21

Family

ID=87906411

Country Status (1)

Country: CN; Publication: CN116740182B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10257295A (en) * 1997-03-07 1998-09-25 Toyo Ink Mfg Co Ltd Color reproduction range compression method and its device
CN108416754A (en) * 2018-03-19 2018-08-17 浙江大学 A kind of more exposure image fusion methods automatically removing ghost
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium
CN110728644A (en) * 2019-10-11 2020-01-24 厦门美图之家科技有限公司 Image generation method and device, electronic equipment and readable storage medium
CN112085673A (en) * 2020-08-27 2020-12-15 宁波大学 Multi-exposure image fusion method for removing strong ghost
CN112767281A (en) * 2021-02-02 2021-05-07 北京小米松果电子有限公司 Image ghost eliminating method, device, electronic equipment and storage medium
CN114897880A (en) * 2022-06-10 2022-08-12 西安建筑科技大学 Remote sensing image change detection method based on self-adaptive image regression
CN115115562A (en) * 2022-07-15 2022-09-27 展讯通信(天津)有限公司 Image fusion method and device
CN115272428A (en) * 2022-08-24 2022-11-01 声呐天空资讯顾问有限公司 Image alignment method and device, computer equipment and storage medium
CN116188343A (en) * 2023-02-27 2023-05-30 上海玄戒技术有限公司 Image fusion method and device, electronic equipment, chip and medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant