CN113077387B - Image processing method and device

Info

Publication number
CN113077387B
Authority
CN
China
Prior art keywords
attenuation
region
pixel
image
spliced
Prior art date
Legal status
Active
Application number
CN202110402157.7A
Other languages
Chinese (zh)
Other versions
CN113077387A (en)
Inventor
田仁富
丁红艳
陈磊
刘刚
曾峰
徐鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110402157.7A
Publication of CN113077387A
Application granted
Publication of CN113077387B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/14
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The application provides an image processing method and device. The method includes: acquiring a stitched image based on a first original image and a second original image, where the stitched image includes a stitching line, with a first stitching region on one side of the stitching line and a second stitching region on the other side; determining a first attenuation region from the first stitching region, and attenuating the first attenuation region based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between each pixel point in the first attenuation region and the stitching line; determining a second attenuation region from the second stitching region, and attenuating the second attenuation region based on the size of the second attenuation region, the attenuation speed value corresponding to the second attenuation region, and the distance between each pixel point in the second attenuation region and the stitching line; and generating a target image based on the attenuated first stitching region and the attenuated second stitching region. The technical solution of this application reduces color differences and brightness differences across the seam, giving a better image stitching result.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
In the fields of computer vision and image processing, image stitching refers to combining two or more images of the same scene that share overlapping areas into a panoramic image or a high-resolution image (i.e., an ultra-wide-view image). Image registration and image fusion are the two main stages of image stitching. In the image registration stage, the transformation relationship between the images is determined, a mathematical model of the image coordinate transformation is established, and the images are transformed into the same coordinate system by solving the parameters of the mathematical model. In the image fusion stage, the images transformed into the same coordinate system are stitched into a panoramic image or a high-resolution image.
Because the photosensitive devices of different cameras have poor consistency, different cameras use different exposure parameters, and so on, images acquired by different cameras can differ considerably (e.g., in color and/or brightness). When images with such large differences are stitched into a panoramic image or a high-resolution image, obvious seam marks appear in the result, and the image stitching effect is poor.
Disclosure of Invention
The application provides an image processing method, which comprises the following steps:
Acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
determining a first attenuation region from the first splicing region, and carrying out attenuation treatment on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line;
determining a second attenuation region from the second splicing region, and carrying out attenuation treatment on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line;
generating a target image based on the first splicing region after the attenuation treatment and the second splicing region after the attenuation treatment, wherein the target image comprises the splicing line, one side of the splicing line is the first splicing region after the attenuation treatment, and the other side of the splicing line is the second splicing region after the attenuation treatment.
The present application provides an image processing apparatus, the apparatus including:
the acquisition module is used for acquiring a spliced image based on the first original image and the second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
the processing module is used for determining a first attenuation region from the first stitching region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region, and the distance between the pixel points in the first attenuation region and the stitching line; and determining a second attenuation region from the second stitching region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region, and the distance between the pixel points in the second attenuation region and the stitching line;
the generating module is used for generating a target image based on the attenuated first stitching region and the attenuated second stitching region, wherein the target image includes the stitching line, one side of the stitching line being the attenuated first stitching region and the other side being the attenuated second stitching region.
The application provides an image processing method, which comprises the following steps:
acquiring a spliced image;
and executing smoothing processing on at least one row of pixel points in the spliced image, wherein the smoothing processing comprises the following steps:
selecting one pixel point from the row of pixel points as a seam point;
performing smoothing processing on a plurality of other pixel points except the seam point based on a target attenuation factor, and generating a plurality of processed other pixel points; the target attenuation factor is defined as a function taking as parameters the horizontal coordinate difference between each of the other pixel points and the seam point and a maximum attenuation factor, such that when the maximum attenuation factor is smaller than 1, the variation between the pixel value of each of the other pixel points after smoothing and its pixel value before smoothing increases as the horizontal coordinate difference increases, and when the maximum attenuation factor is larger than 1, this variation decreases as the horizontal coordinate difference increases; wherein the pixel value is a luminance value and/or a chrominance value.
As can be seen from the above technical solutions, in the embodiments of the present application, for the first attenuation region in the first stitching region, attenuation processing may be performed based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel points in the first attenuation region and the stitching line. For the second attenuation region in the second stitching region, attenuation processing may likewise be performed based on the size of the second attenuation region, the attenuation speed value corresponding to the second attenuation region, and the distance between the pixel points in the second attenuation region and the stitching line. After both attenuation regions have been processed, the attenuated first stitching region and the attenuated second stitching region can be stitched into a target image (such as a panoramic image or a high-resolution image). In this way, a first original image and a second original image with large differences (such as color differences and/or brightness differences) can be stitched into a target image without obvious seam marks; the color and brightness differences are reduced, the image stitching effect is good, and the target image transitions smoothly across the seam.
Drawings
FIG. 1 is a flow diagram of an image processing method in one embodiment of the present application;
FIGS. 2A-2E are schematic illustrations of stitched images in one embodiment of the present application;
FIG. 3 is a schematic illustration of an attenuation region in one embodiment of the present application;
FIG. 4 is a flow chart of an image processing method in another embodiment of the present application;
fig. 5 is a schematic structural view of an image processing apparatus in one embodiment of the present application;
fig. 6 is a hardware configuration diagram of an image processing apparatus in one embodiment of the present application.
Detailed Description
The embodiment of the application provides an image processing method which can be applied to front-end equipment (such as a video camera and the like) or back-end equipment (such as a background server and the like). If the image processing method is applied to the front-end equipment, the front-end equipment acquires two or more frames of images with overlapping areas and splices the two or more frames of images into a panoramic image or a high-resolution image. If the image processing method is applied to the back-end equipment, the front-end equipment acquires two or more frames of images with overlapping areas, the two or more frames of images are input to the back-end equipment, and the back-end equipment splices the two or more frames of images into a panoramic image or a high-resolution image.
If two frames of images are to be stitched into one panoramic or high-resolution image, the two frames are denoted as the first original image and the second original image and are stitched with the image processing method of this embodiment. If multiple frames are to be stitched into one panoramic or high-resolution image, take three frames (image a1, image a2 and image a3) as an example: images a1 and a2 are denoted as the first and second original images and stitched into image a4 with the image processing method of this embodiment; images a2 and a3 are denoted as the first and second original images and stitched into image a5; finally, images a4 and a5 are denoted as the first and second original images and stitched into the panoramic image or high-resolution image.
For convenience of description, in the following embodiments, two frames of images to be stitched are taken as an example, and the two frames of images are referred to as a first original image and a second original image.
Referring to fig. 1, a flowchart of an image processing method is shown, and the method may include:
step 101, acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image.
For example, a first original image and a second original image may be acquired, where the first original image and the second original image are images acquired by different cameras or the same camera, and the acquisition moments of the first original image and the second original image are the same, and the first original image and the second original image are images for the same scene.
And then, carrying out image registration on the first original image and the second original image, determining a transformation relation between the first original image and the second original image in the image registration process, establishing a mathematical model for image coordinate transformation based on the transformation relation, and transforming the first original image and the second original image into the same coordinate system by solving parameters of the mathematical model, wherein the image registration process is not limited.
Then, the first original image and the second original image are subjected to image fusion, and in the image fusion process, the first original image and the second original image which are transformed into the same coordinate system can be spliced into a frame of image, so that the image fusion process is not limited. For convenience of distinction, the stitched image is denoted as a stitched image, which may include stitching lines, a first stitching region, and a second stitching region.
If the first original image and the second original image need to be spliced left and right, in the spliced image, the spliced line can be a spliced vertical line, the left side of the spliced line is a first spliced area, and the right side of the spliced line is a second spliced area. Or if the first original image and the second original image need to be spliced up and down, in the spliced image, the spliced line can be a spliced transverse line, the upper side of the spliced line is a first spliced area, and the lower side of the spliced line is a second spliced area. For convenience of description, in the subsequent embodiments, the left-right stitching of the first original image and the second original image is taken as an example.
In one possible embodiment, the first original image and the second original image may have an overlapping area. In this case, one boundary line located in the overlapping area of the stitched image is taken as the stitching line; for example, when the stitching line is a vertical stitching line, one vertical boundary line located in the overlapping area of the stitched image is taken as the stitching line, i.e., the height of the stitching line equals the height of the image and each row of the stitching line contains one pixel point. On this basis, the first stitching region is the first original image and the second stitching region is a part of the second original image; or the first stitching region is a part of the first original image and the second stitching region is the second original image; or the first stitching region is a part of the first original image and the second stitching region is a part of the second original image.
For example, referring to fig. 2A, the region b1 in the first original image and the region b2 in the second original image are overlapping regions, that is, region b1 and region b2 are pictures of the same physical space. In this application scenario, region b1 and region b2 may be overlapped in the stitched image; the region where they coincide is the overlapping region, and one boundary line located in this overlapping region may be used as the stitching line.
The stitched image may be as shown in fig. 2B or fig. 2C. In fig. 2B, the left side of the stitching line is the first stitching region, which is a part of the first original image: the overlapping area in the first original image has been discarded (i.e., region b1 is discarded when region b1 and region b2 overlap). The right side of the stitching line is the second stitching region, which is the second original image: the overlapping area in the second original image is retained (i.e., region b2 is retained). In fig. 2B, the stitching line is located in the leftmost column of region b2; of course, the stitching line may be any column of region b2, and its position is not limited.
In fig. 2C, the left side of the stitching line is the first stitching region, which is the first original image: the overlapping area in the first original image (i.e., region b1) is retained. The right side of the stitching line is the second stitching region, which is a part of the second original image: the overlapping area in the second original image (i.e., region b2) has been discarded. In fig. 2C, the stitching line is located in the rightmost column of region b1; of course, the stitching line may be any column of region b1, and its position is not limited.
In another possible embodiment, the first original image and the second original image may not have an overlapping area, based on which, when one boundary line in the middle area of the stitched image is taken as a stitching line, for example, when the stitching line is one stitching vertical line, one boundary vertical line in the middle area of the stitched image is taken as the stitching line, that is, the height of the stitching line is the same as that of the image, and each row of the stitching line has one pixel point. Based on the above, the first stitching region is a first original image, and the second stitching region is a second original image.
The middle area of the spliced image can be one column of the middle position or a plurality of columns of the middle position, and in the spliced image, the width of the left side of the middle position is consistent with the width of the right side of the middle position.
Referring to fig. 2D, the first original image and the second original image have no overlapping area, i.e., no picture of the same physical space; the stitched image may be as shown in fig. 2E. The left side of the stitching line is the first stitching region, namely the first original image, whose last column adjoins the stitching line. The right side of the stitching line is the second stitching region, namely the second original image, whose first column adjoins the stitching line. In fig. 2E, the stitching line at the exact center is taken as an example; the stitching line may be at another position in the middle area, and its position is not limited.
In the above embodiment, for the stitched image, as shown in fig. 2B, 2C and 2E, for the first stitching region on the left side of the stitching line, the pixel values of the pixels in the first stitching region may be reserved, that is, the pixel values of the region on the left side of the stitching line remain unchanged. For the second stitching region on the right side of the stitching line, the pixel values of all the pixel points in the second stitching region can be reserved, namely, the pixel values of the region on the right side of the stitching line are kept unchanged.
For the pixel value of each pixel point in the stitching line (i.e. the pixel value of each pixel point in the column of the stitching line, there is only one pixel point in each row), the following manner may be adopted: the pixel value of each pixel point in the stitching line is determined based on the pixel values of N pixel points adjacent to the stitching line in the first stitching region and the pixel values of N pixel points adjacent to the stitching line in the second stitching region, and the stitching line is determined based on the pixel value of each pixel point in the stitching line, so that the stitching line in the stitching image can be obtained.
For example, assuming that the stitching line includes a pixel point c 1-a pixel point c8 (i.e., there are 8 rows of pixel points in the stitching line, and one pixel point exists in each row), for the pixel point c1 in the stitching line, the pixel value of the pixel point c1 may be determined based on the pixel values of N pixel points adjacent to the pixel point c1 on the left side of the pixel point c1 (i.e., the pixel values of N pixel points which are in the same row as the pixel point c1 and have a smaller distance from the pixel point c 1), and the pixel values of N pixel points adjacent to the pixel point c1 on the right side of the pixel point c 1. For example, the average value of the pixel values of 2N pixels is taken as the pixel value of the pixel c1, or the median of the pixel values of 2N pixels is taken as the pixel value of the pixel c1, or the maximum value of the pixel values of 2N pixels is taken as the pixel value of the pixel c1, or the minimum value of the pixel values of 2N pixels is taken as the pixel value of the pixel c1, and the determination method is not limited. Similarly, the pixel value of the pixel point c2 to the pixel point c8 can be obtained. Then, the pixel values of pixel point c 1-pixel point c8 may be assembled into a stitching line.
In the above embodiment, the value of N may be empirically configured, such as 3, 5, etc., which is not limited.
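As an illustration of how a seam-line pixel value can be derived from its 2N neighbors, consider the following Python sketch. The function name, NumPy usage, and single-channel image layout are assumptions of this illustration (the embodiment equally allows the median, maximum, or minimum instead of the mean), and the neighbor windows are assumed to lie within the image:

```python
import numpy as np

def seam_pixel_values(stitched: np.ndarray, x0: int, n: int = 3) -> np.ndarray:
    """For each row, fuse the N pixels left and N pixels right of seam column x0.

    stitched: H x W single-channel stitched image; x0: seam column index.
    Returns an H-vector holding one seam pixel value per row (mean variant).
    """
    left = stitched[:, x0 - n:x0].astype(np.float64)           # N neighbors in the first stitching region
    right = stitched[:, x0 + 1:x0 + 1 + n].astype(np.float64)  # N neighbors in the second stitching region
    neighbors = np.concatenate([left, right], axis=1)          # 2N values per row
    return neighbors.mean(axis=1)                              # or median / max / min per the text
```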
In the above embodiment, the first original image and the second original image may be still images, such as images collected by an intelligent terminal or map images. The stitching of such images can be offline processing with low real-time requirements, so the first original image and the second original image may be stitched in an offline manner. Alternatively, the first original image and the second original image may be video images, such as frames of a video stream. The stitching of such images can be online processing with high real-time requirements, so the first original image and the second original image may be stitched in an online manner.
Step 102, determining a first attenuation region from a first splicing region (i.e. the splicing region located at the left side of the stitching line) in the spliced image, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the stitching line, so as to obtain the attenuated first splicing region.
For example, for a first stitching region in the stitched image, M pixel points adjacent to the stitching line in the first stitching region may be used as a first attenuation region, and then the first attenuation region is determined.
For example, assume that the stitching line includes a pixel point c 1-a pixel point c8, and for the pixel point c1 in the stitching line, M pixel points adjacent to the pixel point c1 in the first stitching region (i.e., M pixel points in the same row as the pixel point c1 and having a smaller distance from the pixel point c 1) belong to the first attenuation region. For the pixel point c2 in the stitching line, M pixel points adjacent to the pixel point c2 in the first stitching region belong to the first attenuation region, and so on. Referring to fig. 3, a schematic view of a first attenuation region is shown.
As can be seen from fig. 3, assuming that M is 6, the pixel point d11, the pixel point d12, the pixel point d13, the pixel point d14, the pixel point d15, and the pixel point d16 adjacent to the pixel point c1 in the first stitching region belong to the first attenuation region, the pixel point d 21-the pixel point d26 adjacent to the pixel point c2 in the first stitching region belong to the first attenuation region, and so on, finally, the first attenuation region in the first stitching region is obtained.
For example, the size of the first attenuation region may include a width value (i.e., a number of horizontal pixels) of the first attenuation region and/or a height value (i.e., a number of vertical pixels) of the first attenuation region. The size of the first attenuation region is the width value of the first attenuation region when the first attenuation region is positioned at the left side or the right side of the splice line, and the size of the first attenuation region is the height value of the first attenuation region when the first attenuation region is positioned at the upper side or the lower side of the splice line. In this embodiment, the width value of the first attenuation region is taken as an example.
For example, when M pixels adjacent to the stitching line in the first stitching region are used as the first attenuation region, the width value of the first attenuation region is M, which indicates that the number of the transverse pixels is M.
The attenuation speed value corresponding to the first attenuation region indicates the change speed of the attenuation degree value, that is, the greater the attenuation speed value is, the faster the change speed of the attenuation degree value is, and the smaller the attenuation speed value is, the slower the change speed of the attenuation degree value is. The attenuation speed value corresponding to the first attenuation region is an empirical value which is already configured, that is, the attenuation speed value corresponding to the first attenuation region can be empirically configured, which is not limited.
The distance between the pixel point and the stitching line in the first attenuation region represents the number of pixel points spaced between the pixel point and the stitching line. For example, referring to fig. 3, the pixel point d11 is spaced 1 pixel point from the pixel point c1, so the distance between d11 and the stitching line is 1; the pixel d12 is separated from the pixel c1 by 2 pixels, so the distance between d12 and the stitching line is 2, and so on.
In one possible implementation manner, after determining the first attenuation region from the first splicing region, the first attenuation region may be subjected to attenuation processing based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point in the first attenuation region and the stitching line, so as to obtain the attenuated first splicing region, and the process may include the following steps:
Step S11, for each pixel point in the first attenuation region, determining the attenuation degree value corresponding to the pixel point based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point and the stitching line. For example, the attenuation degree value may be proportional to the size, inversely proportional to the attenuation speed value, and inversely proportional to the distance.
For example, for the pixel (x, y), the attenuation degree value corresponding to the pixel (x, y) may be determined by using the formula (1) and the formula (2), where x represents the abscissa of the pixel and y represents the ordinate of the pixel.
T_{x,y} = abs(x - x0) / max_off    formula (1)
alpha_{x,y} = 1 - (T_{x,y})^k    formula (2)
In formulas (1) and (2), the pixel point (x, y) represents any pixel point in the first attenuation region, and x0 represents the abscissa of the pixel point in the stitching line lying in the same row as (x, y); that is, x denotes the x-th pixel point of the y-th row and x0 denotes the x0-th pixel point of the y-th row. Obviously, abs(x - x0) represents the distance between the pixel point (x, y) and the stitching line, where abs denotes the absolute value.
max_off represents the size of the first attenuation region, i.e., the width value of the first attenuation region. k represents the attenuation speed value corresponding to the first attenuation region and is a configured empirical value, such as 2, 3 or 4. alpha_{x,y} represents the attenuation degree value corresponding to the pixel point (x, y). (T_{x,y})^k denotes T_{x,y} raised to the power k, where T_{x,y} is an intermediate variable determined by abs(x - x0) and max_off.
In summary, for each pixel point (x, y) in the first attenuation region, once the size max_off of the first attenuation region, the attenuation speed value k corresponding to the first attenuation region, and the distance abs(x - x0) between the pixel point (x, y) and the stitching line are known, the attenuation degree value alpha_{x,y} corresponding to the pixel point (x, y) can be determined from formulas (1) and (2). As formulas (1) and (2) show, the attenuation degree value alpha_{x,y} is proportional to the size max_off (the larger max_off, the larger alpha_{x,y}), inversely proportional to the attenuation speed value k (the larger k, the smaller alpha_{x,y}), and inversely proportional to the distance abs(x - x0) (the larger abs(x - x0), the smaller alpha_{x,y}).
As can be seen from formula (2), the larger the attenuation speed value k, the faster the attenuation degree value alpha_{x,y} changes, and the smaller k, the slower alpha_{x,y} changes. For example, alpha_{x,y} takes values in the range 0 to 1, with maximum value 1 and minimum value 0, decreasing from 1 toward 0. Obviously, when k is greater than 1, a larger k makes alpha_{x,y} change faster and a smaller k makes it change slower.
Of course, the formulas (1) and (2) are only examples of determining the attenuation degree value, and the manner of determining the attenuation degree value is not limited as long as the attenuation degree value can be determined based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point and the stitching line.
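To make the roles of max_off, k, and the distance concrete, here is a minimal Python sketch of formulas (1) and (2); the function name and signature are illustrative only:

```python
def attenuation_degree(x: int, x0: int, max_off: int, k: float) -> float:
    """Attenuation degree value alpha_{x,y} per formulas (1) and (2).

    x: column of the pixel point; x0: seam column in the same row;
    max_off: width of the first attenuation region;
    k: configured attenuation speed value (e.g. 2, 3, 4).
    """
    t = abs(x - x0) / max_off  # formula (1): normalized distance, in (0, 1]
    return 1.0 - t ** k        # formula (2): near 1 at the seam, 0 at the region edge
```

For instance, with max_off = 6 and k = 2, the pixel adjacent to the seam gets alpha of about 0.97 while the outermost pixel of the region gets alpha = 0, so the attenuation fades out with distance from the seam.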
Step S12, for each pixel point in the first attenuation area, determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value corresponding to the pixel point. For example, a maximum attenuation factor corresponding to the pixel point is obtained first, and a target attenuation factor corresponding to the pixel point is determined based on the attenuation degree value and the maximum attenuation factor, i.e. the target attenuation factor corresponding to each pixel point in the first attenuation region is obtained.
Illustratively, obtaining the maximum attenuation factor corresponding to the pixel point may include, but is not limited to:
and acquiring a configured maximum attenuation factor corresponding to the pixel point, for example, pre-configuring an attenuation factor, and taking the attenuation factor as the maximum attenuation factor corresponding to each pixel point in the first attenuation area.
Or determining an initial scale factor corresponding to the stitching line based on the first stitching region and the second stitching region, and filtering the initial scale factor to obtain a target scale factor corresponding to the stitching line. Then, a maximum attenuation factor corresponding to the pixel point may be determined based on the target scale factor.
In a possible implementation manner, in step S12, for each pixel point in the first attenuation area, the following steps may be used to determine the target attenuation factor corresponding to the pixel point:
step S121, determining the initial scale factor corresponding to the stitching line based on the first stitching region and the second stitching region, for example, equation (3) may be used to determine the initial scale factor R corresponding to the stitching line x0,y
R x0,y =(P x0+1,y +P x0+2,y +…P x0+n,y +1)/(P x0-1,y +P x0-2,y +…P x0-n,y +1) equation (3)
For example, the pixels in the stitching line are denoted as (x 0 Y), while the pixel point (x 0 The initial scale factor corresponding to y) is denoted as R x0,y 。x 0 Is the abscissa of the pixel points in the stitching line, and the abscissas of all the pixel points in the stitching line are the same and are all x 0 Y is the ordinate of the pixel points in the stitching line, and the ordinate of all the pixel points in the stitching line are different and are sequentially 1,2 and …. Referring to fig. 3, the stitching line includes a pixel point c 1-a pixel point c8, and the pixel point c1 is (x 0 1), the pixel point c2 is (x 0 2), …, the pixel point c8 is (x 0 ,8)。
In the formula (3), the pixel point (x 0+1 Y represents a pixel point (x) 0 Y) the first pixel point on the right side, P x0+1,y Representing pixel points (x) 0+1 Y), pixel point (x 0+2 Y represents a pixel point (x) 0 Y) the second pixel point on the right side, P x0+2,y Representing pixel points (x) 0+2 Y), …, pixel point (x 0+n Y represents a pixel point (x) 0 Y) the nth pixel point on the right side, P x0+n,y Representing pixel points (x) 0+n Y) pixel values. Obviously, regarding the pixel point (x 0 Y) are all pixels in the second stitching region.
In the formula (3), the pixel point (x 0-1 Y represents a pixel point (x) 0 Y) the first pixel point on the left side, P x0-1,y Representing pixel points (x) 0-1 Y), pixel point (x 0-2 Y represents a pixel point (x) 0 Y) the second pixel point on the left side, P x0-2,y Representing pixel points (x) 0-2 Y), …, pixel point (x 0-n Y represents a pixel point (x) 0 Y) the nth pixel point on the left side, P x0-n,y Representing pixel points (x) 0-n Y) pixel values. Obviously, regarding the pixel point (x 0 Y) are all pixels in the first stitching region.
In summary, the pixel point (x 0 Y) the corresponding initial scale factor. Of course, equation (3) is only an example, and the determination manner is not limited as long as the initial scale factor can be determined.
Referring to fig. 3, if y is 1, the initial scale factor corresponding to the pixel point c1 is determined by using the formula (3), if y is 2, the initial scale factor corresponding to the pixel point c2 is determined by using the formula (3), and so on, the initial scale factor corresponding to each pixel point in the stitching line can be determined by using the formula (3).
In the formula (3), for any pixel point (i.e. any vertical component y) in the stitching line, selecting n pixel points on the left side of the stitching line and n pixel points on the right side of the stitching line, and determining an initial scale factor R x0,y ,R x0,y For the initial scale factor of row y, n may be less than x 0 ,x 0 Is the abscissa of the pixel points in the stitching line. The addition of 1 in equation (3) is to prevent zero removal and an initial scale factor of 0.
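A minimal sketch of formula (3) might look as follows, assuming a single-channel NumPy image with the n-pixel windows in bounds (the +1 terms guard against division by zero, as noted above):

```python
import numpy as np

def initial_scale_factor(stitched: np.ndarray, x0: int, y: int, n: int) -> float:
    """Initial scale factor R_{x0,y} for seam row y per formula (3)."""
    right_sum = float(stitched[y, x0 + 1:x0 + 1 + n].sum())  # n pixels in the second stitching region
    left_sum = float(stitched[y, x0 - n:x0].sum())           # n pixels in the first stitching region
    return (right_sum + 1.0) / (left_sum + 1.0)
```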
Step S122, filtering the initial scale factors to obtain the target scale factors corresponding to the stitching line.
For example, for a pixel point (x0, y) in the stitching line, after the initial scale factor R_{x0,y} corresponding to (x0, y) is obtained, R_{x0,y} may be determined directly as the target scale factor corresponding to (x0, y); alternatively, R_{x0,y} may be filtered to obtain the target scale factor corresponding to (x0, y). For example, based on the initial scale factors corresponding to the m pixel points above (x0, y) (all located in the stitching line) and the initial scale factors corresponding to the m pixel points below (x0, y) (all located in the stitching line), the initial scale factor R_{x0,y} is filtered to obtain the target scale factor T_{x0,y} corresponding to (x0, y). For example, the average of the initial scale factors corresponding to the m pixel points above, the initial scale factors corresponding to the m pixel points below, and R_{x0,y} itself is determined as the target scale factor T_{x0,y} corresponding to (x0, y).
m is a smoothing radius and may be empirically configured, such as 1, 2, 3, etc.; m is taken as 1 in the following.
Referring to fig. 3, the stitching line may include pixel point c1 - pixel point c8, where pixel point c1 is (x0, 1), pixel point c2 is (x0, 2), …, and pixel point c8 is (x0, 8). When filtering the initial scale factor R_{x0,1} corresponding to pixel point c1, there is no pixel point above c1, and the m (i.e., 1) pixel point below c1 is pixel point c2, so the average of the initial scale factor R_{x0,1} and the initial scale factor R_{x0,2} may be determined as the target scale factor T_{x0,1} corresponding to pixel point (x0, 1).
Similarly, when filtering the initial scale factor R_{x0,2} corresponding to pixel point c2, the m pixel point above c2 is pixel point c1 and the m pixel point below c2 is pixel point c3, so the average of the initial scale factors R_{x0,1}, R_{x0,2} and R_{x0,3} may be determined as the target scale factor T_{x0,2} corresponding to pixel point (x0, 2).
Similarly, the target scale factor corresponding to each pixel point in the stitching line can be obtained.
In summary, for each pixel point (x0, y) in the stitching line, the initial scale factor R_{x0,y} can be smoothed and filtered to obtain the target scale factor T_{x0,y} corresponding to (x0, y).
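The per-row smoothing described above can be sketched as a clipped mean filter; this assumes the per-row initial scale factors have been collected into a 1-D array, and at the borders the window is shortened, matching the pixel point c1 example:

```python
import numpy as np

def target_scale_factors(r: np.ndarray, m: int = 1) -> np.ndarray:
    """Smooth the per-row initial scale factors R with smoothing radius m."""
    h = len(r)
    t = np.empty(h)
    for y in range(h):
        lo, hi = max(0, y - m), min(h, y + m + 1)  # clip the window at the image borders
        t[y] = r[lo:hi].mean()                     # average of up to 2m+1 factors
    return t
```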
Step S123, for each pixel point (x, y) in the first attenuation region, determining the maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor. For example, based on the ordinate y of the pixel point (x, y), the pixel point (x0, y) with the same ordinate is determined from the stitching line, and the maximum attenuation factor corresponding to the pixel point (x, y) is determined based on the target scale factor T_{x0,y} corresponding to the pixel point (x0, y).
For example, for each pixel point (x, 1) in the first attenuation region, i.e., all pixel points with y equal to 1 in the first attenuation region, the maximum attenuation factor is determined based on the target scale factor T_{x0,1} corresponding to the pixel point (x0, 1). For each pixel point (x, 2) in the first attenuation region, the maximum attenuation factor is determined based on the target scale factor T_{x0,2} corresponding to the pixel point (x0, 2), and so on.
Illustratively, determining the maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor T_{x0,y} may include, but is not limited to, the following formula: LDiv_{x,y} = (1 + T_{x0,y}) / 2. Of course, this formula is merely an example, and the determination method is not limited thereto. It should be noted that all pixel points with y equal to 1 in the first attenuation region share the same maximum attenuation factor LDiv_{x,1}, all pixel points with y equal to 2 share the same maximum attenuation factor LDiv_{x,2}, and so on.
Step S124, for each pixel point (x, y) in the first attenuation region, determining the target attenuation factor corresponding to the pixel point (x, y) based on the attenuation degree value corresponding to the pixel point (x, y) and the maximum attenuation factor corresponding to the pixel point (x, y). For example, formula (4) may be used to determine the target attenuation factor LS_{x,y}:
LS_{x,y} = 1 - alpha_{x,y} + LDiv_{x,y} * alpha_{x,y}    formula (4)
alpha_{x,y} represents the attenuation degree value corresponding to the pixel point (x, y); see step S11 for its determination. LDiv_{x,y} represents the maximum attenuation factor corresponding to the pixel point (x, y); see steps S121 - S123 for its determination, which is not repeated here. LS_{x,y} represents the target attenuation factor corresponding to the pixel point (x, y).
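Combining steps S123 and S124, a sketch of the left-side target attenuation factor (the function and its name are illustrative, not the only possible form) is:

```python
def left_target_attenuation(alpha: float, t_scale: float) -> float:
    """Target attenuation factor LS_{x,y} for the first attenuation region.

    alpha: attenuation degree value from formulas (1)-(2);
    t_scale: target scale factor T_{x0,y} of the same row.
    """
    ldiv = (1.0 + t_scale) / 2.0       # step S123: maximum attenuation factor LDiv_{x,y}
    return 1.0 - alpha + ldiv * alpha  # formula (4)
```

Note that LS equals 1 when alpha is 0 (far from the seam, no change) and equals LDiv when alpha is 1 (at the seam, full correction), which is what lets the correction fade out smoothly.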
In step S12, a target attenuation factor corresponding to each pixel point in the first attenuation region is obtained.
Step S13, for each pixel point in the first attenuation area, determining a target pixel value of the pixel point based on a target attenuation factor corresponding to the pixel point and an original pixel value of the pixel point. For example, the target pixel value of the pixel may be determined based on the product of the target attenuation factor corresponding to the pixel and the original pixel value of the pixel, for example, the target pixel value may be determined by the following formula (5).
LT_{x,y} = LP_{x,y} * LS_{x,y}    formula (5)
In formula (5), LP_{x,y} represents the original pixel value of the pixel point (x, y), LS_{x,y} represents the target attenuation factor corresponding to the pixel point (x, y), and LT_{x,y} represents the target pixel value of the pixel point (x, y).
Step S14, generating the attenuated first stitching region based on the target pixel value of each pixel point in the first attenuation region; that is, the first stitching region in the stitched image becomes the attenuated first stitching region. The attenuated first stitching region may include the attenuated first attenuation region and an unattenuated region. The attenuated first attenuation region consists of the target pixel value of each pixel point in the first attenuation region, while the unattenuated region consists of the original pixel value of each of its pixel points, i.e., the pixel values in the unattenuated region remain unchanged.
At this point, attenuation processing of the first stitching region in the stitched image is complete; that is, the stitched image now contains the attenuated first stitching region rather than the first stitching region before attenuation.
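Putting steps S11 - S14 together, a compact sketch of the whole left-side attenuation pass could look as follows; all names, the float working copy, and the single-channel layout are assumptions of this illustration:

```python
import numpy as np

def attenuate_first_region(stitched: np.ndarray, x0: int, max_off: int,
                           k: float, t_scale: np.ndarray) -> np.ndarray:
    """Apply formulas (1)-(5) to the max_off columns left of seam column x0.

    t_scale: per-row target scale factors T_{x0,y}; pixels outside the
    attenuation region keep their original values (the unattenuated region).
    """
    out = stitched.astype(np.float64).copy()
    for y in range(out.shape[0]):
        ldiv = (1.0 + t_scale[y]) / 2.0                 # maximum attenuation factor LDiv
        for x in range(x0 - max_off, x0):               # the first attenuation region
            alpha = 1.0 - (abs(x - x0) / max_off) ** k  # formulas (1)-(2)
            ls = 1.0 - alpha + ldiv * alpha             # formula (4)
            out[y, x] *= ls                             # formula (5): LT = LP * LS
    return out
```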
And 103, determining a second attenuation region from a second splicing region (namely the splicing region positioned on the right side of the splicing line) in the spliced image, and carrying out attenuation treatment on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line to obtain the attenuated second splicing region.
For example, for a second stitching region in the stitched image, M pixel points adjacent to the stitching line in the second stitching region may be used as a second attenuation region, and then the second attenuation region is determined.
For example, the dimensions of the second attenuation region may include a width value of the second attenuation region and/or a height value of the second attenuation region. The dimension is a width value when the second attenuation region is located on the left or right side of the splice line and is a height value when the second attenuation region is located on the upper or lower side of the splice line.
The decay rate value corresponding to the second decay region represents the rate of change of the decay degree value, and the decay rate value corresponding to the second decay region is a configured empirical value. The distance between the pixel point and the stitching line in the second attenuation region represents the number of pixel points spaced between the pixel point and the stitching line.
By way of example, the second attenuation region may be subjected to an attenuation process using the following steps:
step S21, for each pixel point in the second attenuation area, determining an attenuation degree value corresponding to the pixel point based on the size of the second attenuation area, an attenuation speed value corresponding to the second attenuation area and a distance between the pixel point and the stitching line. For example, the attenuation level value may be proportional to the dimension, the attenuation level value may be inversely proportional to the attenuation speed value, and the attenuation level value may be inversely proportional to the distance.
The implementation process of step S21 is similar to that of step S11, and will not be described herein.
Step S22, for each pixel point in the second attenuation area, determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value corresponding to the pixel point. For example, a maximum attenuation factor corresponding to the pixel point is obtained first, and a target attenuation factor corresponding to the pixel point is determined based on the attenuation degree value and the maximum attenuation factor, so as to obtain a target attenuation factor corresponding to each pixel point in the second attenuation region.
The process of obtaining the maximum attenuation factor can refer to step S12, which is not described herein.
In a possible implementation manner, in step S22, for each pixel point in the second attenuation area, the following steps may be used to determine the target attenuation factor corresponding to the pixel point:
step S221, determining an initial scale factor corresponding to the stitching line based on the first stitching region and the second stitching region, where the implementation process of step S221 is the same as the implementation process of step S121, and will not be described herein.
Step S222, filtering the initial scale factors to obtain target scale factors corresponding to the stitching lines.
The implementation process of step S222 is the same as that of step S122, and will not be described herein.
Step S223, for each pixel point (x, y) in the second attenuation region, determining the maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor. For example, the maximum attenuation factor corresponding to the pixel point (x, y) is determined based on the target scale factor T_{x0,y} corresponding to the pixel point (x0, y) in the same row of the stitching line; for instance, the maximum attenuation factor RDiv_{x,y} corresponding to the pixel point (x, y) may be determined by the following formula: RDiv_{x,y} = (1 + 1/T_{x0,y}) / 2.
The implementation of step S223 is similar to that of step S123, except that the maximum attenuation factor LDiv_{x,y} corresponding to a pixel point (x, y) in the first attenuation region is replaced by the maximum attenuation factor RDiv_{x,y} corresponding to a pixel point (x, y) in the second attenuation region; the description is not repeated here.
Step S224, for each pixel point (x, y) in the second attenuation region, determining the target attenuation factor corresponding to the pixel point (x, y) based on the attenuation degree value corresponding to the pixel point (x, y) and the maximum attenuation factor corresponding to the pixel point (x, y). For example, the target attenuation factor RS_{x,y} may be determined by the following formula: RS_{x,y} = 1 - alpha_{x,y} + RDiv_{x,y} * alpha_{x,y}. The implementation of step S224 is similar to that of step S124, except that the target attenuation factor LS_{x,y} corresponding to a pixel point (x, y) in the first attenuation region is replaced by the target attenuation factor RS_{x,y} corresponding to a pixel point (x, y) in the second attenuation region; the description is not repeated here.
Step S23, for each pixel point in the second attenuation area, determining a target pixel value of the pixel point based on a target attenuation factor corresponding to the pixel point and an original pixel value of the pixel point.
For example, the target pixel value may be determined by the following formula: RT_{x,y} = RP_{x,y} * RS_{x,y}, where RP_{x,y} represents the original pixel value of the pixel point (x, y) and RT_{x,y} represents the target pixel value of the pixel point (x, y).
Step S24, generating the attenuated second stitching region based on the target pixel value of each pixel point in the second attenuation region; that is, the second stitching region in the stitched image becomes the attenuated second stitching region. The attenuated second stitching region may include the attenuated second attenuation region and an unattenuated region. The attenuated second attenuation region consists of the target pixel value of each pixel point in the second attenuation region, while the unattenuated region consists of the original pixel value of each of its pixel points, i.e., the pixel values in the unattenuated region remain unchanged.
At this point, attenuation processing of the second stitching region in the stitched image is complete; that is, the stitched image now contains the attenuated second stitching region rather than the second stitching region before attenuation.
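The right-side pass mirrors the left-side sketch shown earlier, differing only in the maximum attenuation factor RDiv = (1 + 1/T) / 2 and the columns visited; again, all names are illustrative assumptions:

```python
import numpy as np

def attenuate_second_region(stitched: np.ndarray, x0: int, max_off: int,
                            k: float, t_scale: np.ndarray) -> np.ndarray:
    """Attenuate the max_off columns right of seam column x0 (steps S21-S24)."""
    out = stitched.astype(np.float64).copy()
    for y in range(out.shape[0]):
        rdiv = (1.0 + 1.0 / t_scale[y]) / 2.0           # maximum attenuation factor RDiv_{x,y}
        for x in range(x0 + 1, x0 + 1 + max_off):       # the second attenuation region
            alpha = 1.0 - (abs(x - x0) / max_off) ** k  # same attenuation degree value
            rs = 1.0 - alpha + rdiv * alpha             # RS = 1 - alpha + RDiv * alpha
            out[y, x] *= rs                             # RT = RP * RS
    return out
```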
And 104, generating a target image based on the attenuated first splicing region and the attenuated second splicing region, wherein the target image comprises a splicing line, one side of the splicing line is the attenuated first splicing region, and the other side of the splicing line is the attenuated second splicing region.
For example, after the stitched image is obtained, the attenuation processing may be performed on the first stitched area in the stitched image to obtain the first stitched area after the attenuation processing, and the attenuation processing may be performed on the second stitched area in the stitched image to obtain the second stitched area after the attenuation processing. Thus, the first stitching region after the attenuation treatment, the stitching lines in the stitching image and the second stitching region after the attenuation treatment can be formed into a frame of target image, and the target image is a panoramic image or a high-resolution image (namely, an ultra-wide view angle image).
For example, the above execution sequence is only an example given for convenience of description, and in practical application, the execution sequence between steps may be changed, which is not limited. Moreover, in other embodiments, the steps of the corresponding methods need not be performed in the order shown and described herein, and the methods may include more or less steps than described herein. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; various steps described in this specification, in other embodiments, may be combined into a single step.
As can be seen from the above technical solution, in the embodiment of the present application, after the first attenuation region in the first stitching region and the second attenuation region in the second stitching region are attenuated, the attenuated first stitching region and the attenuated second stitching region can be stitched into a target image (such as a panoramic image or a high-resolution image). A first original image and a second original image with large differences (such as color differences and/or brightness differences) are thus stitched into a target image without obvious seam marks: the color and brightness differences are reduced, the image stitching effect is better, and the target image transitions smoothly.
In one possible embodiment, the first stitching region may include a first low frequency component and a first high frequency component, the second stitching region may include a second low frequency component and a second high frequency component, and the stitched image may include a first stitched image and a second stitched image, based on which, referring to fig. 4, another flowchart of an image processing method is shown, and the method may include the steps of:
step 401, acquiring a first stitched image and a second stitched image, wherein one side of a stitching line of the first stitched image is a first low-frequency component, the other side of the stitching line is a second low-frequency component, one side of the stitching line of the second stitched image is a first high-frequency component, and the other side of the stitching line is a second high-frequency component.
Illustratively, the implementation of step 401 may be similar to that of step 101.
For example, a first low frequency component and a first high frequency component in a first splicing region may be acquired, and a second low frequency component and a second high frequency component in a second splicing region may be acquired. For convenience of distinction, the low frequency component in the first stitching region may be denoted as a first low frequency component (may also be referred to as a first base layer image or a first blur layer image), and the high frequency component in the first stitching region may be denoted as a first high frequency component (may also be referred to as a first detail layer image). And, the low frequency component in the second stitching region may be denoted as a second low frequency component (which may also be referred to as a second base layer image or a second blur layer image), and the high frequency component in the second stitching region may be denoted as a second high frequency component (which may also be referred to as a second detail layer image).
Illustratively, after the first splicing region is obtained, a low-pass filtering process may be performed on the first splicing region to obtain a first low-frequency component in the first splicing region. Then, a first high-frequency component in the first splicing region is determined based on the first splicing region and the first low-frequency component, for example, a difference between the first splicing region and the first low-frequency component is determined as the first high-frequency component.
Illustratively, after the second splicing region is obtained, a low-pass filtering process may be performed on the second splicing region to obtain a second low-frequency component in the second splicing region. Then, a second high-frequency component in the second splicing region is determined based on the second splicing region and the second low-frequency component, for example, a difference between the second splicing region and the second low-frequency component is determined as the second high-frequency component.
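As an illustration, here is a minimal sketch of this frequency division in Python. The choice of a Gaussian blur and its kernel size are assumptions of this sketch, since the description above only requires some low-pass filter:

```python
import cv2
import numpy as np

def split_frequency(region):
    # Low-pass filter the region to obtain its low-frequency component
    # (the base / blur layer); the kernel size here is illustrative.
    region = region.astype(np.float32)
    low = cv2.GaussianBlur(region, (31, 31), 0)
    # The high-frequency component (detail layer) is the difference between
    # the region and its low-frequency component, as described above.
    high = region - low
    return low, high
```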
In one possible embodiment, since high-frequency content is the main cause of ghosting and low-frequency content is the main cause of luminance and chrominance differences, frequency division processing may be performed on the first stitching region and the second stitching region respectively, to obtain the first low-frequency component and first high-frequency component of the first stitching region and the second low-frequency component and second high-frequency component of the second stitching region. The first low-frequency component is then fused with the second low-frequency component, and the first high-frequency component is fused with the second high-frequency component. Because the low-frequency and high-frequency components are fused differently, the luminance and chrominance differences can be improved without introducing ghosting into the image.
In one possible implementation, the first low frequency component and the second low frequency component may be stitched into a first stitched image that includes a first stitching line (i.e., a stitching line in the first stitched image) that is on one side of the first low frequency component and on the other side of the second low frequency component.
And, the first high frequency component and the second high frequency component may be stitched into a second stitched image, the second stitched image including a second stitching line (i.e., a stitching line in the second stitched image), one side of the second stitching line being the first high frequency component, and the other side of the second stitching line being the second high frequency component.
In step 402, for a first low-frequency component in the first stitched image, a first attenuation region (hereinafter referred to as a first attenuation region E1) is determined from the first low-frequency component, and attenuation processing is performed on the first attenuation region E1 based on the size of the first attenuation region E1 (hereinafter referred to as a first size), a configured attenuation speed value (hereinafter referred to as a first attenuation speed value) corresponding to the first attenuation region E1, and a distance between a pixel point in the first attenuation region E1 and the first stitching line, so as to obtain a first low-frequency component after the attenuation processing.
For example, step 402 is similar to step 102, the first splicing area in step 102 is replaced by the first low frequency component, and the first attenuation area is denoted as E1, which is not repeated here.
In step 403, for the first high-frequency component in the second stitched image, a first attenuation region (hereinafter referred to as a first attenuation region E2) is determined from the first high-frequency component, and the first attenuation region E2 is subjected to attenuation processing based on the size of the first attenuation region E2 (hereinafter referred to as a second size), the configured attenuation speed value (hereinafter referred to as a second attenuation speed value) corresponding to the first attenuation region E2, and the distance between the pixel point in the first attenuation region E2 and the second stitching line, so as to obtain the first high-frequency component after the attenuation processing.
For example, step 403 is similar to step 102, and the first splicing area in step 102 is replaced by the first high frequency component, and the first attenuation area is denoted as E2, which is not repeated here.
In step 404, for the second low frequency component in the first stitched image, a second attenuation region (hereinafter referred to as a second attenuation region E3) is determined from the second low frequency component, and attenuation processing is performed on the second attenuation region E3 based on the size of the second attenuation region E3 (hereinafter referred to as a third size), the configured attenuation speed value (hereinafter referred to as a third attenuation speed value) corresponding to the second attenuation region E3, and the distance between the pixel point in the second attenuation region E3 and the first stitching line, so as to obtain the second low frequency component after the attenuation processing.
For example, step 404 is similar to step 103, the second splicing area in step 103 is replaced by a second low frequency component, and the second attenuation area is denoted as E3, which is not repeated here.
In step 405, for the second high-frequency component in the second stitched image, a second attenuation region (hereinafter referred to as a second attenuation region E4) is determined from the second high-frequency component, and attenuation processing is performed on the second attenuation region E4 based on the size of the second attenuation region E4 (hereinafter referred to as a fourth size), the configured attenuation speed value (hereinafter referred to as a fourth attenuation speed value) corresponding to the second attenuation region E4, and the distance between the pixel point in the second attenuation region E4 and the second stitching line, so as to obtain the second high-frequency component after the attenuation processing.
For example, step 405 is similar to step 103, and the second splicing area in step 103 is replaced by a second high frequency component, and the second attenuation area is denoted as E4, which is not repeated here.
In the above embodiment, referring to step 402 and step 403, the first size of the first attenuation region E1 may be larger than the second size of the first attenuation region E2, and the first attenuation speed value corresponding to the first attenuation region E1 is smaller than or equal to the second attenuation speed value corresponding to the first attenuation region E2. Alternatively, the first size of the first attenuation region E1 may be equal to the second size of the first attenuation region E2, and the first attenuation speed value corresponding to the first attenuation region E1 is smaller than the second attenuation speed value corresponding to the first attenuation region E2.
The reason for adopting this design is as follows (see equations (1), (2) and (4)):

For the low-frequency components, the larger the first size, the larger alpha_{x,y} is and the smaller the target attenuation factor LS_{x,y} is (when LDiv_{x,y} is greater than 0 and less than 1, this conclusion follows from equation (4)). That is, when attenuating from the pixel points on the right side of the first attenuation region toward the pixel points on the left side, the target attenuation factors increase in sequence, but the rate of change of the target attenuation factor is small, so the low-frequency component is attenuated slowly (i.e., the number of attenuated pixel points is large), the pixel points transition smoothly, and the low-frequency characteristics of the first stitching region are better reflected. For the high-frequency components, the smaller the second size, the smaller alpha_{x,y} is and the larger the target attenuation factor LS_{x,y} is. That is, when attenuating from the pixel points on the right side of the first attenuation region toward the pixel points on the left side, the target attenuation factors increase in sequence and the rate of change of the target attenuation factor is large, so the high-frequency component is attenuated rapidly (i.e., the number of attenuated pixel points is small); the high-frequency characteristics of the first stitching region are retained, and although they change rapidly, the visual experience is not affected. In summary, the first size may be larger and the second size smaller; that is, the first size of the first attenuation region E1 may be larger than the second size of the first attenuation region E2.
For the low-frequency components, the smaller the first attenuation speed value, the larger alpha_{x,y} is and the smaller the target attenuation factor LS_{x,y} is. That is, when attenuating from the pixel points on the right side of the first attenuation region toward the pixel points on the left side, the target attenuation factors increase in sequence, but the rate of change of the target attenuation factor is small, so the low-frequency component is attenuated slowly, the pixel points transition smoothly, and the low-frequency characteristics of the first stitching region are better reflected. For the high-frequency components, the larger the second attenuation speed value, the smaller alpha_{x,y} is and the larger the target attenuation factor LS_{x,y} is. That is, when attenuating from the pixel points on the right side of the first attenuation region toward the pixel points on the left side, the target attenuation factors increase in sequence and the rate of change of the target attenuation factor is large, so the high-frequency component is attenuated rapidly; the high-frequency characteristics of the first stitching region are retained, and although they change rapidly, the visual experience is not affected. In summary, the first attenuation speed value may be smaller and the second attenuation speed value larger; that is, the first attenuation speed value corresponding to the first attenuation region E1 may be smaller than the second attenuation speed value corresponding to the first attenuation region E2.
In the above embodiment, referring to step 404 and step 405, the third dimension of the second attenuation region E3 may be greater than the fourth dimension of the second attenuation region E4, and the third attenuation speed value corresponding to the second attenuation region E3 is less than or equal to the fourth attenuation speed value corresponding to the second attenuation region E4. Alternatively, the third dimension of the second attenuation region E3 may be equal to the fourth dimension of the second attenuation region E4, and the third attenuation speed value corresponding to the second attenuation region E3 is smaller than the fourth attenuation speed value corresponding to the second attenuation region E4.
Similarly, the reason for adopting this design is as follows: for the low-frequency component of the second stitching region, when attenuating from the pixel points on the left side of the second attenuation region toward the pixel points on the right side, the target attenuation factors increase in sequence, but the rate of change of the target attenuation factor is small, so the low-frequency component is attenuated slowly, the pixel points transition smoothly, and the low-frequency characteristics of the second stitching region are better reflected. For the high-frequency component of the second stitching region, when attenuating from the pixel points on the left side of the second attenuation region toward the pixel points on the right side, the target attenuation factors increase in sequence and the rate of change of the target attenuation factor is large, so the high-frequency component is attenuated rapidly; the high-frequency characteristics of the second stitching region are retained, and although they change rapidly, the visual experience is not affected.
In one possible embodiment, for each attribute value (e.g., luminance or chrominance), the first size of the first attenuation region E1 may be equal to the third size of the second attenuation region E3, and the first attenuation speed value corresponding to the first attenuation region E1 may be equal to the third attenuation speed value corresponding to the second attenuation region E3. Likewise, the second size of the first attenuation region E2 may be equal to the fourth size of the second attenuation region E4, and the second attenuation speed value corresponding to the first attenuation region E2 may be equal to the fourth attenuation speed value corresponding to the second attenuation region E4.
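Putting these relationships together, one illustrative parameter configuration might look like the sketch below; the concrete numbers are assumptions for illustration, not values given in this application:

```python
# Low-frequency attenuation regions E1/E3: large size, small attenuation speed
# value, so the base layer decays slowly and transitions smoothly.
LOW_SIZE, LOW_SPEED = 128, 1.0     # first/third size and speed values (E1 == E3)
# High-frequency attenuation regions E2/E4: small size, large attenuation speed
# value, so the detail layer decays quickly and keeps its high-frequency character.
HIGH_SIZE, HIGH_SPEED = 16, 4.0    # second/fourth size and speed values (E2 == E4)

# Constraints described above: either a strictly larger size with a speed value
# that is not larger, or an equal size with a strictly smaller speed value.
assert (LOW_SIZE > HIGH_SIZE and LOW_SPEED <= HIGH_SPEED) or \
       (LOW_SIZE == HIGH_SIZE and LOW_SPEED < HIGH_SPEED)
```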
In step 406, a low-frequency stitched image is obtained based on the attenuated first low-frequency component and the attenuated second low-frequency component, wherein one side of a stitching line of the low-frequency stitched image is the low-frequency component (i.e., the attenuated first low-frequency component) after the attenuation of the first attenuation region of the first low-frequency component, and the other side of the stitching line is the low-frequency component (i.e., the attenuated second low-frequency component) after the attenuation of the second attenuation region of the second low-frequency component. Step 406 is similar to step 104 and will not be repeated here.
Step 407, obtaining a high-frequency stitched image based on the attenuated first high-frequency component and the attenuated second high-frequency component, where one side of a stitching line of the high-frequency stitched image is the high-frequency component after attenuation processing (i.e., the attenuated first high-frequency component) of the first attenuation region of the first high-frequency component, and the other side of the stitching line is the high-frequency component after attenuation processing (i.e., the attenuated second high-frequency component) of the second attenuation region of the second high-frequency component. Step 407 is similar to step 104 and will not be repeated here.
Step 408, fusing the low-frequency stitched image and the high-frequency stitched image to obtain the target image.
For example, the attenuated first low-frequency component in the low-frequency stitched image and the attenuated first high-frequency component in the high-frequency stitched image are fused to obtain the attenuated first stitched region. And fusing the second low-frequency component subjected to the attenuation treatment in the low-frequency spliced image with the second high-frequency component subjected to the attenuation treatment in the high-frequency spliced image to obtain a second spliced region subjected to the attenuation treatment. And fusing the stitching lines in the low-frequency spliced image and the stitching lines in the high-frequency spliced image to obtain fused stitching lines. Thus, a target image can be obtained, wherein the target image can comprise a fused stitching line, one side of the stitching line is a first stitching region after attenuation treatment, the other side of the stitching line is a second stitching region after attenuation treatment, and the target image is a panoramic image or a high-resolution image (namely, an ultra-wide view angle image).
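As a minimal sketch of this fusion step, assuming the frequency division was performed by subtraction (as in the low-pass example above) so that recombination reduces to a pixel-wise addition:

```python
import numpy as np

def fuse(low_stitched, high_stitched):
    # Since high = original - low, adding the smoothed layers recombines them;
    # clip back to the displayable 8-bit range.
    fused = low_stitched.astype(np.float32) + high_stitched.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```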
In the above embodiment, the pixel value of the pixel point may be a luminance value, or the pixel value of the pixel point may be a chrominance value, or the pixel value of the pixel point may be a luminance value and a chrominance value.
According to the above technical solution, in the embodiments of the present application, the image is frequency-divided by filtering, and the attenuation factors of the different regions are smoothed separately for the high-frequency and low-frequency components. This realizes ghost-free smoothing of the stitched image: the stitched image can be smoothed even when the overlapping region is very small or absent, chromatic aberration and luminance differences can be eliminated, and stitching traces caused by mis-registration can be eliminated.
Based on the same application concept as the above method, another image processing method is provided in the embodiments of the present application, and the method may include: and acquiring a spliced image, and performing smoothing processing on at least one row of pixel points in the spliced image. The process of acquiring the stitched image may refer to step 101, and taking left and right stitching as an example, the smoothing process may be performed on each row of pixel points in the stitched image, and the specific smoothing process may include:
step S51, selecting one pixel point from a row of pixel points as a joint point.
Illustratively, the splice point may be: when the spliced image has an overlapping area, selecting a left boundary point or a right boundary point of the overlapping area as the splice point; and when the spliced image does not have an overlapping area, selecting a pixel point positioned in the middle area of the spliced image as the splice point.
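A small sketch of this selection rule; representing the overlap as a pair of column indices is an assumption of this illustration:

```python
def select_seam_point(row_width, overlap_cols=None):
    # With an overlapping area, either boundary of the overlap may serve as
    # the seam point; the left boundary point is picked here arbitrarily.
    if overlap_cols is not None:
        left_boundary, _right_boundary = overlap_cols
        return left_boundary
    # Without an overlapping area, use a pixel point in the middle region
    # of the stitched image.
    return row_width // 2
```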
Step S52, in response to the joint point, smoothing processing is performed on a plurality of other pixel points except the joint point based on the target attenuation factor, and a plurality of processed other pixel points are generated. The target attenuation factor is defined as a function operation obtained by taking a transverse coordinate difference value between other pixel points and the joint point and a maximum attenuation factor as parameters, so that when the maximum attenuation factor is smaller than 1, the variation between the pixel value after the smoothing processing of the plurality of other pixel points and the pixel value before the smoothing processing is performed increases along with the increase of the transverse coordinate difference value, and when the maximum attenuation factor is larger than 1, the variation between the pixel value after the smoothing processing of the plurality of other pixel points and the pixel value before the smoothing processing is performed is reduced along with the increase of the transverse coordinate difference value.
In the above embodiment, the plurality of other pixel points on the left side of the seam point correspond to the same left-side maximum attenuation factor, which is defined by a function operation taking a preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters. Likewise, the plurality of other pixel points on the right side of the seam point correspond to the same right-side maximum attenuation factor, which is defined by a function operation taking the same parameters: a preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point.
Illustratively, when the left side maximum attenuation factor is greater than 1, the right side maximum attenuation factor is less than 1. Alternatively, when the left side maximum attenuation factor is less than 1, the right side maximum attenuation factor is greater than 1.
In the above embodiment, when the left-side maximum attenuation factor is greater than 1 and the right-side maximum attenuation factor is less than 1, the plurality of target attenuation factors corresponding to the plurality of other pixel points on the left side of the joint point are defined to gradually decrease as the distance from the joint point is from small to large, so that the attenuation ratio of the pixel values of the plurality of other pixel points on the left side of the joint point gradually decreases to the left to 1; and a plurality of target attenuation factors corresponding to a plurality of other pixel points on the right side of the joint point are defined to gradually increase from small to large along with the distance from the joint point, so that the attenuation proportion of the pixel values of the plurality of other pixel points on the right side of the joint point gradually increases to the right to 1.
In the above embodiment, when the left-side maximum attenuation factor is smaller than 1 and the right-side maximum attenuation factor is larger than 1, the plurality of target attenuation factors corresponding to the plurality of other pixel points on the left side of the joint point are defined to be gradually increased as the distance from the joint point is from small to large, so that the attenuation ratio of the pixel values of the plurality of other pixel points on the left side of the joint point is gradually increased to the left to 1; and a plurality of target attenuation factors corresponding to a plurality of other pixel points on the right side of the joint point are defined to gradually decrease as the distance from the joint point is changed from small to large, so that the attenuation proportion of the pixel values of the plurality of other pixel points on the right side of the joint point gradually decreases to the right to 1.
In the above embodiments, the pixel values may be luminance values and/or chrominance values.
In one possible implementation, as shown in equation (5), for each pixel point in the first attenuation region on the left side of the seam point, smoothing processing may be performed on the pixel point based on its target attenuation factor LS_{x,y} to generate a processed pixel point; that is, smoothing processing may be performed on a plurality of pixel points in the first attenuation region on the left side of the seam point to generate a plurality of processed pixel points. Similarly, smoothing processing can be performed on a plurality of pixel points in the attenuation region on the right side of the seam point to generate a plurality of processed pixel points.
Referring to equation (4), the target attenuation factor LS_{x,y} of a pixel point is determined by the pixel point's alpha_{x,y} (i.e., the attenuation degree value) and its LDiv_{x,y} (i.e., the maximum attenuation factor). Referring to equations (1) and (2), alpha_{x,y} is determined by the difference (x - x0) between the lateral coordinates of the pixel point and the seam point. Therefore, the target attenuation factor LS_{x,y} of the pixel point is defined by a function operation taking the lateral coordinate difference (x - x0) between the pixel point and the seam point and the maximum attenuation factor LDiv_{x,y} as parameters.
For example, the plurality of pixel points on the left side of the seam point correspond to the same maximum attenuation factor, namely the left-side maximum attenuation factor, denoted LDiv_{x,y}; the plurality of pixel points on the right side of the seam point correspond to the same maximum attenuation factor, namely the right-side maximum attenuation factor, denoted RDiv_{x,y}.
For the left-side maximum attenuation factor LDiv_{x,y} corresponding to the plurality of pixel points on the left side of the seam point, see steps S121-S123: LDiv_{x,y} is given by the formula LDiv_{x,y} = (1 + T_{x0,y}) / 2, where T_{x0,y} is determined by a preset smoothing radius n, the sum of the pixel values of the plurality of pixel points on the left side of the seam point (P_{x0-1,y} + P_{x0-2,y} + … + P_{x0-n,y}), and the sum of the pixel values of the plurality of pixel points on the right side of the seam point (P_{x0+1,y} + P_{x0+2,y} + … + P_{x0+n,y}). Therefore, LDiv_{x,y} is defined by a function operation taking a preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters.
For the right-side maximum attenuation factor RDiv_{x,y} corresponding to the plurality of pixel points on the right side of the seam point, see steps S221-S223: RDiv_{x,y} is given by the formula RDiv_{x,y} = (1 + 1/T_{x0,y}) / 2, where T_{x0,y} is determined by a preset smoothing radius n, the sum of the pixel values of the plurality of pixel points on the left side of the seam point (P_{x0-1,y} + P_{x0-2,y} + … + P_{x0-n,y}), and the sum of the pixel values of the plurality of pixel points on the right side of the seam point (P_{x0+1,y} + P_{x0+2,y} + … + P_{x0+n,y}). Therefore, RDiv_{x,y} is likewise defined by a function operation taking a preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters.
Illustratively, from the above formulas LDiv_{x,y} = (1 + T_{x0,y}) / 2 and RDiv_{x,y} = (1 + 1/T_{x0,y}) / 2, it can be seen that if T_{x0,y} is greater than 1, then LDiv_{x,y} is greater than 1 and RDiv_{x,y} is less than 1; alternatively, if T_{x0,y} is less than 1, then LDiv_{x,y} is less than 1 and RDiv_{x,y} is greater than 1. In summary, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1, the right-side maximum attenuation factor RDiv_{x,y} is less than 1; alternatively, when the left-side maximum attenuation factor LDiv_{x,y} is less than 1, the right-side maximum attenuation factor RDiv_{x,y} is greater than 1.
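The sketch below computes the two maximum attenuation factors for one row. The exact form of T_{x0,y} is not spelled out above, so taking T_{x0,y} as the ratio of the right-side pixel sum to the left-side pixel sum within the smoothing radius is an assumption of this sketch (it has the convenient property that, at the seam point, both sides are scaled toward their common mean); boundary handling and the filtering of the scale factor are omitted for brevity:

```python
import numpy as np

def max_attenuation_factors(row, x0, n):
    # Sums of pixel values within the smoothing radius n on each side of the
    # seam point x0.
    s_left = float(row[x0 - n:x0].sum())
    s_right = float(row[x0 + 1:x0 + 1 + n].sum())
    t = s_right / max(s_left, 1e-6)          # assumed form of T_{x0,y}
    ldiv = (1.0 + t) / 2.0                   # LDiv_{x0,y} = (1 + T) / 2
    rdiv = (1.0 + 1.0 / max(t, 1e-6)) / 2.0  # RDiv_{x0,y} = (1 + 1/T) / 2
    return ldiv, rdiv
```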
Illustratively, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1, then, as shown in equation (4), LS_{x,y} is proportional to alpha_{x,y}; as shown in equations (1) and (2), alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), so LS_{x,y} is inversely proportional to the lateral coordinate difference (x - x0). The plurality of target attenuation factors corresponding to the plurality of pixel points on the left side of the seam point thus gradually decrease as the distance from the seam point grows from small to large, so that the attenuation proportion of the pixel values of the plurality of pixel points on the left side of the seam point gradually decreases to 1 toward the left; that is, LS_{x,y} gradually decreases to 1 toward the left, from large to small.
When the right-side maximum attenuation factor RDiv_{x,y} is less than 1, it follows from the formula RS_{x,y} = 1 - alpha_{x,y} + RDiv_{x,y} * alpha_{x,y} that RS_{x,y} is inversely proportional to alpha_{x,y}; since alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), RS_{x,y} is proportional to the lateral coordinate difference (x - x0). The plurality of target attenuation factors corresponding to the plurality of pixel points on the right side of the seam point thus gradually increase as the distance from the seam point grows from small to large, so that the attenuation proportion of the pixel values of the plurality of pixel points on the right side of the seam point gradually increases to 1 toward the right; that is, RS_{x,y} gradually increases to 1 toward the right, from small to large.
Illustratively, when the left-side maximum attenuation factor LDiv_{x,y} is less than 1, LS_{x,y} is inversely proportional to alpha_{x,y}, and alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), so LS_{x,y} is proportional to the lateral coordinate difference (x - x0). The plurality of target attenuation factors corresponding to the plurality of pixel points on the left side of the seam point thus gradually increase as the distance from the seam point grows from small to large, so that the attenuation proportion of the pixel values of the plurality of pixel points on the left side of the seam point gradually increases to 1 toward the left; that is, LS_{x,y} gradually increases to 1 toward the left, from small to large.
When the right-side maximum attenuation factor RDiv_{x,y} is greater than 1, RS_{x,y} is proportional to alpha_{x,y}, and alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), so RS_{x,y} is inversely proportional to the lateral coordinate difference (x - x0). On this basis, the plurality of target attenuation factors corresponding to the plurality of pixel points on the right side of the seam point gradually decrease as the distance from the seam point grows from small to large, so that the attenuation proportion of the pixel values of the plurality of pixel points on the right side of the seam point gradually decreases to 1 toward the right; that is, RS_{x,y} gradually decreases to 1 toward the right, from large to small.
As can be seen from the above, when the maximum attenuation factor is less than 1, the amount of change between a pixel point's value after smoothing and its value before smoothing increases as the lateral coordinate difference increases; when the maximum attenuation factor is greater than 1, this amount of change decreases as the lateral coordinate difference increases. For example, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1 and the right-side maximum attenuation factor RDiv_{x,y} is less than 1, the amount of change for the plurality of pixel points on the right side increases as the lateral coordinate difference increases, while the amount of change for the plurality of pixel points on the left side decreases as the lateral coordinate difference increases. When the left-side maximum attenuation factor LDiv_{x,y} is less than 1 and the right-side maximum attenuation factor RDiv_{x,y} is greater than 1, the amount of change for the plurality of pixel points on the left side increases as the lateral coordinate difference increases, while the amount of change for the plurality of pixel points on the right side decreases as the lateral coordinate difference increases.
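Tying the pieces together, the sketch below smooths one row around the seam point. The linear decay of alpha with distance is an assumed stand-in for equations (1) and (2): it is maximal at the seam, falls to zero at the edge of the attenuation region, and respects the stated monotonicities (proportional to the size, inversely related to the speed value and to the distance); the exact formulas of this application may differ:

```python
import numpy as np

def smooth_row(row, x0, ldiv, rdiv, size, speed):
    # ldiv / rdiv: left/right maximum attenuation factors, e.g. from the
    # max_attenuation_factors sketch above.
    out = row.astype(np.float32).copy()
    for x in range(out.shape[0]):
        if x == x0:
            continue
        dist = abs(x - x0)
        # Assumed attenuation degree value: 1 at the seam, 0 beyond the
        # attenuation region.
        alpha = max(0.0, 1.0 - speed * dist / size)
        div = ldiv if x < x0 else rdiv
        # Target factor LS/RS_{x,y} = 1 - alpha + Div * alpha: equals Div next
        # to the seam and tends to 1 (no change) away from it.
        out[x] *= 1.0 - alpha + div * alpha
    return out
```

Applying smooth_row to each row of the blurred stitched image and of the detail stitched image, and then adding the two processed images, corresponds to the fusion flow described below.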
In the above embodiment, the stitched image may include a blurred stitched image generated by filtering the original stitched image (i.e., a first stitched image, one side of whose stitching line is the first low-frequency component and the other side the second low-frequency component) and a detail stitched image generated by subtracting the blurred stitched image from the original stitched image (i.e., a second stitched image, one side of whose stitching line is the first high-frequency component and the other side the second high-frequency component); smoothing processing is performed on at least one row of pixel points in both the blurred stitched image and the detail stitched image. For example, the processed blurred stitched image and the processed detail stitched image may be added and fused to generate a fused stitched image.
Based on the same application concept as the above method, an image processing apparatus is provided in an embodiment of the present application, and referring to fig. 5, which is a schematic structural diagram of the image processing apparatus, the apparatus may include: the obtaining module 51 is configured to obtain a stitched image based on a first original image and a second original image, where the stitched image includes a stitching line, one side of the stitching line is a first stitching region determined based on the first original image, and the other side of the stitching line is a second stitching region determined based on the second original image; a processing module 52, configured to determine a first attenuation region from the first stitching region, and perform attenuation processing on the first attenuation region based on a size of the first attenuation region, a configured attenuation speed value corresponding to the first attenuation region, and a distance between a pixel point in the first attenuation region and the stitching line; determining a second attenuation region from the second splicing region, and carrying out attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line; the generating module 53 is configured to generate a target image based on the attenuated first stitching region and the attenuated second stitching region, where the target image includes the stitching line, one side of the stitching line is the attenuated first stitching region, and the other side of the stitching line is the attenuated second stitching region.
Illustratively, the processing module 52 is specifically configured to, when performing the attenuation processing on the first attenuation region, based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point in the first attenuation region and the stitching line:
for each pixel point in the first attenuation region, determining an attenuation degree value corresponding to the pixel point based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point and the stitching line; wherein the attenuation level value is proportional to the dimension, the attenuation level value is inversely proportional to the attenuation speed value, and the attenuation level value is inversely proportional to the distance;
and determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value, determining a target pixel value of the pixel point based on the target attenuation factor and an original pixel value of the pixel point, and generating a first splicing region after attenuation processing based on the target pixel value of each pixel point in the first attenuation region.
Illustratively, the first stitching region includes a first low frequency component and a first high frequency component, the second stitching region includes a second low frequency component and a second high frequency component, the stitched image includes a first stitched image and a second stitched image, one side of a stitching line of the first stitched image is the first low frequency component, the other side of the stitching line is the second low frequency component, one side of the stitching line of the second stitched image is the first high frequency component, and the other side of the stitching line is the second high frequency component; the generating module 53 is specifically configured to, when generating the target image based on the first stitched area after the attenuation process and the second stitched area after the attenuation process: generating a low-frequency spliced image based on the first spliced image, wherein one side of a spliced line of the low-frequency spliced image is a low-frequency component subjected to attenuation treatment on a first attenuation region of a first low-frequency component, and the other side of the spliced line is a low-frequency component subjected to attenuation treatment on a second attenuation region of a second low-frequency component;
Generating a high-frequency spliced image based on the second spliced image, wherein one side of a spliced line of the high-frequency spliced image is a high-frequency component subjected to attenuation treatment on a first attenuation region of a first high-frequency component, and the other side of the spliced line is a high-frequency component subjected to attenuation treatment on a second attenuation region of the second high-frequency component;
and fusing the low-frequency spliced image and the high-frequency spliced image to obtain the target image.
Based on the same application concept as the above method, an image processing apparatus is proposed in an embodiment of the present application, and as shown in fig. 6, the image processing apparatus may include: a processor 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the processor 61; the processor 61 is configured to execute machine executable instructions to implement the following steps:
acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
Determining a first attenuation region from the first splicing region, and carrying out attenuation treatment on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line;
determining a second attenuation region from the second splicing region, and carrying out attenuation treatment on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line;
generating a target image based on the first splicing region after the attenuation treatment and the second splicing region after the attenuation treatment, wherein the target image comprises the splicing line, one side of the splicing line is the first splicing region after the attenuation treatment, and the other side of the splicing line is the second splicing region after the attenuation treatment.
Based on the same application concept as the above method, the embodiments of the present application further provide a machine-readable storage medium, where a number of computer instructions are stored, where the computer instructions can implement the image processing method disclosed in the above example of the present application when executed by a processor.
Wherein, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), or a similar storage medium, or a combination thereof.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (7)

1. An image processing method, the method comprising:
acquiring a spliced image obtained by splicing a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area, and the other side of the spliced line is a second spliced area;
Determining a first attenuation region adjacent to the splice line from the first splice region, and determining a second attenuation region adjacent to the splice line from the second splice region;
respectively carrying out attenuation processing on the first attenuation region and the second attenuation region to obtain a target image, wherein the attenuation processing comprises the following steps:
determining, for each pixel point in the first attenuation region and the second attenuation region, an attenuation degree value corresponding to the pixel point based on the size of the region, the configured attenuation speed value corresponding to the region, and the distance between the pixel point in the region and the stitching line; wherein the attenuation level value is proportional to the dimension, the attenuation level value is inversely proportional to the attenuation speed value, and the attenuation level value is inversely proportional to the distance;
determining a first pixel point with the same ordinate as the pixel point from the stitching line under the condition that the stitching line is the stitching line in the vertical direction, determining a maximum attenuation factor corresponding to the pixel point based on a target scale factor of the first pixel point, determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value and the maximum attenuation factor, and determining a target pixel value of the pixel point based on the target attenuation factor and an original pixel value of the pixel point;
The target scale factor of each first pixel point on the piece of stitching line is determined based on the following mode: and respectively determining n pixel points which are the same as the ordinate of the first pixel point from the left side and the right side of the stitching line, determining an initial scale factor of the first pixel point based on the pixel values of the n pixel points on the left side and the pixel values of the n pixel points on the right side, and filtering the initial scale factor to obtain a target scale factor corresponding to the first pixel point, wherein n is a positive integer and smaller than the abscissa of the first pixel point.
2. The method of claim 1, wherein
the first stitching region comprises a first low-frequency component and a first high-frequency component, the second stitching region comprises a second low-frequency component and a second high-frequency component, the stitched image comprises a first stitched image and a second stitched image, one side of a stitching line of the first stitched image is the first low-frequency component, the other side of the stitching line is the second low-frequency component, one side of the stitching line of the second stitched image is the first high-frequency component, and the other side of the stitching line is the second high-frequency component; the generating a target image based on the first splicing region after the attenuation processing and the second splicing region after the attenuation processing includes:
Generating a low-frequency spliced image based on the first spliced image, wherein one side of a spliced line of the low-frequency spliced image is a low-frequency component subjected to attenuation treatment on a first attenuation region of a first low-frequency component, and the other side of the spliced line is a low-frequency component subjected to attenuation treatment on a second attenuation region of a second low-frequency component;
generating a high-frequency spliced image based on the second spliced image, wherein one side of a spliced line of the high-frequency spliced image is a high-frequency component subjected to attenuation treatment on a first attenuation region of a first high-frequency component, and the other side of the spliced line is a high-frequency component subjected to attenuation treatment on a second attenuation region of the second high-frequency component;
and fusing the low-frequency spliced image and the high-frequency spliced image to obtain the target image.
3. The method according to any one of claims 1-2, wherein,
when the first original image and the second original image do not have an overlapping area, taking one boundary line positioned in the middle area in the spliced image as the splicing line, wherein the first spliced area is the first original image, and the second spliced area is the second original image;
when the first original image and the second original image have an overlapping area, taking one boundary line positioned in the overlapping area in the spliced image as the splicing line, wherein the first stitching region is the first original image and the second stitching region is a part of the second original image; or the first stitching region is a part of the first original image and the second stitching region is the second original image; or the first stitching region is a part of the first original image and the second stitching region is a part of the second original image;
The method for acquiring the stitching line in the stitched image comprises the following steps:
and determining the pixel value of each pixel point in the stitching line based on the pixel values of N pixel points adjacent to the stitching line in the first stitching region and the pixel values of N pixel points adjacent to the stitching line in the second stitching region, and determining the stitching line based on the pixel value of each pixel point in the stitching line.
4. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a spliced image obtained by splicing the first original image and the second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area, and the other side of the spliced line is a second spliced area;
a processing module configured to determine a first attenuation region adjacent to the stitching line from the first stitching region and a second attenuation region adjacent to the stitching line from the second stitching region; respectively carrying out attenuation processing on the first attenuation region and the second attenuation region to obtain a target image, wherein the attenuation processing comprises the following steps:
determining, for each pixel point in the first attenuation region and the second attenuation region, an attenuation degree value corresponding to the pixel point based on the size of the region, the configured attenuation speed value corresponding to the region, and the distance between the pixel point in the region and the stitching line; wherein the attenuation level value is proportional to the dimension, the attenuation level value is inversely proportional to the attenuation speed value, and the attenuation level value is inversely proportional to the distance;
Determining a first pixel point with the same ordinate as the pixel point from the stitching line under the condition that the stitching line is the stitching line in the vertical direction, determining a maximum attenuation factor corresponding to the pixel point based on a target scale factor of the first pixel point, determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value and the maximum attenuation factor, and determining a target pixel value of the pixel point based on the target attenuation factor and an original pixel value of the pixel point;
the target scale factor of each first pixel point on the piece of stitching line is determined based on the following mode: and respectively determining n pixel points which are the same as the ordinate of the first pixel point from the left side and the right side of the stitching line, determining an initial scale factor of the first pixel point based on the pixel values of the n pixel points on the left side and the pixel values of the n pixel points on the right side, and filtering the initial scale factor to obtain a target scale factor corresponding to the first pixel point, wherein n is a positive integer and smaller than the abscissa of the first pixel point.
5. An image processing method, comprising:
acquiring a spliced image;
and executing smoothing processing on at least one row of pixel points in the spliced image, wherein the smoothing processing comprises the following steps:
Selecting one pixel point from the row of pixel points as a joint point;
in response to the joint point, performing smoothing processing on a plurality of other pixel points except the joint point based on a target attenuation factor, and generating a plurality of processed other pixel points; the target attenuation factor is defined as a function operation obtained by taking a transverse coordinate difference value and a maximum attenuation factor of the other pixel points and the seam point as parameter values, so that when the maximum attenuation factor is smaller than 1, the variation between the pixel value of the plurality of other pixel points after smoothing and the pixel value before smoothing increases along with the increase of the transverse coordinate difference value, and when the maximum attenuation factor is larger than 1, the variation between the pixel value of the plurality of other pixel points after smoothing and the pixel value before smoothing decreases along with the increase of the transverse coordinate difference value; wherein the pixel value is a luminance value and/or a chrominance value.
6. The method of claim 5, wherein the splice point is:
when the spliced image has an overlapping area, selecting a left boundary point or a right boundary point of the overlapping area as the splice point; and
And when the spliced image does not have an overlapping area, selecting a pixel point positioned in the middle area of the spliced image as the splice point.
7. The method of claim 5, wherein
a plurality of other pixel points on the left side of the seam point correspond to the same left-side maximum attenuation factor, and the left-side maximum attenuation factor is defined by a function operation taking a preset smoothing radius, the sum of pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of pixel values of the plurality of other pixel points on the right side of the seam point as parameters;
a plurality of other pixel points on the right side of the seam point correspond to the same right-side maximum attenuation factor, and the right-side maximum attenuation factor is defined by a function operation taking a preset smoothing radius, the sum of pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of pixel values of the plurality of other pixel points on the right side of the seam point as parameters;
wherein the right side maximum attenuation factor is less than 1 when the left side maximum attenuation factor is greater than 1, and greater than 1 when the left side maximum attenuation factor is less than 1.