CN111815547A - Image processing method and device, electronic device and storage medium


Info

Publication number: CN111815547A
Application number: CN202010617432.2A
Authority: CN (China)
Prior art keywords: image, pixel, pixel point, fused
Other languages: Chinese (zh)
Inventor: 王东
Current assignee: Shenzhen TetrasAI Technology Co Ltd
Original assignee: Shenzhen TetrasAI Technology Co Ltd
Application filed by Shenzhen TetrasAI Technology Co Ltd
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring at least two images to be fused, wherein the at least two images to be fused comprise a first image to be fused and a second image to be fused, and both the first image to be fused and the second image to be fused comprise first type pixel points; performing downsampling processing on the first image to be fused to obtain a first image, and performing downsampling processing on the second image to be fused to obtain a second image; and performing fusion processing on the first image and the second image to obtain a third image.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In the field of image processing, image quality is positively correlated with the amount of information an image carries; by performing fusion processing on at least two images (for example, fusion-based noise reduction), the information in those images can be combined to improve image quality.
Since an image in the RAW format has not undergone any processing, it carries more accurate and richer information than an image obtained by processing it. Therefore, to improve the fusion effect, current methods perform fusion processing directly on RAW images. However, the fusion effect of these methods is still poor.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a storage medium.
In a first aspect, an image processing method is provided, the method comprising:
acquiring at least two images to be fused, wherein the at least two images to be fused comprise: a first image to be fused and a second image to be fused, and both the first image to be fused and the second image to be fused comprise first type pixel points;
performing downsampling processing on the first image to be fused to obtain a first image, and performing downsampling processing on the second image to be fused to obtain a second image, wherein the first image and the second image are both continuous images, the first image and the second image both comprise the first type of pixel points, the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than a first threshold, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than the first threshold;
and performing fusion processing on the first image and the second image to obtain a third image.
In this aspect, since the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than 0.25, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than 0.25, performing fusion processing on the first image and the second image can improve the fusion effect of the first channel of the first image to be fused and the first channel of the second image to be fused.
With reference to any embodiment of the present application, the performing downsampling on the first image to be fused to obtain a first image, and performing downsampling on the second image to be fused to obtain a second image includes:
rotating the first image to be fused by a first angle to obtain a fourth image, and rotating the second image to be fused by a second angle to obtain a fifth image, wherein the first angle and the second angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a first pixel coordinate system by a factor of n to obtain a second pixel coordinate system, and magnifying the coordinate axis scale of a third pixel coordinate system by a factor of n to obtain a fourth pixel coordinate system, wherein the first pixel coordinate system is the pixel coordinate system of the fourth image, and the third pixel coordinate system is the pixel coordinate system of the fifth image;
determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel values of the pixel points in the fourth image, to obtain the first image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the fifth image to obtain the second image.
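To make the geometry of this rotate-and-rescale downsampling concrete, the following is a minimal Python/NumPy sketch for the quincunx of first-type (for example, green) pixel points of a Bayer frame. The function name, the `g_mask` argument marking first-type sites, and the direct scatter assignment are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def downsample_green_diagonal(raw, g_mask):
    """Sketch: map the checkerboard of first-type (G) sites onto a dense
    rectangular grid via a 45-degree rotation, then compact the scale."""
    ys, xs = np.nonzero(g_mask)
    # A 45-degree rotation scaled by sqrt(2) keeps coordinates integral:
    # (x, y) -> (u, v) = (x - y, x + y). On a checkerboard, all G sites
    # share one parity in u and in v, so halving compacts them densely.
    u, v = xs - ys, xs + ys
    u = (u - u.min()) // 2
    v = (v - v.min()) // 2
    dense = np.zeros((v.max() + 1, u.max() + 1), dtype=raw.dtype)
    dense[v, u] = raw[ys, xs]  # the support is a rotated square, so the
    return dense               # corners of `dense` remain zero
```

Every first-type sample of the input survives in `dense` (an h x w Bayer frame contributes h*w/2 samples), which is consistent with the resolution ratio staying above the first threshold discussed above rather than dropping to the 0.25 of plain 2x2 binning.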
In combination with any embodiment of the present application, the first type of pixel belongs to a first channel, the first image to be fused further includes a second channel different from the first channel, and the method further includes:
extracting the second channel in the first image to be fused to obtain a sixth image;
performing upsampling processing on the third image to obtain a seventh image, wherein the size of the seventh image is the same as that of the first image to be fused;
and taking the sixth image and the seventh image each as one channel, and combining the sixth image and the seventh image to obtain an eighth image.
With reference to any embodiment of the present application, the performing upsampling processing on the third image to obtain a seventh image includes:
rotating the third image by a third angle to obtain a ninth image, wherein the third angle is coterminal with a fourth angle (the two angles have the same terminal side), and the fourth angle is the negative of the first angle;
reducing the coordinate axis scale of a fifth pixel coordinate system by a factor of n to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the ninth image;
and determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the ninth image to obtain the seventh image.
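A matching sketch of this upsampling step, under the same assumptions as the downsampling sketch above: rotating back by an angle coterminal with the negative of the first angle and shrinking the coordinate axis scale by the same factor amounts to scattering the fused dense grid back onto the original first-type sites.

```python
import numpy as np

def upsample_green_diagonal(dense, g_mask):
    """Sketch: inverse of the diagonal re-gridding shown earlier."""
    out = np.zeros(g_mask.shape, dtype=dense.dtype)
    ys, xs = np.nonzero(g_mask)
    u, v = xs - ys, xs + ys
    out[ys, xs] = dense[(v - v.min()) // 2, (u - u.min()) // 2]
    # Non-G sites stay zero here; producing the full-size seventh image
    # would still require filling them (e.g., by interpolation), which
    # this sketch omits.
    return out
```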
With reference to any embodiment of the present application, before the taking the sixth image and the seventh image each as one channel and combining them to obtain an eighth image, the method further includes:
extracting the second channel in the second image to be fused to obtain a tenth image;
performing fusion processing on the sixth image and the tenth image to obtain an eleventh image;
the taking the sixth image and the seventh image each as one channel and combining the sixth image and the seventh image to obtain an eighth image includes:
taking the seventh image and the eleventh image each as one channel, and combining the seventh image and the eleventh image to obtain the eighth image.
With reference to any embodiment of the present application, the first image to be fused further includes a third channel different from the first channel;
the ratio of the number of second-type pixel points to the number of third-type pixel points is equal to the ratio of the number of fourth-type pixel points to the number of fifth-type pixel points, wherein the second-type pixel points comprise the first type pixel points in the first image to be fused, the third-type pixel points comprise the pixel points belonging to the third channel in the first image to be fused, the fourth-type pixel points comprise the first type pixel points in the second image to be fused, and the fifth-type pixel points comprise the pixel points belonging to the third channel in the second image to be fused.
With reference to any embodiment of the present application, before the fusing the first image and the second image to obtain a third image, the method further includes:
aligning the first image with the second image to obtain a first aligned image;
the fusing the first image and the second image to obtain a third image, including:
and performing fusion processing on the second image and the first aligned image to obtain the third image.
With reference to any embodiment of the present application, the first image to be fused includes: a first pixel point, a second pixel point, a third pixel point, and a fourth pixel point, and the second image to be fused includes: a fifth pixel point, a sixth pixel point, a seventh pixel point, and an eighth pixel point;
the coordinates of the first pixel point are (i, j), the coordinates of the second pixel point are (i+1, j), the coordinates of the third pixel point are (i, j+1), and the coordinates of the fourth pixel point are (i+1, j+1); the coordinates of the fifth pixel point are (i, j), the coordinates of the sixth pixel point are (i+1, j), the coordinates of the seventh pixel point are (i, j+1), and the coordinates of the eighth pixel point are (i+1, j+1), wherein i and j are both positive integers;
in a case where the first pixel point and the fifth pixel point are both first type pixel points, the second, third, sixth, and seventh pixel points are not first type pixel points, and the fourth and eighth pixel points are both first type pixel points; in a case where the first pixel point and the fifth pixel point are both not first type pixel points, the second, third, sixth, and seventh pixel points are all first type pixel points, and the fourth and eighth pixel points are not first type pixel points; or,
in a case where the first pixel point is a first type pixel point and the fifth pixel point is not, the second, third, and eighth pixel points are not first type pixel points, and the fourth, sixth, and seventh pixel points are all first type pixel points; in a case where the first pixel point is not a first type pixel point and the fifth pixel point is, the second, third, and eighth pixel points are all first type pixel points, and the fourth, sixth, and seventh pixel points are not first type pixel points.
In combination with any embodiment of the present application, the pixel points in the first image to be fused and the pixel points in the second image to be fused are both arranged in a Bayer array.
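For reference, the sketch below shows one way to describe such a Bayer array in code. The patent does not fix which 2x2 phase (BGGR, RGGB, and so on) is used, so the phases here are assumptions; the `g_mask` this produces is what the earlier sketches take as input, since the first-type (G) sites of any Bayer phase form a checkerboard.

```python
import numpy as np

def bayer_channel_map(h, w, phase='BGGR'):
    """Channel letter at each site of an h x w Bayer array for a given
    2x2 phase; G always occupies a quincunx (checkerboard) of sites."""
    cell = {'BGGR': [['B', 'G'], ['G', 'R']],
            'RGGB': [['R', 'G'], ['G', 'B']]}[phase]
    m = np.empty((h, w), dtype='<U1')
    for dy in (0, 1):
        for dx in (0, 1):
            m[dy::2, dx::2] = cell[dy][dx]
    return m

g_mask = bayer_channel_map(4, 4) == 'G'  # G sites of a 4 x 4 frame
```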
In a second aspect, there is provided an image processing method, the method comprising:
acquiring at least two images to be fused, wherein the at least two images to be fused comprise: a third image to be fused and a fourth image to be fused, and both the third image to be fused and the fourth image to be fused comprise sixth type pixel points;
extracting a fourth channel in the third image to be fused to obtain a twelfth image, and extracting the fourth channel in the fourth image to be fused to obtain a thirteenth image, wherein the sixth type of pixel points belong to the fourth channel;
performing downsampling processing on the twelfth image to obtain a fourteenth image, and performing the downsampling processing on the thirteenth image to obtain a fifteenth image, wherein the fourteenth image and the fifteenth image are both continuous images, the fourteenth image and the fifteenth image both include the sixth type of pixel points, a ratio of a resolution of the fourteenth image to a resolution of the third image to be fused is greater than a second threshold, and a ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than the second threshold;
and performing fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
In this aspect, since the ratio of the resolution of the fourteenth image to the resolution of the third image to be fused is greater than 0.25, and the ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than 0.25, performing fusion processing on the fourteenth image and the fifteenth image can improve the fusion effect of the fourth channel of the third image to be fused and the fourth channel of the fourth image to be fused.
With reference to any embodiment of the present application, the downsampling the twelfth image to obtain a fourteenth image, and the downsampling the thirteenth image to obtain a fifteenth image include:
rotating the twelfth image by a fifth angle to obtain a seventeenth image, and rotating the thirteenth image by a sixth angle to obtain an eighteenth image, wherein the fifth angle and the sixth angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system, and magnifying the coordinate axis scale of a ninth pixel coordinate system by m times to obtain a tenth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the seventeenth image, and the ninth pixel coordinate system is the pixel coordinate system of the eighteenth image;
determining the pixel value of each pixel point in the eighth pixel coordinate system according to the pixel values of the pixel points in the seventeenth image, to obtain the fourteenth image;
and determining the pixel value of each pixel point in the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain the fifteenth image.
With reference to any embodiment of the present application, the downsampling the twelfth image to obtain a fourteenth image, and the downsampling the thirteenth image to obtain a fifteenth image include:
constructing a nineteenth image and a twentieth image, wherein the nineteenth image contains the sixth type of pixel points in the third image to be fused, and the twentieth image contains the sixth type of pixel points in the fourth image to be fused;
reducing the pixel values in the nineteenth image by a factor of s to obtain the fourteenth image;
and reducing the pixel values in the twentieth image by a factor of s to obtain the fifteenth image.
With reference to any embodiment of the present application, a diagonal line of the third image to be fused includes a first line segment, and a diagonal line of the fourth image to be fused includes a second line segment;
the constructing a nineteenth image and a twentieth image includes:
arranging the at least one seventh-type pixel point whose centers belong to the same first diagonal line into one row of pixel points of an image in ascending order of the abscissa, to construct a twenty-first image, wherein the seventh-type pixel points comprise the sixth type pixel points in the third image to be fused, and the first diagonal lines comprise: the straight line passing through the first line segment, and the straight lines parallel to the first line segment;
arranging the at least one eighth-type pixel point whose centers belong to the same second diagonal line into one row of pixel points of an image in ascending order of the abscissa, to construct a twenty-second image, wherein the eighth-type pixel points comprise the sixth type pixel points in the fourth image to be fused, and the second diagonal lines comprise: the straight line passing through the second line segment, and the straight lines parallel to the second line segment;
and sorting the rows in the twenty-first image to obtain the nineteenth image, and sorting the rows in the twenty-second image to obtain the twentieth image; or,
arranging the at least one seventh-type pixel point whose centers belong to the same first diagonal line into one column of pixel points of an image in ascending order of the abscissa, to construct a twenty-third image, wherein the seventh-type pixel points comprise the sixth type pixel points in the third image to be fused, and the first diagonal lines comprise: the straight line passing through the first line segment, and the straight lines parallel to the first line segment;
arranging the at least one eighth-type pixel point whose centers belong to the same second diagonal line into one column of pixel points of an image in ascending order of the abscissa, to construct a twenty-fourth image, wherein the eighth-type pixel points comprise the sixth type pixel points in the fourth image to be fused, and the second diagonal lines comprise: the straight line passing through the second line segment, and the straight lines parallel to the second line segment;
and sorting the columns in the twenty-third image to obtain the nineteenth image, and sorting the columns in the twenty-fourth image to obtain the twentieth image.
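The following sketch illustrates the row-wise variant of this construction (the twenty-first-image style): pixel points whose centers share a diagonal become one row, ordered by ascending abscissa. It again assumes a boolean `g_mask` marking the sixth-type sites; the column-wise variant follows symmetrically by exchanging the roles of rows and columns. All names are hypothetical.

```python
import numpy as np

def diagonals_to_rows(raw, g_mask):
    """Sketch: group marked pixels by 45-degree diagonal (constant x - y),
    sort each group by ascending abscissa, and keep the diagonals in one
    admissible order."""
    ys, xs = np.nonzero(g_mask)
    rows = {}
    for x, y in zip(xs, ys):
        rows.setdefault(int(x) - int(y), []).append((int(x), raw[y, x]))
    return [[value for _, value in sorted(rows[k])] for k in sorted(rows)]
```

Because the central diagonals of a rectangular frame carry more pixel points than the corner diagonals, the constructed rows have unequal lengths; the sorting steps described next put them into the claimed order.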
With reference to any embodiment of the present application, the sorting the rows in the twenty-first image to obtain the nineteenth image and the sorting the rows in the twenty-second image to obtain the twentieth image includes:
determining a first mean value of the ordinates of the pixel points in each row of the twenty-first image, and obtaining a first index according to the first mean value, wherein the first mean value is positively or negatively correlated with the first index;
arranging the rows in the twenty-first image in descending order of the first index to obtain the nineteenth image;
determining a second mean value of the ordinates of the pixel points in each row of the twenty-second image, and obtaining a second index according to the second mean value, wherein the second mean value is positively or negatively correlated with the second index;
and arranging the rows in the twenty-second image in descending order of the second index to obtain the twentieth image.
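As a small sketch of this index-based ordering, taking each row's index to be its mean ordinate itself (the positively correlated choice) and arranging the rows in descending order of that index; the input format is an assumption for illustration.

```python
def order_rows_by_mean_ordinate(rows, descending=True):
    """Sketch: rows is a list of (mean_ordinate, row_pixels) pairs; the
    index is the mean ordinate, and rows are arranged by that index."""
    return [row for _, row in
            sorted(rows, key=lambda pair: pair[0], reverse=descending)]
```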
In combination with any embodiment of the present application, in a case where the first mean value is positively correlated with the first index, the second mean value is positively correlated with the second index;
and in a case where the first mean value is negatively correlated with the first index, the second mean value is negatively correlated with the second index.
With reference to any embodiment of the present application, the diagonal line of the third image to be fused further includes a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further includes a fourth line segment different from the second line segment;
the sorting of the rows in the twenty-first image to obtain the nineteenth image and the sorting of the rows in the twenty-second image to obtain the twentieth image comprises:
arranging the rows in the twenty-first image in a first order to obtain the nineteenth image, wherein the first order is either descending order of the ordinates of first index pixel points or ascending order of the ordinates of the first index pixel points, and the first index pixel points comprise the pixel points whose centers belong to a first straight line; in a case where the third line segment passes through the centers of seventh-type pixel points, the first straight line is the straight line passing through the third line segment; in a case where the third line segment does not pass through the centers of seventh-type pixel points, the first straight line is, among the straight lines parallel to the third line segment and passing through the centers of seventh-type pixel points, the straight line closest to the third line segment;
arranging the rows in the twenty-second image in a second order to obtain the twentieth image, wherein the second order is either descending order of the ordinates of second index pixel points or ascending order of the ordinates of the second index pixel points, and the second index pixel points comprise the pixel points whose centers belong to a second straight line; in a case where the fourth line segment passes through the centers of eighth-type pixel points, the second straight line is the straight line passing through the fourth line segment; in a case where the fourth line segment does not pass through the centers of eighth-type pixel points, the second straight line is, among the straight lines parallel to the fourth line segment and passing through the centers of eighth-type pixel points, the straight line closest to the fourth line segment.
In combination with any embodiment of the present application, in a case where the first order is descending order of the ordinates of the first index pixel points, the second order is descending order of the ordinates of the second index pixel points;
and in a case where the first order is ascending order of the ordinates of the first index pixel points, the second order is ascending order of the ordinates of the second index pixel points.
With reference to any embodiment of the present application, the sorting the columns in the twenty-third image to obtain the nineteenth image and the sorting the columns in the twenty-fourth image to obtain the twentieth image includes:
determining a third mean value of the ordinates of the pixel points in each column of the twenty-third image, and obtaining a third index according to the third mean value, wherein the third mean value is positively or negatively correlated with the third index;
arranging the columns in the twenty-third image in descending order of the third index to obtain the nineteenth image;
determining a fourth mean value of the ordinates of the pixel points in each column of the twenty-fourth image, and obtaining a fourth index according to the fourth mean value, wherein the fourth mean value is positively or negatively correlated with the fourth index;
and arranging the columns in the twenty-fourth image in descending order of the fourth index to obtain the twentieth image.
In combination with any embodiment of the present application, in a case where the third mean value is positively correlated with the third index, the fourth mean value is positively correlated with the fourth index;
and in a case where the third mean value is negatively correlated with the third index, the fourth mean value is negatively correlated with the fourth index.
With reference to any embodiment of the present application, the diagonal line of the third image to be fused further includes a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further includes a fourth line segment different from the second line segment;
the sorting the columns in the twenty-third image to obtain the nineteenth image and the sorting the columns in the twenty-fourth image to obtain the twentieth image includes:
arranging the columns in the twenty-third image in a third order to obtain the nineteenth image, wherein the third order is either descending order of the ordinates of third index pixel points or ascending order of the ordinates of the third index pixel points, and the third index pixel points comprise the pixel points whose centers belong to a third straight line; in a case where the third line segment passes through the centers of seventh-type pixel points, the third straight line is the straight line passing through the third line segment; in a case where the third line segment does not pass through the centers of seventh-type pixel points, the third straight line is, among the straight lines parallel to the third line segment and passing through the centers of seventh-type pixel points, the straight line closest to the third line segment;
arranging the columns in the twenty-fourth image in a fourth order to obtain the twentieth image, wherein the fourth order is either descending order of the ordinates of fourth index pixel points or ascending order of the ordinates of the fourth index pixel points, and the fourth index pixel points comprise the pixel points whose centers belong to a fourth straight line; in a case where the fourth line segment passes through the centers of eighth-type pixel points, the fourth straight line is the straight line passing through the fourth line segment; in a case where the fourth line segment does not pass through the centers of eighth-type pixel points, the fourth straight line is, among the straight lines parallel to the fourth line segment and passing through the centers of eighth-type pixel points, the straight line closest to the fourth line segment.
In combination with any embodiment of the present application, in a case where the third order is descending order of the ordinates of the third index pixel points, the fourth order is descending order of the ordinates of the fourth index pixel points;
and in a case where the third order is ascending order of the ordinates of the third index pixel points, the fourth order is ascending order of the ordinates of the fourth index pixel points.
With reference to any embodiment of the present application, the third image to be fused further includes a fifth channel different from the fourth channel, and the method further includes:
extracting the fifth channel in the third image to be fused to obtain a twenty-fifth image;
performing upsampling processing on the sixteenth image to obtain a twenty-sixth image, wherein the size of the twenty-sixth image is the same as that of the third image to be fused;
and taking the twenty-fifth image and the twenty-sixth image each as one channel, and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image.
With reference to any one of the embodiments of the present application, the performing upsampling processing on the sixteenth image to obtain a twenty-sixth image includes:
rotating the sixteenth image by a seventh angle to obtain a twenty-eighth image, wherein the seventh angle is coterminal with an eighth angle (the two angles have the same terminal side), and the eighth angle is the negative of the fifth angle;
reducing the coordinate axis scale of an eleventh pixel coordinate system by a factor of m to obtain a twelfth pixel coordinate system, wherein the eleventh pixel coordinate system is the pixel coordinate system of the twenty-eighth image;
and determining the pixel value of each pixel point in the twelfth pixel coordinate system according to the pixel values of the pixel points in the twenty-eighth image, to obtain the twenty-sixth image.
With reference to any embodiment of the present application, before the taking the twenty-fifth image and the twenty-sixth image each as one channel and combining them to obtain a twenty-seventh image, the method further includes:
extracting the fifth channel in the fourth image to be fused to obtain a twenty-ninth image;
performing fusion processing on the twenty-fifth image and the twenty-ninth image to obtain a thirtieth image;
the taking the twenty-fifth image and the twenty-sixth image each as one channel and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image includes:
taking the twenty-sixth image and the thirtieth image each as one channel, and combining the twenty-sixth image and the thirtieth image to obtain the twenty-seventh image.
With reference to any embodiment of the present application, the third image to be fused further includes a sixth channel different from the fourth channel;
the ratio of the number of ninth-type pixel points to the number of tenth-type pixel points is equal to the ratio of the number of eleventh-type pixel points to the number of twelfth-type pixel points, wherein the ninth-type pixel points comprise the sixth type pixel points in the third image to be fused, the tenth-type pixel points comprise the pixel points belonging to the sixth channel in the third image to be fused, the eleventh-type pixel points comprise the sixth type pixel points in the fourth image to be fused, and the twelfth-type pixel points comprise the pixel points belonging to the sixth channel in the fourth image to be fused.
With reference to any embodiment of the present application, before the fusing the fourteenth image and the fifteenth image to obtain the sixteenth image, the method further includes:
aligning the fourteenth image with the fifteenth image, resulting in a second aligned image;
the fusion processing of the fourteenth image and the fifteenth image to obtain a sixteenth image includes:
and performing fusion processing on the fifteenth image and the second aligned image to obtain the sixteenth image.
In combination with any embodiment of the present application, the third image to be fused includes: a ninth pixel point, a tenth pixel point, an eleventh pixel point, and a twelfth pixel point, and the fourth image to be fused includes: a thirteenth pixel point, a fourteenth pixel point, a fifteenth pixel point, and a sixteenth pixel point;
the coordinates of the ninth pixel point are (p, q), the coordinates of the tenth pixel point are (p+1, q), the coordinates of the eleventh pixel point are (p, q+1), and the coordinates of the twelfth pixel point are (p+1, q+1); the coordinates of the thirteenth pixel point are (p, q), the coordinates of the fourteenth pixel point are (p+1, q), the coordinates of the fifteenth pixel point are (p, q+1), and the coordinates of the sixteenth pixel point are (p+1, q+1), wherein p and q are both positive integers;
in a case where the ninth pixel point and the thirteenth pixel point are both sixth type pixel points, the tenth, eleventh, fourteenth, and fifteenth pixel points are not sixth type pixel points, and the twelfth and sixteenth pixel points are both sixth type pixel points; in a case where the ninth pixel point and the thirteenth pixel point are both not sixth type pixel points, the tenth, eleventh, fourteenth, and fifteenth pixel points are all sixth type pixel points, and the twelfth and sixteenth pixel points are not sixth type pixel points; or,
in a case where the ninth pixel point is a sixth type pixel point and the thirteenth pixel point is not, the tenth, eleventh, and sixteenth pixel points are not sixth type pixel points, and the twelfth, fourteenth, and fifteenth pixel points are all sixth type pixel points; in a case where the ninth pixel point is not a sixth type pixel point and the thirteenth pixel point is, the tenth, eleventh, and sixteenth pixel points are all sixth type pixel points, and the twelfth, fourteenth, and fifteenth pixel points are not sixth type pixel points.
In combination with any embodiment of the present application, the pixel points in the third image to be fused and the pixel points in the fourth image to be fused are both arranged in a Bayer array.
In a third aspect, there is provided an image processing apparatus comprising:
a first acquiring unit, configured to acquire at least two images to be fused, wherein the at least two images to be fused comprise: a first image to be fused and a second image to be fused, and both the first image to be fused and the second image to be fused comprise first type pixel points;
a first processing unit, configured to perform downsampling processing on the first image to be fused to obtain a first image, and perform downsampling processing on the second image to be fused to obtain a second image, wherein the first image and the second image are both continuous images, the first image and the second image both comprise the first type of pixel points, the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than a first threshold, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than the first threshold;
and a second processing unit, configured to perform fusion processing on the first image and the second image to obtain a third image.
In a fourth aspect, there is provided another image processing apparatus including:
a second acquiring unit, configured to acquire at least two images to be fused, wherein the at least two images to be fused comprise: a third image to be fused and a fourth image to be fused, and both the third image to be fused and the fourth image to be fused comprise sixth type pixel points;
a second extraction unit, configured to extract a fourth channel in the third image to be fused to obtain a twelfth image, and extract the fourth channel in the fourth image to be fused to obtain a thirteenth image, wherein the sixth type pixel points belong to the fourth channel;
a fourth processing unit, configured to perform downsampling processing on the twelfth image to obtain a fourteenth image, and perform the downsampling processing on the thirteenth image to obtain a fifteenth image, wherein the fourteenth image and the fifteenth image are both continuous images, both the fourteenth image and the fifteenth image comprise the sixth type of pixel points, the ratio of the resolution of the fourteenth image to the resolution of the third image to be fused is greater than a second threshold, and the ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than the second threshold;
and a fifth processing unit, configured to perform fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
In a fifth aspect, a processor is provided, which is configured to perform the method of the first aspect and any one of the possible implementations thereof.
In a sixth aspect, an electronic device is provided, comprising: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.
In a seventh aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to the first aspect and any one of its possible implementations.
In an eighth aspect, there is provided a computer program product comprising a computer program or instructions which, when run on a computer, causes the computer to perform the method of the first aspect and any of its possible implementations.
In a ninth aspect, a processor is provided for performing the method of the second aspect and any one of its possible implementations.
In a tenth aspect, there is provided an electronic device comprising: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the second aspect and any one of its possible implementations.
In an eleventh aspect, there is provided a computer-readable storage medium having stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the second aspect and any one of its possible implementations.
In a twelfth aspect, a computer program product is provided, comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method of the second aspect and any possible implementation thereof.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the background art, the drawings used in the description of the embodiments or the background art are briefly introduced below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1a is an image to be fused according to an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of another image to be fused according to an embodiment of the present disclosure;
fig. 2a is an image to be fused according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of another image to be fused according to an embodiment of the present disclosure;
fig. 3a is a RAW image according to an embodiment of the present application;
fig. 3b is an image obtained by performing 0.5-fold down-sampling on the RAW image shown in fig. 3a according to an embodiment of the present application;
fig. 4 is a schematic diagram of a pixel coordinate system according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a first downsampling process performed on a first image to be fused according to an embodiment of the present disclosure;
FIG. 7 is a first image provided by an embodiment of the present application;
FIG. 8a is a schematic diagram of a diagonal array according to an embodiment of the present application;
FIG. 8b is a schematic diagram of another diagonal array provided in an embodiment of the present application;
FIG. 9a is a schematic diagram of another diagonal array provided in an embodiment of the present application;
FIG. 9b is a schematic diagram of another diagonal array provided in an embodiment of the present application;
fig. 10 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 11 is a first image to be fused according to an embodiment of the present disclosure;
FIG. 12 is a fourth image provided by an embodiment of the present application;
fig. 13 is a schematic diagram of a scale of an enlarged first pixel coordinate system according to an embodiment of the present disclosure;
fig. 14 is a first image obtained based on the coordinate system shown in fig. 13 according to an embodiment of the present disclosure;
fig. 15 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 16a is a schematic diagram of a first image to be fused according to an embodiment of the present application;
fig. 16b is a schematic diagram of a sixth image obtained by extracting a green channel from the first image to be fused shown in fig. 16a according to an embodiment of the present disclosure;
FIG. 17a is a schematic diagram of an image of a green color channel according to an embodiment of the present disclosure;
fig. 17b is a schematic diagram of an image obtained by upsampling the image shown in fig. 17a according to an embodiment of the present application;
fig. 18 is a schematic flowchart of another image processing method according to an embodiment of the present application;
FIG. 19a is a schematic diagram of a third image according to an embodiment of the present disclosure;
fig. 19b is a schematic diagram of a ninth image obtained by rotating the third image shown in fig. 19a according to an embodiment of the present application;
Fig. 19c is a schematic diagram of a seventh image obtained based on the ninth image shown in fig. 19b according to an embodiment of the present application;
fig. 20 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 21a is a schematic view of a third image to be fused according to an embodiment of the present disclosure;
fig. 21b is a schematic diagram of a twelfth image obtained by extracting a green channel from the third image to be fused shown in fig. 21a according to an embodiment of the present disclosure;
fig. 22 is a schematic diagram of performing a second downsampling process on a twelfth image according to an embodiment of the present application;
fig. 23 is a fourteenth image provided in the embodiment of the present application;
FIG. 24a is a schematic diagram of a diagonal array according to an embodiment of the present application;
FIG. 24b is a schematic view of another diagonal array provided by an embodiment of the present application;
FIG. 25a is a schematic view of another diagonal array provided in an embodiment of the present application;
FIG. 25b is a schematic view of another diagonal array provided by an embodiment of the present application;
fig. 26 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 27 is a twelfth image provided in an embodiment of the present application;
fig. 28 is a seventeenth image provided in an embodiment of the present application;
Fig. 29 is a schematic diagram of a scale of a seventh pixel coordinate system according to an embodiment of the present application;
fig. 30 is a fourteenth image obtained based on the pixel coordinate system shown in fig. 29 according to an embodiment of the present disclosure;
fig. 31 is a schematic flowchart of another image processing method according to an embodiment of the present application;
FIG. 32 is a schematic diagram of another third image to be fused according to an embodiment of the present application;
fig. 33 is a nineteenth image provided by an embodiment of the present application;
FIG. 34 is another nineteenth image provided by embodiments of the present application;
FIG. 35 is a nineteenth image provided in accordance with embodiments of the present application;
FIG. 36 is a first intermediate image provided by an embodiment of the present application;
FIG. 37 is another first intermediate image provided in an embodiment of the present application;
fig. 38a is another third image to be fused provided in the embodiment of the present application;
fig. 38b is a twenty-first image provided in an embodiment of the present application;
fig. 39a is a nineteenth image provided by the embodiment of the present application;
FIG. 39b is a nineteenth image provided in accordance with embodiments of the present application;
FIG. 40a is a schematic diagram of another third image to be fused according to an embodiment of the present application;
FIG. 40b is another twenty-third image provided by an embodiment of the present application;
Fig. 41a is a nineteenth image provided in an embodiment of the present application;
FIG. 41b is a nineteenth image provided in accordance with an embodiment of the present application;
fig. 42 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 43a is a schematic view of a third image to be fused according to an embodiment of the present disclosure;
fig. 43b is a schematic diagram of a twenty-fifth image obtained by extracting a green channel from the third image to be fused shown in fig. 43a according to an embodiment of the present application;
FIG. 44a is a schematic diagram of an image of a green color channel according to an embodiment of the present disclosure;
fig. 44b is a schematic diagram of an image obtained by upsampling the image shown in fig. 44a according to an embodiment of the present application;
fig. 45 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 46a is a schematic diagram of a twenty-fifth image provided in an embodiment of the present application;
FIG. 46b is a schematic diagram of a twenty-eighth image obtained by rotating the twenty-fifth image shown in FIG. 46a according to an embodiment of the present disclosure;
fig. 46c is a schematic diagram of a twenty-sixth image obtained based on the twenty-eighth image shown in fig. 46b according to an embodiment of the present application;
Fig. 47 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 48 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application;
fig. 49 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application;
fig. 50 is a schematic diagram of a hardware structure of another image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may each be singular or plural. The character "/" may indicate an "or" relationship between the objects before and after it. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b, and c may each be singular or plural. The character "/" may also represent division in a mathematical operation; for example, a/b means a divided by b, and 6/3 = 2.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Before proceeding with the following explanation, a term used throughout the embodiments of the present application is defined. In the embodiments of the present application, the pixel points that correspond to the same physical point in different images are called homonymous points (points with the same name). For example, pixel point A in Fig. 1a and pixel point C in Fig. 1b are homonymous points, and pixel point B in Fig. 1a and pixel point D in Fig. 1b are homonymous points.
In the field of image processing, image quality is positively correlated with the information carried by the image; by performing fusion processing on at least two images, the information in those images can be fused, increasing the amount of information carried and improving image quality. Specifically, fusing at least two images fuses the information carried by homonymous pixel points across those images, thereby improving image quality. Because a RAW image has not been processed, the information it carries is more accurate and richer than that of an image obtained by processing the RAW image. Performing image fusion processing on RAW images can therefore improve the fusion effect, so that the image obtained by the fusion processing is of higher quality. For this reason, the conventional technique obtains a fused image by performing image fusion processing on two RAW images.
Since human eyes have different sensitivities to different colors, in a case where a RAW image includes at least two color channels, in order to allow the human eyes to obtain better visual perception and more information by observing the RAW image, the color channel to which the human eyes are most sensitive generally includes the largest number of pixel points in the RAW image. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or to blue; therefore, in a case where the RAW image includes the three channels R (red), G (green), and B (blue), the G channel includes the largest number of pixel points. For another example, the sensitivity of the human eye to yellow is higher than its sensitivity to red or to blue; therefore, in a case where the RAW image includes the three channels Y (yellow), G, and B, the Y channel includes the largest number of pixel points.
Since the human eye is most sensitive to the color channel containing the largest number of pixel points among all color channels of the RAW image (hereinafter referred to as the sensitive channel), the fusion effect on the RAW image depends mainly on the fusion effect of the sensitive channel. Therefore, image fusion processing of at least two RAW images can be realized by performing image fusion processing on the pixel points of the sensitive channels in the at least two RAW images.
In a RAW image, the pixel points are usually arranged in a Bayer pattern; that is, the RAW image includes three channels of red, green, and blue. Because the pixel values of pixel points in different channels represent different meanings, directly fusing pixel points from different channels reduces the effect of the fusion processing. In the embodiments of the present application, the fusion effect characterizes the quality of the image obtained by fusion processing: a good fusion effect means the fused image is of high quality, and a poor fusion effect means the fused image is of low quality.
For example, assume that pixel point A11 in the image shown in Fig. 2a (hereinafter referred to as image 1) and pixel point B12 in the image shown in Fig. 2b (hereinafter referred to as image 2) are homonymous points. Since pixel point A11 belongs to the B channel and pixel point B12 belongs to the G channel, performing image fusion processing on image 1 and image 2 cannot determine, among the G channel pixel points in image 2, the homonymous point of pixel point A11, nor can it determine, among the B channel pixel points in image 1, the homonymous point of pixel point B12. As a result, it cannot be determined from image 1 that pixel point A11 is the homonymous point of pixel point B12, and it cannot be determined from image 2 that pixel point B12 is the homonymous point of pixel point A11, resulting in a poor fusion effect.
In order to solve the above problem, the conventional method performs downsampling processing on each RAW image so that all pixel points in the two downsampled images belong to the sensitive channel; image fusion processing can then be performed on the two downsampled images, realizing fusion of the sensitive-channel pixel points of the two RAW images.
Because downsampling reduces the resolution of the RAW image, performing image fusion processing on the downsampled images yields a poor fusion effect for the pixel points of the sensitive channel. Specifically, the larger the downsampling magnification, the better the fusion effect; conversely, the smaller the downsampling magnification, the worse the fusion effect.
In the embodiments of the present application, the downsampling magnification of a downsampling process is defined as: downsampling magnification = (length of the image after downsampling) / (length of the image before downsampling) = (width of the image after downsampling) / (width of the image before downsampling). For example, the size of the RAW image shown in Fig. 3a is 4 x 4; performing 0.5x downsampling on this image yields the 2 x 2 image shown in Fig. 3b. In the image shown in Fig. 3b, each pixel point includes two pixel points of the G channel, one pixel point of the B channel, and one pixel point of the R channel. For example: pixel point B11 includes pixel points A11, A12, A21, and A22; pixel point B12 includes pixel points A13, A14, A23, and A24; pixel point B21 includes pixel points A31, A32, A41, and A42; and pixel point B22 includes pixel points A33, A34, A43, and A44.
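A minimal sketch of this conventional 0.5x downsampling (the Fig. 3a to Fig. 3b grouping); the particular array re-grouping below is illustrative, not prescribed by the text.

```python
import numpy as np

def bin_half(raw):
    """Sketch: each 2x2 Bayer cell of the input becomes one element of the
    output carrying its four samples (two G, one R, one B)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)

raw = np.arange(16).reshape(4, 4)            # the 4 x 4 frame of Fig. 3a
half = bin_half(raw)                          # shape (2, 2, 2, 2)
assert (half[0, 0] == raw[0:2, 0:2]).all()    # B11 holds A11, A12, A21, A22
```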
Obviously, in the conventional method, the maximum value of the downsampling magnification is 0.5. That is, in the conventional image fusion method, the fusion effect is best obtained by performing downsampling processing with a downsampling magnification of 0.5 on an image to be fused. The embodiment of the application provides an image fusion method, which can improve the fusion effect of pixel points of sensitive channels in RAW images on the premise of realizing fusion processing of two RAW images.
The execution subject of the embodiment of the present application is an image processing apparatus, and optionally, the image processing apparatus may be one of the following: cell-phone, computer, server, panel computer.
Before proceeding with the following explanation, the pixel coordinate system used in the embodiments of the application is first defined. The pixel coordinate system is used to represent the position of a pixel point in an image: the abscissa represents the column number of the pixel point, and the ordinate represents the row number. For example, in the image shown in fig. 4, a pixel coordinate system XOY is constructed with the upper-left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the direction of the X axis, and the direction parallel to the columns of the image as the direction of the Y axis. The units of the abscissa and the ordinate are pixel points. For example, in fig. 4, the coordinates of pixel point A11 are (1, 1), the coordinates of pixel point A23 are (3, 2), the coordinates of pixel point A42 are (2, 4), and the coordinates of pixel point A34 are (4, 3).
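For illustration, a minimal Python sketch of this convention, assuming 1-indexed row and column numbers (the helper name pixel_coord is illustrative):

```python
def pixel_coord(row: int, col: int) -> tuple[int, int]:
    """Map a 1-indexed (row, column) position to pixel coordinates (x, y),
    where x is the column number and y is the row number."""
    return (col, row)

# The fig. 4 examples: A23 is in row 2, column 3 -> (3, 2);
# A42 is in row 4, column 2 -> (2, 4).
assert pixel_coord(2, 3) == (3, 2)
assert pixel_coord(4, 2) == (2, 4)
```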
The embodiments of the present application will be described below with reference to the drawings. Referring to fig. 5, fig. 5 is a schematic flowchart of an image fusion method according to an embodiment of the present disclosure.
501. Acquire at least two images to be fused, where the at least two images to be fused include a first image to be fused and a second image to be fused.
In the embodiments of the application, when the number of images to be fused is 2, the at least two images to be fused are the first image to be fused and the second image to be fused. When the number of images to be fused is greater than 2, the first image to be fused and the second image to be fused are two of the at least two images to be fused.
In the embodiments of the application, the first image to be fused and the second image to be fused are both RAW images. Because the human eye has different sensitivities to different colors, when a RAW image includes at least two color channels, the color channel to which the human eye is most sensitive usually contains the largest number of pixel points, so that observing the RAW image gives the human eye better visual perception and more information. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or blue; therefore, when a RAW image includes the three channels R, G, and B, the G channel contains the largest number of pixel points. For another example, the sensitivity of the human eye to yellow is higher than its sensitivity to red or blue; therefore, when a RAW image includes the three channels R, Y, and B, the Y channel contains the largest number of pixel points.
In the embodiment of the application, the number of the channels in the first image to be fused and the number of the channels in the second image to be fused are not less than 2, and the channel with the largest number of the pixels in the first image to be fused is the same as the channel with the largest number of the pixels in the second image to be fused. The channel with the largest number of pixel points in the first image to be fused is called a first channel, the pixel points belonging to the first channel are called first-type pixel points, and the first image to be fused and the second image to be fused both contain the first-type pixel points.
For example, suppose the first channel is the G channel. The first image to be fused includes pixel points a, b, c, and d, where pixel points a and c belong to the G channel. The second image to be fused includes pixel points e, f, g, and h, where pixel points e and g belong to the G channel. In this case, the first-type pixel points include pixel points a, c, e, and g.
It should be understood that, when the first image to be fused includes two channels with an equal number of pixel points in each, the first channel may be either of the channels. For example, if in the first image to be fused the ratio of the number of R-channel pixel points to the number of G-channel pixel points is 1:1, the first channel may be the R channel or the G channel.
In one implementation of acquiring at least two images to be fused, the image processing device receives at least two images to be fused input by a user through the input component. The above-mentioned input assembly includes: keyboard, mouse, touch screen, touch pad, audio input device, etc.
In another implementation manner of acquiring at least two images to be fused, the image processing device receives the at least two images to be fused sent by the first terminal. Optionally, the first terminal may be any one of the following: cell-phone, computer, panel computer, server, wearable equipment.
In another implementation manner of acquiring at least two images to be fused, the image processing device may acquire the at least two images to be fused through the imaging component. Optionally, the imaging component may be a camera.
502. Perform downsampling processing on the first image to be fused to obtain a first image, and perform downsampling processing on the second image to be fused to obtain a second image.
Before proceeding with the following explanation, continuous images are defined. In the embodiments of the application, a continuous image is an image in which all pixel points belong to the same channel; for convenience of description, images other than continuous images are hereinafter referred to as discontinuous images. For example, the first image to be fused shown in fig. 6 is a discontinuous image, and the first image shown in fig. 7 is a continuous image.
It should be understood that a continuous image may contain filling pixel points; for example, the continuous image shown in fig. 7 contains second filling pixel points. If the pixel points other than the filling pixel points in a continuous image are called channel pixel points, then no filling pixel point lies between any two adjacent channel pixel points in a continuous image.
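As an illustrative sketch of this definition, assuming the mosaic layout is described by a label map of channel names with a dedicated fill label (the name is_continuous and the label "F" are illustrative):

```python
import numpy as np

def is_continuous(cfa: np.ndarray, fill_label: str = "F") -> bool:
    """A continuous image: all pixel points other than filling pixel points
    belong to one and the same channel."""
    channels = np.unique(cfa[cfa != fill_label])
    return channels.size <= 1
```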
In the embodiment of the present application, the first image and the second image are both continuous images, and both the first image and the second image include first-type pixel points.
In the embodiments of the application, the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than a first threshold, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than the first threshold. Optionally, the first threshold is 0.25.
If the downsampling process performed on the first image to be fused and the downsampling process performed on the second image to be fused are referred to as a first downsampling process, the downsampling magnification of the first downsampling process is greater than 0.5.
In a possible implementation, the first image to be fused and the second image to be fused are both image matrices, and the pixel points in the image matrices are all squares. The first downsampling window of the first downsampling processing is also square; the center of the first downsampling window coincides with the center of a first-type pixel point, where the center of the first downsampling window is the intersection of its two diagonals and the center of a first-type pixel point is the intersection of that pixel point's two diagonals. The area of the first downsampling window is larger than that of the first-type pixel point, and the vertices of the first-type pixel point are located on the boundary of the first downsampling window.
The image processing apparatus divides the first image to be fused into at least one pixel point region through at least one first downsampling window. Each pixel point region is taken as one pixel point, and the pixel value of the pixel point corresponding to each region is determined from the pixel values within that region, thereby realizing the first downsampling processing of the first image to be fused. The first downsampling processing of the second image to be fused can be realized in the same way.
For example, the first image shown in fig. 7 can be obtained by performing the first downsampling processing on the first image to be fused shown in fig. 6. Assume that in the first image to be fused shown in fig. 6, the center of pixel point B11 is C1, the center of pixel point G12 is C2, the center of pixel point B13 is C3, the center of pixel point G14 is C4, the center of pixel point G21 is C5, the center of pixel point R22 is C6, the center of pixel point G23 is C7, the center of pixel point R24 is C8, the center of pixel point B31 is C9, the center of pixel point G32 is C10, the center of pixel point B33 is C11, the center of pixel point G34 is C12, the center of pixel point G41 is C13, the center of pixel point R42 is C14, the center of pixel point G43 is C15, and the center of pixel point R44 is C16.
The first downsampling window TC1C6C9 (hereinafter referred to as first downsampling window 1) has its center at C5; its area is larger than that of pixel point G21, and the vertices of pixel point G21 are located on its four edges. The first downsampling window C1AC3C6 (hereinafter first downsampling window 2) has its center at C2; its area is larger than that of pixel point G12, and the vertices of pixel point G12 are located on its four edges. The first downsampling window QC9C14O (hereinafter first downsampling window 3) has its center at C13; its area is larger than that of pixel point G41, and the vertices of pixel point G41 are located on its four edges. The first downsampling window C9C6C11C14 (hereinafter first downsampling window 4) has its center at C10; its area is larger than that of pixel point G32, and the vertices of pixel point G32 are located on its four edges. The first downsampling window C6C3C8C11 (hereinafter first downsampling window 5) has its center at C7; its area is larger than that of pixel point G23, and the vertices of pixel point G23 are located on its four edges. The first downsampling window C3DFC8 (hereinafter first downsampling window 6) has its center at C4; its area is larger than that of pixel point G14, and the vertices of pixel point G14 are located on its four edges. The first downsampling window C14C11C16L (hereinafter first downsampling window 7) has its center at C15; its area is larger than that of pixel point G43, and the vertices of pixel point G43 are located on its four edges. The first downsampling window C11C8IC16 (hereinafter first downsampling window 8) has its center at C12; its area is larger than that of pixel point G34, and the vertices of pixel point G34 are located on its four edges.
The pixel point region within first downsampling window 1 is taken as pixel point D12 in the first image, and the pixel value of D12 is determined from the pixel values within first downsampling window 1. Similarly, the region within first downsampling window 2 is taken as pixel point D13, the region within window 3 as pixel point D21, the region within window 4 as pixel point D22, the region within window 5 as pixel point D23, the region within window 6 as pixel point D24, the region within window 7 as pixel point D32, and the region within window 8 as pixel point D33, with the pixel value of each of these pixel points determined from the pixel values within the corresponding window. Optionally, the mean of the pixel values within each first downsampling window is taken as the pixel value of the corresponding pixel point; for example, the mean of the pixel values within first downsampling window 1 is taken as the pixel value of pixel point D12.
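For illustration, a minimal Python sketch of this optional mean, under the assumption that it is computed as the area-weighted average of the pixel squares covered by the window: given the window geometry above, the central first-type pixel point covers half of the window area, and each of the four edge-adjacent neighbors contributes a quarter pixel, that is, one eighth of the window (the name diamond_window_mean is illustrative):

```python
import numpy as np

def diamond_window_mean(img: np.ndarray, r: int, c: int) -> float:
    """Area-weighted mean over a 45-degree-rotated square window centred on
    pixel point (r, c): the centre pixel covers half the window area and
    each of the four edge-adjacent neighbours covers one quarter pixel
    (one eighth of the window). Out-of-image area is treated as 0,
    matching the first filling pixel regions."""
    h, w = img.shape
    total = 0.5 * img[r, c]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < h and 0 <= cc < w:
            total += 0.125 * img[rr, cc]
    return float(total)
```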
It should be understood that in fig. 6, the following pixel regions are all first filling pixel regions: triangle region ABW, triangle region DEC, triangle region FGE, triangle region IJH, triangle region LMK, triangle region PQN, triangle region RSQ, and triangle region UVT. The pixel values of the first filling pixel points are all a first value; optionally, the first value is 0. In fig. 7, the following pixel points are all second filling pixel points: D11, D14, D31, and D34. The pixel value of a second filling pixel point represents the degree of green brightness, that is, the second filling pixel points are G-channel pixel points, and the pixel values of the second filling pixel points are all a second value. Optionally, the second value is 0.
As can be seen from fig. 6, in the first image to be fused, a non-G-channel pixel point lies between any two adjacent G-channel pixel points; similarly, in the second image to be fused, a non-G-channel pixel point lies between any two adjacent G-channel pixel points. Because the information carried by G-channel pixel points differs from that carried by non-G-channel pixel points, performing image fusion processing directly on the first image to be fused and the second image to be fused reduces the fusion effect.
As can be seen from fig. 7, in the first image obtained by performing the first downsampling processing on the first image to be fused, all the pixel points are pixel points of the G channel except the second filling pixel point. In a similar way, in a second image obtained by performing first downsampling processing on a second image to be fused, except for second filling pixel points, all the pixel points are pixel points of a G channel. Because the second filling pixel point is the pixel point of the G channel, the first image and the second image can be subjected to image fusion processing, and the fusion effect can be improved.
503. Perform fusion processing on the first image and the second image to obtain a third image.
In a possible implementation, the image processing apparatus realizes the fusion processing of the first image and the second image by determining the pixel values of pixel points at the same positions in the first image and the second image. For example, the first image includes pixel points a, b, c, and d; the second image includes pixel points A, B, C, and D; and the third image includes pixel points e, f, g, and h. In the pixel coordinate system of the first image, the coordinates of pixel points a, b, c, and d are (1, 1), (1, 2), (2, 1), and (2, 2) respectively; in the pixel coordinate system of the second image, the coordinates of pixel points A, B, C, and D are (1, 1), (1, 2), (2, 1), and (2, 2) respectively; and in the pixel coordinate system of the third image, the coordinates of pixel points e, f, g, and h are (1, 1), (1, 2), (2, 1), and (2, 2) respectively. Suppose the pixel values of pixel points a, b, c, and d are p1, p2, p3, and p4, and the pixel values of pixel points A, B, C, and D are p5, p6, p7, and p8. Then the pixel value of pixel point e is (p1 + p5)/2, the pixel value of pixel point f is (p2 + p6)/2, the pixel value of pixel point g is (p3 + p7)/2, and the pixel value of pixel point h is (p4 + p8)/2.
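As an illustrative sketch of this same-position averaging, assuming both images are numpy arrays of equal size (the name fuse_mean is illustrative):

```python
import numpy as np

def fuse_mean(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Fuse two equally sized single-channel images by averaging the pixel
    values of the pixel points at the same coordinates."""
    assert img1.shape == img2.shape
    return (img1 + img2) / 2.0
```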
In the embodiments of the application, because the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than 0.25, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than 0.25, fusing the first image and the second image improves the fusion effect of the first channel of the first image to be fused and the first channel of the second image to be fused.
As an alternative embodiment, the channel of the first image to be fused is the same as the channel of the second image to be fused. For example, the first image to be fused contains R, G two channels, and the second image to be fused also contains R, G two channels. For another example, the first image to be fused includes R, G, B channels, and the second image to be fused also includes R, G, B channels. For another example, the first image to be fused includes R, Y (Y in this case means yellow) and B channels, and the second image to be fused also includes R, Y, B channels.
As an optional implementation, when the channels of the first image to be fused are the same as the channels of the second image to be fused, the first image to be fused and the second image to be fused each include a second channel different from the first channel. The first-type pixel points in the first image to be fused are called second-type pixel points, the pixel points belonging to the second channel in the first image to be fused are called third-type pixel points, the first-type pixel points in the second image to be fused are called fourth-type pixel points, and the pixel points belonging to the second channel in the second image to be fused are called fifth-type pixel points. The ratio of the number of second-type pixel points to the number of third-type pixel points is equal to the ratio of the number of fourth-type pixel points to the number of fifth-type pixel points.
For example, assume the first image to be fused includes the two channels R and G, where the first channel is the G channel and the second channel is the R channel. If, in the first image to be fused, the number of R-channel pixel points divided by the number of G-channel pixel points equals 1/2, then the ratio of the number of third-type pixel points to the number of second-type pixel points is 1/2; and in the second image to be fused, the number of R-channel pixel points divided by the number of G-channel pixel points equals 1/2, that is, the ratio of the number of fifth-type pixel points to the number of fourth-type pixel points is 1/2.
For another example, assume the first image to be fused includes the three channels R, G, and B, where the first channel is the G channel and the second channel is the R channel or the B channel. Suppose that, in the first image to be fused, the number of R-channel pixel points divided by the number of G-channel pixel points is 1/2, and the number of B-channel pixel points divided by the number of G-channel pixel points is 1/2. When the second channel is the R channel, the ratio of the number of third-type pixel points to the number of second-type pixel points is 1/2, and in the second image to be fused the number of R-channel pixel points divided by the number of G-channel pixel points is 1/2, that is, the ratio of the number of fifth-type pixel points to the number of fourth-type pixel points is 1/2. When the second channel is the B channel, the ratio of the number of third-type pixel points to the number of second-type pixel points is 1/2, and in the second image to be fused the number of B-channel pixel points divided by the number of G-channel pixel points is 1/2, that is, the ratio of the number of fifth-type pixel points to the number of fourth-type pixel points is 1/2.
For another example, assume the first image to be fused includes the three channels R, Y, and B, where the first channel is the Y channel and the second channel is the R channel or the B channel. Suppose that, in the first image to be fused, the number of R-channel pixel points divided by the number of Y-channel pixel points is 1/2, and the number of B-channel pixel points divided by the number of Y-channel pixel points is 1/4. When the second channel is the R channel, the ratio of the number of third-type pixel points to the number of second-type pixel points is 1/2, and in the second image to be fused the number of R-channel pixel points divided by the number of Y-channel pixel points is 1/2, that is, the ratio of the number of fifth-type pixel points to the number of fourth-type pixel points is 1/2. When the second channel is the B channel, the ratio of the number of third-type pixel points to the number of second-type pixel points is 1/4, and in the second image to be fused the number of B-channel pixel points divided by the number of Y-channel pixel points is 1/4, that is, the ratio of the number of fifth-type pixel points to the number of fourth-type pixel points is 1/4.
It should be understood that, when the first image to be fused and the second image to be fused each contain three or more channels, the ratios between the numbers of pixel points of the different channels in the first image to be fused are equal to the ratios between the numbers of pixel points of the corresponding channels in the second image to be fused. For example, assume the first image to be fused and the second image to be fused each contain the three channels R, G, and B, where the first channel is the G channel. In the first image to be fused, the numbers of pixel points of the R, G, and B channels are r1, g1, and b1 respectively; in the second image to be fused, they are r2, g2, and b2 respectively. Then r1 : g1 : b1 = r2 : g2 : b2.
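For illustration, a minimal Python sketch of such a ratio check, assuming the mosaic layout is described by a label map of channel names (channel_counts and the Bayer period below are illustrative):

```python
import numpy as np

def channel_counts(cfa: np.ndarray) -> dict:
    """Count the pixel points of each channel, given a label map that names
    the channel of every position in the mosaic."""
    labels, counts = np.unique(cfa, return_counts=True)
    return dict(zip(labels.tolist(), counts.tolist()))

bayer = np.array([["B", "G"], ["G", "R"]])       # one Bayer period
print(channel_counts(np.tile(bayer, (2, 2))))    # {'B': 4, 'G': 8, 'R': 4}
# i.e. r : g : b = 1 : 2 : 1 for any mosaic tiled from this period.
```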
In the embodiments of the application, during the first downsampling processing of the first image to be fused, the image processing apparatus determines the pixel values in the first image from the pixel values within each first downsampling window. Because each first downsampling window contains not only second-type pixel points but also pixel points other than second-type pixel points (hereinafter referred to as non-second-type pixel points), the pixel values in the first image are determined from the pixel values of both the second-type and non-second-type pixel points. Therefore, performing the first downsampling processing on the first image to be fused suppresses the noise in the second-type pixel points, yielding the first image. Similarly, performing the first downsampling processing on the second image to be fused suppresses the noise in the fourth-type pixel points, yielding the second image. A third image with less noise can therefore be obtained by fusing the first image and the second image, which improves the fusion effect of the first channel in the first image to be fused and the first channel in the second image to be fused.
Optionally, in the embodiments of the application, the pixel points in the first image to be fused and the pixel points in the second image to be fused are both arranged in diagonal arrays, where the meaning of a diagonal array is as follows:
it is assumed that the first image to be fused includes: the image processing device comprises a first pixel point, a second pixel point, a third pixel point and a fourth pixel point. The coordinate of the first pixel point is (i, j), the coordinate of the second pixel point is (i +1, j), the coordinate of the third pixel point is (i, j +1), the coordinate of the fourth pixel point is (i +1, j +1), wherein i and j are positive integers. Under the condition that the first pixel point is the first-class pixel point, the second pixel point and the third pixel point are not the first-class pixel point, the fourth pixel point is the first-class pixel point, under the condition that the first pixel point is not the first-class pixel point, the second pixel point and the third pixel point are both the first-class pixel point, and the fourth pixel point is not the first-class pixel point.
For example, as shown in fig. 8a, in the case that the first pixel is the first-type pixel, neither the second pixel nor the third pixel is the first-type pixel, and the fourth pixel is the first-type pixel. As shown in fig. 8b, in the case that the first pixel is not the first-type pixel, the second pixel and the third pixel are both the first-type pixel, and the fourth pixel is not the first-type pixel.
As can be seen from fig. 8a and 8b, in the case where the pixels are arranged in a diagonal array, the arrangement of the pixels in the image is as shown in fig. 9a or as shown in fig. 9 b.
The above example takes the first image to be fused as an example, and explains the diagonal array, and similarly, the arrangement manner of the pixel points in the second image to be fused can also refer to the above example, fig. 9a, and fig. 9 b.
In the embodiment of the present application, although the arrangement manner of the pixel points in the first image to be fused and the arrangement manner of the pixel points in the second image to be fused are both diagonal arrays, the arrangement manner of the pixel points in the first image to be fused and the arrangement manner of the pixel points in the second image to be fused may be the same or different.
It is assumed that the second image to be fused includes: the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point. The coordinates of the fifth pixel point are (i, j), the coordinates of the sixth pixel point are (i +1, j), the coordinates of the seventh pixel point are (i, j +1), and the coordinates of the eighth pixel point are (i +1, j +1), wherein i and j are positive integers.
It should be understood that the coordinate of the first pixel point refers to the coordinate of the first pixel point in the pixel coordinate system of the first image to be fused, the coordinate of the second pixel point refers to the coordinate of the second pixel point in the pixel coordinate system of the first image to be fused, the coordinate of the third pixel point refers to the coordinate of the third pixel point in the pixel coordinate system of the first image to be fused, the coordinate of the fourth pixel point refers to the coordinate of the fourth pixel point in the pixel coordinate system of the first image to be fused, the coordinate of the fifth pixel point refers to the coordinate of the fifth pixel point under the pixel coordinate system of the second image to be fused, the coordinate of the sixth pixel point refers to the coordinate of the sixth pixel point under the pixel coordinate system of the second image to be fused, the coordinate of the seventh pixel point refers to the coordinate of the seventh pixel point under the pixel coordinate system of the second image to be fused, and the coordinate of the eighth pixel point refers to the coordinate of the eighth pixel point under the pixel coordinate system of the second image to be fused.
That is to say, the position of the first pixel point in the first image to be fused is the same as the position of the fifth pixel point in the second image to be fused, the position of the second pixel point in the first image to be fused is the same as the position of the sixth pixel point in the second image to be fused, the position of the third pixel point in the first image to be fused is the same as the position of the seventh pixel point in the second image to be fused, and the position of the fourth pixel point in the first image to be fused is the same as the position of the eighth pixel point in the second image to be fused.
Under the condition that the first pixel point and the fifth pixel point are the first-class pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are not the first-class pixel points, and the fourth pixel point and the eighth pixel point are the first-class pixel points. And under the condition that the first pixel point and the fifth pixel point are not the first-class pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are all the first-class pixel points, and the fourth pixel point and the eighth pixel point are not the first-class pixel points. At this time, the arrangement form of the pixel points in the first image to be fused is the same as the arrangement form of the pixel points in the second image to be fused. For example, the arrangement form of the pixel points in the first image to be fused and the arrangement form of the pixel points in the second image to be fused are both as shown in fig. 9 a. For another example, the arrangement form of the pixel points in the first image to be fused and the arrangement form of the pixel points in the second image to be fused are both as shown in fig. 9 b.
Under the condition that the first pixel point is the first-class pixel point and the fifth pixel point is not the first-class pixel point, the second pixel point, the third pixel point and the eighth pixel point are not the first-class pixel point, the fourth pixel point, the sixth pixel point and the seventh pixel point are all the first-class pixel points, under the condition that the first pixel point is not the first-class pixel point and the fifth pixel point is the first-class pixel point, the second pixel point, the third pixel point and the eighth pixel point are all the first-class pixel points, and the fourth pixel point, the sixth pixel point and the seventh pixel point are not the first-class pixel point. At this time, the arrangement form of the pixel points in the first image to be fused is different from the arrangement form of the pixel points in the second image to be fused. For example, the arrangement of the pixels in the first image to be fused is shown in fig. 9a, and the arrangement of the pixels in the second image to be fused is shown in fig. 9 b. For another example, the arrangement form of the pixel points in the first image to be fused is shown in fig. 9b, and the arrangement form of the pixel points in the second image to be fused is shown in fig. 9 a.
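As an illustrative sketch of checking the diagonal-array property over every 2 × 2 block, assuming a boolean mask marks the first-type pixel points (the name is_diagonal_array is illustrative):

```python
import numpy as np

def is_diagonal_array(mask: np.ndarray) -> bool:
    """mask[i, j] is True where the pixel point belongs to the first
    channel. In a diagonal array, every 2x2 block has matching values on
    each diagonal, and the two diagonals differ."""
    a = mask.astype(bool)
    diag = a[:-1, :-1] == a[1:, 1:]    # first pixel point matches fourth
    anti = a[:-1, 1:] == a[1:, :-1]    # second matches third
    mixed = a[:-1, :-1] != a[:-1, 1:]  # the two diagonals differ
    return bool(np.all(diag & anti & mixed))

# G pixel points at positions where the 0-indexed row + column sum is odd:
g_mask = np.indices((4, 4)).sum(axis=0) % 2 == 1
print(is_diagonal_array(g_mask))  # True
```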
Optionally, the pixel points in the first image to be fused and the pixel points in the second image to be fused are both arranged in Bayer arrays.
Referring to fig. 10, fig. 10 is a flowchart illustrating a method for implementing step 502 according to an embodiment of the present disclosure.
1001. Rotate the first image to be fused by a first angle to obtain a fourth image, and rotate the second image to be fused by a second angle to obtain a fifth image.
In the embodiments of the application, the first angle and the second angle are both odd multiples of 45 degrees. Assume the first angle is J1 and the second angle is J2; then J1 and J2 satisfy:

J1 = r1 × 45°, J2 = r2 × 45°,

where r1 and r2 are both odd numbers.
For example, assume that a rotation angle obtained by rotating the first image to be fused clockwise is positive and a rotation angle obtained by rotating it counterclockwise is negative. When r1 = 1 and the first angle is 45 degrees, the fourth image is obtained by rotating the first image to be fused clockwise by 45 degrees. When r1 = -1 and the first angle is -45 degrees, the fourth image is obtained by rotating the first image to be fused counterclockwise by 45 degrees. When r1 = 3 and the first angle is 135 degrees, the fourth image is obtained by rotating the first image to be fused clockwise by 135 degrees. When r1 = -5 and the first angle is -225 degrees, the fourth image is obtained by rotating the first image to be fused counterclockwise by 225 degrees.
For another example, assume that a rotation angle obtained by rotating the first image to be fused counterclockwise is positive and a rotation angle obtained by rotating it clockwise is negative. When r1 = 1 and the first angle is 45 degrees, the fourth image is obtained by rotating the first image to be fused counterclockwise by 45 degrees. When r1 = -1 and the first angle is -45 degrees, the fourth image is obtained by rotating the first image to be fused clockwise by 45 degrees. When r1 = 3 and the first angle is 135 degrees, the fourth image is obtained by rotating the first image to be fused counterclockwise by 135 degrees. When r1 = -5 and the first angle is -225 degrees, the fourth image is obtained by rotating the first image to be fused clockwise by 225 degrees.
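For illustration, a minimal Python sketch of rotating by an odd multiple of 45 degrees, assuming scipy.ndimage.rotate with nearest-neighbor interpolation is an acceptable stand-in for the exact grid rotation (the name rotate_odd_45 is illustrative, and scipy's sign convention for the rotation direction may differ from the conventions chosen above):

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_odd_45(img: np.ndarray, r: int) -> np.ndarray:
    """Rotate an image by r * 45 degrees, where r must be odd. reshape=True
    lets the output canvas grow to hold the rotated image, order=0 (nearest
    neighbour) avoids mixing pixel values across channels, and cval=0.0
    fills the uncovered corner regions, like the filling pixel regions."""
    if r % 2 != 1:
        raise ValueError("r must be odd")
    return rotate(img, angle=45.0 * r, reshape=True, order=0, cval=0.0)
```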
In a possible implementation manner, the rotating the first image to be fused by the first angle may be rotating the first image to be fused by the first angle around an origin of a pixel coordinate system of the first image to be fused, for example, the pixel coordinate system of the first image to be fused is xoy, and the origin of the pixel coordinate system is o. And rotating the first image to be fused by a first angle around o to obtain a fourth image.
In another possible implementation manner, the rotating the first image to be fused by the first angle may be rotating the first image to be fused by the first angle around the center of the first image to be fused, where the center of the first image to be fused is an intersection of two diagonal lines of the first image to be fused. For example, the fourth image shown in fig. 12 can be obtained by rotating the first image to be fused shown in fig. 11 by 45 degrees around the center of the first image to be fused.
In yet another possible implementation manner, the rotating the first image to be fused by the first angle may be rotating the first image to be fused by the first angle around a coordinate axis of a pixel coordinate system of the first image to be fused. For example, the pixel coordinate system of the first image to be fused is xoy, and the abscissa axis of the pixel coordinate system is ox. And rotating the first image to be fused by a first angle around the ox to obtain a fourth image. For another example, the pixel coordinate system of the first image to be fused is xoy, and the ordinate axis of the pixel coordinate system is oy. And rotating the first image to be fused by a first angle around oy to obtain a fourth image.
On the premise that the rotation angle is the first angle, the manner of rotating the first image to be fused is not limited. Similarly, on the premise that the rotation angle is the second angle, the manner of rotating the second image to be fused is not limited.
Optionally, the second angle is an angle with the same terminal side as the first angle, that is, the two angles differ by an integer multiple of 360 degrees. The rotation direction convention of the first image to be fused is the same as that of the second image to be fused. For example, when a clockwise rotation of the first image to be fused is positive and a counterclockwise rotation is negative, a clockwise rotation of the second image to be fused is positive and a counterclockwise rotation is negative; and when a counterclockwise rotation of the first image to be fused is positive and a clockwise rotation is negative, a counterclockwise rotation of the second image to be fused is positive and a clockwise rotation is negative.
In a possible implementation manner, the rotating the second image to be fused by the second angle may be rotating the second image to be fused by the second angle around an origin of a pixel coordinate system of the second image to be fused, for example, the pixel coordinate system of the second image to be fused is xoy, and the origin of the pixel coordinate system is o. And rotating the second image to be fused by a second angle around the o to obtain a fifth image.
In another possible implementation manner, the second image to be fused is rotated by a second angle, which may be that the second image to be fused is rotated by the second angle around the center of the second image to be fused, where the center of the second image to be fused is an intersection point of two diagonal lines of the second image to be fused. For example, the center of the second image to be fused is o. And rotating the second image to be fused by a second angle around the o to obtain a fifth image.
In yet another possible implementation manner, the rotating the second image to be fused by the second angle may be rotating the second image to be fused by the second angle around a coordinate axis of a pixel coordinate system of the second image to be fused. For example, the pixel coordinate system of the second image to be fused is xoy, and the abscissa axis of the pixel coordinate system is ox. And rotating the second image to be fused by a second angle around the ox to obtain a fifth image. For another example, the pixel coordinate system of the second image to be fused is xoy, and the ordinate axis of the pixel coordinate system is oy. And rotating the second image to be fused by a second angle around oy to obtain a fifth image.
1002. Amplify the coordinate axis scale of the first pixel coordinate system by a factor of n to obtain a second pixel coordinate system, and amplify the coordinate axis scale of the third pixel coordinate system by the same factor n to obtain a fourth pixel coordinate system.
In the embodiment of the present application, the first pixel coordinate system is a pixel coordinate system of the fourth image, and the third pixel coordinate system is a pixel coordinate system of the fifth image.
In the embodiments of the application, n is a positive number. Optionally, n = √2.

A second pixel coordinate system is obtained by amplifying both the abscissa axis scale and the ordinate axis scale of the first pixel coordinate system by a factor of n.

For example, suppose n = √2. The second pixel coordinate system shown in fig. 13 is obtained by amplifying the abscissa axis scale and the ordinate axis scale of the first pixel coordinate system (i.e., xoy) shown in fig. 12 by a factor of √2.
Similarly, the abscissa axis scale and the ordinate axis scale of the third pixel coordinate system are both amplified by n times, and a fourth pixel coordinate system can be obtained.
1003. Determine the pixel value of each pixel point under the second pixel coordinate system according to the pixel values of the pixel points in the fourth image to obtain the first image, and determine the pixel value of each pixel point under the fourth pixel coordinate system according to the pixel values of the pixel points in the fifth image to obtain the second image.
Because the scale of the pixel coordinate system takes the pixel point as a unit, that is, the scale of the pixel coordinate system is the side length of one pixel point, under the condition that the scale of the pixel coordinate system of the image is changed, the area covered by the pixel point in the image is also correspondingly changed. The image processing device determines the pixel value of each pixel point under the second pixel coordinate system according to the pixel value of the pixel point in the fourth image to obtain a first image, and determines the pixel value of each pixel point under the fourth pixel coordinate system according to the pixel value of the pixel point in the fifth image to obtain a second image. Optionally, the image processing apparatus uses an average of pixel values in an area covered by each pixel point in the second pixel coordinate system as a pixel value of the pixel point, and uses an average of pixel values in an area covered by each pixel point in the fourth pixel coordinate system as a pixel value of the pixel point.
For example, the image processing apparatus determines the pixel value of each pixel point under the second pixel coordinate system (i.e., xoy) according to the pixel values of the pixel points in the fourth image shown in fig. 13, obtaining the first image shown in fig. 14. In fig. 13, the following pixel regions are all third filling pixel regions: triangle region ABW, triangle region DEC, triangle region GHF, triangle region HIJ, triangle region KLM, triangle region NPQ, triangle region RST, and triangle region TUV. The pixel values within the third filling pixel regions are all a third value; optionally, the third value is 0. In fig. 14, the following pixel points are all fourth filling pixel points: D11, D14, D31, and D34. The pixel value of a fourth filling pixel point represents the degree of green brightness, that is, the fourth filling pixel points are G-channel pixel points. The pixel values of the fourth filling pixel points are all a fourth value; optionally, the fourth value is 0.
Similarly, the image processing device determines the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the fifth image, and can obtain the second image.
In this implementation, the fourth image is obtained by rotating the first image to be fused, and the fifth image is obtained by rotating the second image to be fused; the first image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the fourth image, and the second image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the fifth image. In this way a discontinuous image can be converted into a continuous image, which reduces the amount of data to be processed and increases the processing speed.
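By way of illustration only, the net effect of steps 1001 to 1003 on the first-type pixel points can be sketched as a direct index mapping, under the assumption (consistent with figs. 6 and 7) that the first-type (G) pixel points sit where the sum of the 0-indexed row and column indices is odd; the name quincunx_to_grid and the fill handling are illustrative:

```python
import numpy as np

def quincunx_to_grid(raw: np.ndarray, fill: float = 0.0) -> np.ndarray:
    """Map the first-channel pixel points of a Bayer-style mosaic onto a
    compact axis-aligned grid -- the net effect of rotating by 45 degrees
    and amplifying the coordinate scale by sqrt(2). Assumes G pixel points
    sit where (row + col) is odd (0-indexed); positions with no source
    pixel point keep the fill value (the second filling pixel points)."""
    h, w = raw.shape
    rows, cols = (h + w) // 2 - 1, (h + w) // 2
    out = np.full((rows, cols), fill, dtype=raw.dtype)
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 1:  # a first-type pixel point
                out[(r + c) // 2, (c - r + h - 1) // 2] = raw[r, c]
    return out
```

Applied to a 4 × 4 mosaic, this yields a 3 × 4 grid whose four corner positions keep the fill value, matching the second filling pixel points D11, D14, D31, and D34 of fig. 7.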
As an alternative embodiment, after obtaining the third image, the image processing apparatus executes a flowchart of the method shown in fig. 15.
1501. Extract the second channel in the first image to be fused to obtain a sixth image.
In the embodiment of the present application, the second channel is a channel different from the first channel in the first image to be fused. For example, the first image to be fused contains R, G two channels. In the case where the first channel is a G channel, the R channel is a second channel.
It is to be understood that the second channel and the third channel may be the same or different. For example, when the first image to be fused and the second image to be fused each include the two channels R and G, and the first channel is the G channel, the second channel and the third channel are both the R channel. For another example, when the first image to be fused and the second image to be fused each include the three channels R, G, and B, and the first channel is the G channel, the second channel and the third channel may both be the R channel; or the second channel may be the R channel and the third channel the B channel; or the second channel may be the B channel and the third channel the R channel.
The second channel in the first image to be fused is extracted to obtain a sixth image. The size of the sixth image is the same as that of the first image to be fused. In the sixth image, the pixel values of the second-channel pixel points are the same as those of the second-channel pixel points in the first image to be fused; all pixel points other than the second-channel pixel points are fifth filling pixel points, and the pixel values of the fifth filling pixel points are all a fifth value. Optionally, the fifth value is 0.
For example, the first image to be fused shown in fig. 16a includes the R and G channels; extracting the G channel in the first image to be fused yields fig. 16b. The pixel value of pixel point G12 in the first image to be fused is the same as that of pixel point G12 in the sixth image, the pixel value of pixel point G14 in the first image to be fused is the same as that of pixel point G14 in the sixth image, ..., and the pixel value of pixel point G44 in the first image to be fused is the same as that of pixel point G44 in the sixth image. In the sixth image, the pixel values of pixel points N11, N13, N22, N24, N31, N33, N42, and N44 are all 0.
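As an illustrative sketch of this channel extraction, again assuming a label map of channel names describes the mosaic (the name extract_channel is illustrative):

```python
import numpy as np

def extract_channel(raw: np.ndarray, cfa: np.ndarray, channel: str) -> np.ndarray:
    """Keep the pixel values of one channel and set every other pixel point
    to 0 (the filling value); the output size equals the input size."""
    out = np.zeros_like(raw)
    keep = cfa == channel
    out[keep] = raw[keep]
    return out
```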
1502. Perform upsampling processing on the third image to obtain a seventh image.
In the embodiments of the application, the upsampling magnification of an upsampling process equals the length of the image after upsampling divided by the length of the image before upsampling, which in turn equals the width of the image after upsampling divided by the width of the image before upsampling. For example, the size of the RAW image shown in fig. 17a is 2 × 2; performing 2-times upsampling processing on this image yields the 4 × 4 image shown in fig. 17b. In fig. 17b, the following pixel points are sixth filling pixel points: N12, N14, N21, N22, N23, N24, N32, N33, N41, N42, N43, and N44. The pixel values of the sixth filling pixel points are all a sixth value; optionally, the sixth value is 0.
The first upsampling processing may be implemented as one of the following: bilinear interpolation, nearest-neighbor interpolation, higher-order interpolation, or deconvolution; the specific implementation of the upsampling processing is not limited in this application.
In the embodiments of the application, the upsampling magnification of the first upsampling processing and the downsampling magnification of the first downsampling processing are reciprocals of each other. For example, when the downsampling magnification of the first downsampling processing is a, the upsampling magnification of the first upsampling processing is 1/a. Therefore, by performing the first upsampling processing on the third image, the size of the third image can be enlarged back to the size of the first image to be fused, yielding a seventh image.
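For illustration, a minimal Python sketch of a k-times upsampling whose magnification is the reciprocal of a 1/k downsampling magnification, using nearest-neighbor repetition as one of the permissible implementations (the name upsample_nearest is illustrative):

```python
import numpy as np

def upsample_nearest(img: np.ndarray, k: int) -> np.ndarray:
    """k-times nearest-neighbour upsampling: the output length and width are
    k times those of the input, the reciprocal of a 1/k downsampling."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)
```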
1503. Take the sixth image and the seventh image each as one channel, and combine the sixth image and the seventh image to obtain an eighth image.
The sixth image is taken as the image of one channel and the seventh image as the image of another channel, and the two are combined to obtain an eighth image containing two channels.
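As an illustrative sketch of this combination, assuming the two single-channel images are stacked along a trailing channel axis (the name merge_channels is illustrative):

```python
import numpy as np

def merge_channels(ch_a: np.ndarray, ch_b: np.ndarray) -> np.ndarray:
    """Treat each input as one channel and stack them into a two-channel
    image of shape (height, width, 2)."""
    assert ch_a.shape == ch_b.shape
    return np.stack([ch_a, ch_b], axis=-1)
```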
Since the human eye is more sensitive to the information contained in the first channel than to the information contained in the second channel, the fusion effect of the first image to be fused and the second image to be fused depends mainly on the fusion effect of the first channel in the first image to be fused and the first channel in the second image to be fused. By combining the sixth image and the seventh image, the first channel in the first image to be fused and the first channel in the second image to be fused can be fused, and further the first image to be fused and the second image to be fused can be fused.
Because the fusion processing of the first image and the second image based on the technical solution provided in the embodiments of the application improves the fusion effect, the fusion effect of the first image to be fused and the second image to be fused can be improved.
As an optional implementation, the second channel in the second image to be fused is extracted to obtain a first candidate image. The sixth image and the first candidate image are each taken as one channel and combined to obtain a second candidate image, realizing the fusion of the first image to be fused and the second image to be fused. Similarly, because the fusion processing of the first image and the second image based on the technical solution provided in the embodiments of the application improves the fusion effect, combining the sixth image and the first candidate image can improve the fusion effect of the first image to be fused and the second image to be fused.
It should be understood that, when the number of channels included in each of the first image to be fused and the second image to be fused is greater than 2, the first image to be fused and the second image to be fused can still be fused according to steps 1501 to 1503.
For example, when the first image to be fused and the second image to be fused each include the three channels R, G, and B, and the first channel is the G channel, the R channel in the first image to be fused is extracted to obtain a third candidate image, and the B channel in the first image to be fused is extracted to obtain a fourth candidate image. The third candidate image, the fourth candidate image, and the seventh image are each taken as one channel and combined to obtain a fifth candidate image. The fifth candidate image is taken as the image obtained by fusing the first image to be fused and the second image to be fused.
For another example, when the first image to be fused and the second image to be fused each include the three channels R, G, and B, and the first channel is the G channel, the R channel in the first image to be fused is extracted to obtain a sixth candidate image, and the B channel in the second image to be fused is extracted to obtain a seventh candidate image. The sixth candidate image, the seventh candidate image, and the seventh image are each taken as one channel and combined to obtain an eighth candidate image. The eighth candidate image is taken as the image obtained by fusing the first image to be fused and the second image to be fused.
For another example, when the first image to be fused and the second image to be fused each include the three channels R, G, and B, and the first channel is the G channel, the R channel in the second image to be fused is extracted to obtain a ninth candidate image, and the B channel in the second image to be fused is extracted to obtain a tenth candidate image. The ninth candidate image, the tenth candidate image, and the seventh image are each taken as one channel and combined to obtain an eleventh candidate image. The eleventh candidate image is taken as the image obtained by fusing the first image to be fused and the second image to be fused.
Referring to fig. 18, fig. 18 is a flowchart illustrating a method for implementing step 1502 when the first image and the second image are obtained through steps 1001 to 1003 according to an embodiment of the present application.
1801. Rotate the third image by a third angle to obtain a ninth image.
In the embodiments of the application, the third angle is an angle with the same terminal side as the fourth angle, and the fourth angle is the negative of the first angle. For example, if the first angle is 45 degrees, the fourth angle is -45 degrees, and the third angle is any angle with the same terminal side as -45 degrees.
In one possible implementation, rotating the third image by a third angle may be rotating the third image by the third angle around an origin of a pixel coordinate system of the third image, for example, the pixel coordinate system of the third image is xoy, and the origin of the pixel coordinate system is o. And rotating the third image by a third angle around the o to obtain a ninth image.
In another possible implementation, rotating the third image by a third angle may be rotating the third image by the third angle around a center of the third image, where the center of the third image is an intersection of two diagonal lines of the third image.
In yet another possible implementation, rotating the third image by a third angle may be rotating the third image by the third angle around a coordinate axis of a pixel coordinate system of the third image. For example, the pixel coordinate system of the third image is xoy, and the abscissa axis of the pixel coordinate system is ox. A ninth image is obtained by rotating the third image by a third angle around ox. For another example, the pixel coordinate system of the third image is xoy, and the ordinate axis of the pixel coordinate system is oy. And rotating the third image by a third angle around oy to obtain a ninth image.
1802. Reduce the coordinate axis scale of the fifth pixel coordinate system by a factor of n to obtain a sixth pixel coordinate system.
In the embodiment of the present application, the fifth pixel coordinate system is the pixel coordinate system of the ninth image.
The value of n in this step is the same as the value of n in step 1002. The abscissa axis scale and the ordinate axis scale of the fifth pixel coordinate system are both reduced by a factor of n to obtain the sixth pixel coordinate system.
1803. Determine the pixel value of each pixel point under the sixth pixel coordinate system according to the pixel values of the pixel points in the ninth image to obtain the seventh image.
As described above, when the scale of the pixel coordinate system of an image is changed, the area covered by the pixel points in the image changes accordingly. The image processing apparatus determines the pixel value of each pixel point under the sixth pixel coordinate system according to the pixel values of the pixel points in the ninth image, obtaining the seventh image.
In a possible implementation manner, the image processing apparatus uses an average value of pixel values in an area covered by each pixel point in the sixth pixel coordinate system as the pixel value of the pixel point.
In other possible implementation manners, the pixel value of each pixel point in the seventh image is determined by:
for a pixel point whose center is the center of a first-type pixel point, the pixel value of that first-type pixel point is taken as the pixel value of the pixel point;
for a pixel point whose center is not the center of any first-type pixel point, a seventh value is taken as the pixel value of the pixel point. Optionally, the seventh value is 0.
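This assignment rule can be sketched as follows; it is a simplified illustration assuming the rescaled grid and the first-type pixel centers have already been computed (the helper names and data layout are hypothetical):

```python
import numpy as np

SEVENTH_VALUE = 0.0  # assumed fill value, per "optionally, the seventh value is 0"

def assign_pixel_values(grid_centers: np.ndarray, lattice: dict) -> np.ndarray:
    """grid_centers: (H, W, 2) array with the center coordinates of each pixel
    point in the sixth pixel coordinate system.
    lattice: maps a first-type pixel center (x, y) to its pixel value."""
    h, w, _ = grid_centers.shape
    out = np.full((h, w), SEVENTH_VALUE, dtype=np.float32)
    for i in range(h):
        for j in range(w):
            key = tuple(np.round(grid_centers[i, j], 6))
            if key in lattice:  # center coincides with a first-type pixel center
                out[i, j] = lattice[key]
    return out
```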
For example, the third image shown in fig. 19a is rotated 45 degrees counterclockwise, resulting in the ninth image shown in fig. 19b. The coordinate axis scale of the pixel coordinate system of the ninth image shown in fig. 19b is reduced by a factor of n, and the seventh image shown in fig. 19c is obtained. In the seventh image shown in fig. 19c, the centers of the following pixel points are not centers of first-type pixel points: N11, N13, N22, N24, N31, N33, N42 and N44. The pixel values of these pixel points are all the seventh value.
Because, in fig. 19b and fig. 19c, pixel point D13 has the same center as pixel point G12, D24 has the same center as G14, D12 has the same center as G21, D23 has the same center as G23, D22 has the same center as G32, D33 has the same center as G34, D21 has the same center as G41, and D32 has the same center as G43, the pixel value of D13 is the same as that of G12, the pixel value of D24 is the same as that of G14, the pixel value of D12 is the same as that of G21, the pixel value of D23 is the same as that of G23, the pixel value of D22 is the same as that of G32, the pixel value of D33 is the same as that of G34, the pixel value of D21 is the same as that of G41, and the pixel value of D32 is the same as that of G43.
According to this embodiment, the ninth image is obtained by rotating the third image, and the seventh image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the ninth image, which reduces the amount of data to be processed and improves the processing speed.
As an alternative embodiment, based on the above, before executing step 1503, the image processing apparatus executes the following steps:
11. Extract the second channel of the second image to be fused to obtain a tenth image.
The second channel in this step is the same as the second channel in step 1501. The tenth image obtained by extracting the second channel of the second image to be fused has the same size as the second image to be fused. In the tenth image, the pixel values of the pixel points of the second channel are the same as those of the corresponding pixel points in the second image to be fused; all pixel points other than those of the second channel are sixth filling pixel points, and their pixel values are all an eighth value. Optionally, the eighth value is 0.
12. Perform fusion processing on the sixth image and the tenth image to obtain an eleventh image.
Fusing the sixth image with the tenth image fuses the second channel of the first image to be fused with the second channel of the second image to be fused, yielding the eleventh image.
After obtaining the eleventh image, the step 1503 executed by the image processing apparatus specifically includes the following steps:
21. Taking the seventh image and the eleventh image each as one channel, combine the two to obtain the eighth image.
By taking the seventh image as the image of one channel and the eleventh image as the image of another channel and combining them, the second channel of the first image to be fused and the second channel of the second image to be fused are fused at the same time as their first channels, yielding the eighth image. The eighth image is the fused image of the first image to be fused and the second image to be fused, which further improves the fusion effect.
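A minimal sketch of this channel-wise merge, assuming the fused single-channel images are numpy arrays of equal size (array names are illustrative only):

```python
import numpy as np

def merge_channels(seventh_image: np.ndarray, eleventh_image: np.ndarray) -> np.ndarray:
    # Each input is a fused single-channel image; stacking them along a new
    # trailing axis treats each one as a channel of the combined image.
    return np.stack([seventh_image, eleventh_image], axis=-1)

fused_g = np.ones((4, 4), dtype=np.float32)   # e.g., the fused first channel
fused_r = np.zeros((4, 4), dtype=np.float32)  # e.g., the fused second channel
eighth_image = merge_channels(fused_g, fused_r)  # shape (4, 4, 2)
```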
It should be understood that, when the number of channels included in the first image to be fused and the second image to be fused is greater than 2, each channel in the first image to be fused and each channel in the second image to be fused may be extracted, the corresponding channels are fused, and all fused images are merged to improve the fusion effect of the first image to be fused and the second image to be fused.
For example, when the first image to be fused and the second image to be fused both include R, G, B three channels, the first channel is the G channel and the second channel is the R channel: the R channel of the first image to be fused is extracted to obtain the sixth image, the R channel of the second image to be fused is extracted to obtain the tenth image, the B channel of the first image to be fused is extracted to obtain a twelfth candidate image, and the B channel of the second image to be fused is extracted to obtain a thirteenth candidate image. The sixth image and the tenth image are fused to obtain the eleventh image, and the twelfth candidate image and the thirteenth candidate image are fused to obtain a fourteenth candidate image. Taking the seventh image, the eleventh image and the fourteenth candidate image each as one channel, the three are combined to obtain a fifteenth candidate image, which is taken as the image obtained by fusing the first image to be fused and the second image to be fused.
It should be understood that, in the drawings in the embodiments of the present application, the first images to be fused each include R, G, B three channels, and the first channels are all G channels, but in practical applications, the three channels included in the first images to be fused may not be R, G, B, and the first channels may not be G channels. The drawings provided in the embodiments of the present application are only examples and should not be construed as limiting the present application.
As an alternative embodiment, before executing step 503, the image processing apparatus further executes the following step: align the first image with the second image to obtain a first aligned image.
By aligning the first image with the second image to obtain the first aligned image, the image processing device can reduce the displacement between homonymous pixel points in the first image and the second image, where two homonymous pixel points correspond to the same physical point. Optionally, the image processing apparatus may align the first image with the second image using one of the following: scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG), oriented FAST and rotated BRIEF (ORB), or the Sobel operator.
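As one hedged illustration of feature-based alignment (ORB is used here; the patent does not prescribe a specific implementation), the following sketch uses OpenCV to estimate a homography between two grayscale images and warp the first onto the second:

```python
import cv2
import numpy as np

def align(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    # Inputs are assumed to be single-channel uint8 images.
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(first_image, None)
    k2, d2 = orb.detectAndCompute(second_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    h, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    # Warp the first image into the second image's coordinates, reducing
    # the displacement between homonymous points.
    return cv2.warpPerspective(first_image, h, second_image.shape[1::-1])
```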
Since the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than 0.25, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than 0.25, performing alignment processing on the first image and the second image can improve the alignment accuracy between the first image and the second image.
After obtaining the first aligned image, the image processing apparatus performs the following step in the process of performing step 503: perform fusion processing on the second image and the first aligned image to obtain the third image.
The image processing device can improve the fusion effect by fusing the second image and the first aligned image, and further obtain a third image.
Based on the technical scheme provided by the above embodiment, the embodiment of the application also provides a possible application scenario.
With the popularization of mobile phones and the improvement of their photographing functions, more and more people use mobile phones to take photos. However, for various reasons, the quality of images captured by a mobile phone may be poor, for example because of image blur or improper exposure. When this happens, the image needs to be processed to improve its quality, and image fusion processing is one such method. The technical solution provided by the embodiments of the present application can improve the fusion accuracy of the images to be fused and thereby the effect of image fusion processing.
For example, when the user presses the shutter key, the mobile phone captures an image a and an image b within a short time. Based on the technical solution provided by the embodiments of the present application, the mobile phone processes image a and image b, adjusting the position of at least one pixel point in image a so as to align image a with image b and obtain an image c. The mobile phone then fuses image b with image c to obtain an image d, which is presented to the user.
Referring to fig. 20, fig. 20 is a schematic flowchart of another image fusion method according to an embodiment of the present disclosure.
2001. Acquire at least two images to be fused, where the at least two images to be fused include a third image to be fused and a fourth image to be fused.
In the embodiment of the application, under the condition that the number of the images to be fused is 2, at least two images to be fused are a third image to be fused and a fourth image to be fused. And under the condition that the number of the images to be fused is more than 2, the third image to be fused and the fourth image to be fused are part of at least two images to be fused.
In the embodiment of the application, the third image to be fused and the fourth image to be fused are both RAW images. Because human eyes have different sensitivities to different colors, when a RAW image includes at least two color channels, the channel to which the human eye is most sensitive usually contains the most pixel points, so that observing the RAW image gives better visual perception and more information. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or blue; therefore, when the RAW image includes R, G, B three channels, the G channel includes the largest number of pixel points. For another example, since the sensitivity of the human eye to yellow is higher than its sensitivity to red or blue, when the RAW image includes R, Y, B three channels, the Y channel includes the largest number of pixel points.
In the embodiment of the application, the numbers of channels in the third image to be fused and in the fourth image to be fused are both not less than 2, and the channel with the largest number of pixel points in the third image to be fused is the same as the channel with the largest number of pixel points in the fourth image to be fused. The channel with the largest number of pixel points in the third image to be fused is called the fourth channel, the pixel points belonging to the fourth channel are called sixth-type pixel points, and the third image to be fused and the fourth image to be fused both contain sixth-type pixel points.
For example, the fourth channel is the G channel, the third image to be fused includes pixel points a, b, c and d, of which a and c belong to the G channel, and the fourth image to be fused includes pixel points e, f, g and h, of which e and g belong to the G channel. In this case, the sixth-type pixel points include: pixel point a, pixel point c, pixel point e and pixel point g.
It should be understood that, when the third image to be fused includes two channels with equal numbers of pixel points, the fourth channel may be either channel of the third image to be fused. For example, if in the third image to be fused the ratio of the number of R-channel pixel points to the number of G-channel pixel points is 1:1, the fourth channel may be the R channel or the G channel.
In one implementation of acquiring at least two images to be fused, the image processing device receives at least two images to be fused input by a user through an input component. The input component includes: a keyboard, a mouse, a touch screen, a touchpad, an audio input device, and the like.
In another implementation of acquiring at least two images to be fused, the image processing device receives the at least two images to be fused sent by a terminal. Optionally, the terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
In another implementation manner of acquiring at least two images to be fused, the image processing device may acquire the at least two images to be fused through the imaging component. Optionally, the imaging component may be a camera.
2002. Extract a fourth channel of the third image to be fused to obtain a twelfth image, and extract the fourth channel of the fourth image to be fused to obtain a thirteenth image.
In this embodiment, the sixth-type pixel points belong to the fourth channel. The image processing device extracts the fourth channel of the third image to be fused, that is, extracts the sixth-type pixel points in the third image to be fused, to obtain the twelfth image; it likewise extracts the fourth channel of the fourth image to be fused to obtain the thirteenth image. The size of the twelfth image is the same as the size of the third image to be fused. In the twelfth image, the pixel values of the pixel points of the fourth channel are the same as those in the third image to be fused; all pixel points other than those of the fourth channel are seventh filling pixel points, and their pixel values are all an eighth value. Optionally, the eighth value is 0.
For example, the third image to be fused shown in fig. 21a includes R, G, B channels, and the G channel of the third image to be fused is extracted, resulting in the twelfth image shown in fig. 21b. The pixel value of pixel point G12 in the third image to be fused is the same as that of G12 in the twelfth image, the pixel value of G14 in the third image to be fused is the same as that of G14 in the twelfth image, ..., and the pixel value of G44 in the third image to be fused is the same as that of G44 in the twelfth image. In the twelfth image, the pixel values of pixel points N11, N13, N22, N24, N31, N33, N42 and N44 are all 0.
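The extraction step can be sketched as follows for a Bayer-style mosaic; the mask construction assumes a quincunx G-pixel layout like fig. 21a and is illustrative only:

```python
import numpy as np

def extract_channel(mosaic: np.ndarray, channel_mask: np.ndarray, fill: float = 0.0) -> np.ndarray:
    # Keep the pixel values where the mask marks the target channel;
    # every other position becomes a filling pixel with the given value.
    return np.where(channel_mask, mosaic, fill)

mosaic = np.arange(16, dtype=np.float32).reshape(4, 4)
# Hypothetical G-channel positions: a quincunx (diagonal) pattern.
g_mask = (np.add.outer(np.arange(4), np.arange(4)) % 2) == 1
twelfth_image = extract_channel(mosaic, g_mask)
```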
2003. A fourteenth image is obtained by performing a downsampling process on the twelfth image, and a fifteenth image is obtained by performing the downsampling process on the thirteenth image.
Before proceeding to the following explanation, continuous images are defined. In the embodiment of the present application, a continuous image is an image in which all pixel points belong to the same channel; for convenience of description, images other than continuous images are hereinafter referred to as non-continuous images. For example, the third image to be fused shown in fig. 21a is a non-continuous image, and the fourteenth image shown in fig. 23 is a continuous image.
It should be understood that a continuous image may include filling pixel points; the continuous image shown in fig. 23 includes the ninth filling pixel points. If the pixel points other than filling pixel points in a continuous image are called channel pixel points, then in a continuous image there is no filling pixel point between any two adjacent channel pixel points.
In the embodiment of the present application, the fourteenth image and the fifteenth image are both continuous images, and the fourteenth image and the fifteenth image both include the sixth type of pixel points.
In the embodiment of the present application, a ratio of the resolution of the fourteenth image to the resolution of the third image to be fused is greater than a second threshold, and a ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than the second threshold. Optionally, the second threshold is 0.25.
If the downsampling performed on the twelfth image and on the thirteenth image is referred to as a second downsampling process, the downsampling magnification of the second downsampling process is greater than 0.5.
The second downsampling window of the second downsampling processing is also square. The center of the second downsampling window coincides with the center of a sixth-type pixel point, where the center of the window is the intersection of its two diagonals and the center of the sixth-type pixel point is the intersection of the pixel point's two diagonals. The area of the second downsampling window is larger than that of the sixth-type pixel point, and the vertices of the sixth-type pixel point lie on the boundary of the second downsampling window.
The image processing device divides the twelfth image into at least one pixel point region by means of at least one second downsampling window. Each pixel point region is treated as one pixel point, and the pixel value of the pixel point corresponding to each region is determined from the pixel values within that region, thereby implementing the second downsampling of the twelfth image. The second downsampling of the thirteenth image is implemented in the same way.
For example, the twelfth image shown in fig. 22 is subjected to the second downsampling process, whereby the fourteenth image shown in fig. 23 is obtained. Suppose that in the twelfth image shown in fig. 22, the centers of pixel points N11, G12, N13, G14, G21, N22, G23, N24, N31, G32, N33, G34, G41, N42, G43 and N44 are Z1, Z2, Z3, Z4, Z5, Z6, Z7, Z8, Z9, Z10, Z11, Z12, Z13, Z14, Z15 and Z16, respectively.
The second downsampling window TZ1Z6Z9 (hereinafter window 1) is centered at Z5; its area is larger than that of pixel point G21, and the vertices of G21 lie on the four sides of window 1. Window Z1AZ3Z6 (window 2) is centered at Z2; its area is larger than that of G12, whose vertices lie on its four sides. Window QZ9Z14O (window 3) is centered at Z13; its area is larger than that of G41, whose vertices lie on its four sides. Window Z9Z6Z11Z14 (window 4) is centered at Z10; its area is larger than that of G32, whose vertices lie on its four sides. Window Z6Z3Z8Z11 (window 5) is centered at Z7; its area is larger than that of G23, whose vertices lie on its four sides. Window Z3DFZ8 (window 6) is centered at Z4; its area is larger than that of G14, whose vertices lie on its four sides. Window Z14Z11Z16L (window 7) is centered at Z15; its area is larger than that of G43, whose vertices lie on its four sides. Window Z11Z8IZ16 (window 8) is centered at Z12; its area is larger than that of G34, whose vertices lie on its four sides.
The pixel point region within window 1 is taken as pixel point D12 of the fourteenth image, and the pixel value of D12 is determined from the pixel values within window 1. Likewise, the regions within windows 2 through 8 are taken as pixel points D13, D21, D22, D23, D24, D32 and D33 of the fourteenth image, respectively, and the pixel value of each is determined from the pixel values within the corresponding window. Optionally, the mean of the pixel values within each second downsampling window is taken as the pixel value of the corresponding pixel point; for example, the mean of the pixel values within window 1 is used as the pixel value of D12.
It should be understood that, in fig. 22, the following regions consist entirely of eighth filling pixel points: the triangle regions ABW, DEC, FGE, IJH, LMK, PQN, RSQ and UVT. The pixel values in these regions are all a ninth value; optionally, the ninth value is 0. In fig. 23, pixel points D11, D14, D31 and D34 are all ninth filling pixel points. The pixel value of a ninth filling pixel point represents the brightness of green, that is, the ninth filling pixel points are pixel points of the G channel, and their pixel values are all a tenth value. Optionally, the tenth value is 0.
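A simplified sketch of this windowed downsampling follows. It assumes, as in the optional implementation above, that each output pixel is the mean over its window and that each 45-degree-rotated window covers exactly one channel pixel plus an equal area of filling pixels (helper names and the `centers` layout are hypothetical):

```python
import numpy as np

def second_downsample(channel_image: np.ndarray, centers, fill: float = 0.0) -> np.ndarray:
    """channel_image: image such as the twelfth image (non-channel pixels = fill).
    centers: per output row, a list of (row, col) positions of the channel
    pixels, e.g. [[(1, 0), (0, 1)], ...] for the layout of fig. 22."""
    out = []
    for row in centers:
        # Each window holds one channel pixel and an equal area of filling
        # pixels, so the window mean is (value + fill) / 2.
        out.append([(channel_image[r, c] + fill) / 2.0 for (r, c) in row])
    return np.asarray(out, dtype=np.float32)
```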
As can be seen from fig. 22, in the twelfth image a non-G-channel pixel point lies between any two G-channel pixel points, and the same holds in the thirteenth image. Because the information carried by G-channel pixel points differs from that carried by non-G-channel pixel points, directly fusing the twelfth image with the thirteenth image would reduce the fusion effect.
As can be seen from fig. 23, in the fourteenth image obtained by the second downsampling of the twelfth image, all pixel points other than the ninth filling pixel points are pixel points of the G channel; likewise for the fifteenth image obtained by the second downsampling of the thirteenth image. Because the ninth filling pixel points can be regarded as pixel points of the G channel, fusing the fourteenth image with the fifteenth image improves the fusion effect.
2004. Perform fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
In the embodiment of the present application, the image registration processing may be implemented by any algorithm capable of registering images, such as SIFT, HOG, ORB or the Sobel operator.
In one possible implementation, the image processing apparatus fuses the fourteenth image and the fifteenth image by determining the pixel values of pixel points at the same position in the two images. For example, the fourteenth image includes pixel points a, b, c and d, the fifteenth image includes pixel points A, B, C and D, and the sixteenth image includes pixel points e, f, g and h. In the pixel coordinate system of its own image, the coordinates of a, A and e are (1, 1); the coordinates of b, B and f are (1, 2); the coordinates of c, C and g are (2, 1); and the coordinates of d, D and h are (2, 2). Suppose the pixel values of a, b, c and d are p1, p2, p3 and p4, and the pixel values of A, B, C and D are p5, p6, p7 and p8. Then the pixel value of e is (p1+p5)/2, the pixel value of f is (p2+p6)/2, the pixel value of g is (p3+p7)/2, and the pixel value of h is (p4+p8)/2.
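Under the same-position-averaging implementation just described, the fusion reduces to an elementwise mean; a minimal sketch (array names are illustrative):

```python
import numpy as np

def fuse_mean(fourteenth_image: np.ndarray, fifteenth_image: np.ndarray) -> np.ndarray:
    # Pixel points at the same coordinates are averaged, matching the
    # (p_i + p_j) / 2 rule in the example above.
    return (fourteenth_image + fifteenth_image) / 2.0

a = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
b = np.array([[5.0, 6.0], [7.0, 8.0]], dtype=np.float32)
sixteenth_image = fuse_mean(a, b)  # [[3., 4.], [5., 6.]]
```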
In the embodiment of the present application, since the ratio of the resolution of the fourteenth image to the resolution of the third image to be fused is greater than 0.25, and the ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than 0.25, fusing the fourteenth image with the fifteenth image can improve the fusion effect of the fourth channel of the third image to be fused and the fourth channel of the fourth image to be fused.
As an alternative embodiment, the channel of the third image to be fused is the same as the channel of the fourth image to be fused. For example, the third image to be fused contains R, G two channels, and the fourth image to be fused also contains R, G two channels. For another example, the third image to be fused includes R, G, B three channels, and the fourth image to be fused also includes R, G, B three channels. As another example, the third image to be fused includes R, Y (Y in this case means yellow) and B channels, and the fourth image to be fused also includes R, Y, B channels.
As an optional implementation manner, in the case that the channels of the third image to be fused are the same as those of the fourth image to be fused, the third image to be fused and the fourth image to be fused each include a fifth channel different from the fourth channel. The sixth-type pixel points in the third image to be fused are called ninth-type pixel points, the pixel points belonging to the fifth channel in the third image to be fused are called tenth-type pixel points, the sixth-type pixel points in the fourth image to be fused are called eleventh-type pixel points, and the pixel points belonging to the fifth channel in the fourth image to be fused are called twelfth-type pixel points. The ratio of the number of ninth-type pixel points to the number of tenth-type pixel points is equal to the ratio of the number of eleventh-type pixel points to the number of twelfth-type pixel points.
For example, suppose the third image to be fused contains R, G two channels, where the fourth channel is the G channel and the fifth channel is the R channel. If, in the third image to be fused, the ratio of the number of R-channel pixel points to the number of G-channel pixel points is 1/2, then the ratio of the number of ninth-type pixel points to the number of tenth-type pixel points is 2; correspondingly, in the fourth image to be fused, the ratio of the number of R-channel pixel points to the number of G-channel pixel points is 1/2, that is, the ratio of the number of eleventh-type pixel points to the number of twelfth-type pixel points is 2.
For another example, suppose the third image to be fused includes R, G, B three channels, where the fourth channel is the G channel and the fifth channel is the R channel or the B channel. Suppose that, in the third image to be fused, the ratio of R-channel to G-channel pixel points is 1/2 and the ratio of B-channel to G-channel pixel points is 1/2. If the fifth channel is the R channel, the ratio of ninth-type to tenth-type pixel points is 2, and in the fourth image to be fused the ratio of R-channel to G-channel pixel points is likewise 1/2, that is, the ratio of eleventh-type to twelfth-type pixel points is 2. If the fifth channel is the B channel, the ratio of ninth-type to tenth-type pixel points is 2, and in the fourth image to be fused the ratio of B-channel to G-channel pixel points is 1/2, that is, the ratio of eleventh-type to twelfth-type pixel points is 2.
For another example, suppose the third image to be fused includes R, Y, B three channels, where the fourth channel is the Y channel and the fifth channel is the R channel or the B channel. Suppose that, in the third image to be fused, the ratio of R-channel to Y-channel pixel points is 1/2 and the ratio of B-channel to Y-channel pixel points is 1/4. If the fifth channel is the R channel, the ratio of ninth-type to tenth-type pixel points is 2, and in the fourth image to be fused the ratio of R-channel to Y-channel pixel points is 1/2, that is, the ratio of eleventh-type to twelfth-type pixel points is 2. If the fifth channel is the B channel, the ratio of ninth-type to tenth-type pixel points is 4, and in the fourth image to be fused the ratio of B-channel to Y-channel pixel points is 1/4, that is, the ratio of eleventh-type to twelfth-type pixel points is 4.
It should be understood that, when the third and fourth images to be fused each contain three or more channels, the ratio between the numbers of pixel points of different channels in the third image to be fused equals the ratio between the numbers of pixel points of the corresponding channels in the fourth image to be fused. For example, suppose both images contain R, G, B three channels, where the fourth channel is the G channel. Let the numbers of R-, G- and B-channel pixel points be r1, g1 and b1 in the third image to be fused and r2, g2 and b2 in the fourth image to be fused. Then r1 : g1 : b1 = r2 : g2 : b2.
Further, in the present embodiment, during the second downsampling of the twelfth image, the pixel values of the fourteenth image are determined from the pixel values within each second downsampling window. Each second downsampling window contains filling pixel points in addition to a sixth-type pixel point, and the pixel values of the fourteenth image are determined from both. The loss of resolution of the sixth-type pixel points is thereby reduced, yielding the fourteenth image. Similarly, the second downsampling of the thirteenth image reduces the loss of resolution of its sixth-type pixel points, yielding the fifteenth image. In this way, fusing the fourteenth image with the fifteenth image yields a sixteenth image of higher resolution, improving the fusion effect of the fourth channel of the third image to be fused and the fourth channel of the fourth image to be fused.
Optionally, in this embodiment of the application, the pixel points of the third image to be fused and of the fourth image to be fused are both arranged in diagonal arrays, where the meaning of a diagonal array is as follows:
It is assumed that the third image to be fused includes a ninth pixel point, a tenth pixel point, an eleventh pixel point and a twelfth pixel point, with coordinates (p, q), (p+1, q), (p, q+1) and (p+1, q+1) respectively, where p and q are positive integers. If the ninth pixel point is a sixth-type pixel point, then the tenth and eleventh pixel points are not sixth-type pixel points and the twelfth pixel point is a sixth-type pixel point; if the ninth pixel point is not a sixth-type pixel point, then the tenth and eleventh pixel points are both sixth-type pixel points and the twelfth pixel point is not a sixth-type pixel point.
For example, as shown in fig. 24a, in a case where the ninth pixel is the sixth pixel, neither the tenth pixel nor the eleventh pixel is the sixth pixel, and the twelfth pixel is the sixth pixel. As shown in fig. 24b, in the case that the ninth pixel is not the sixth pixel, the tenth pixel and the eleventh pixel are both the sixth pixel, and the twelfth pixel is not the sixth pixel.
As can be seen from fig. 24a and 24b, in the case where the pixels are arranged in a diagonal array, the arrangement of the pixels in the image is as shown in fig. 25a or as shown in fig. 25 b.
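A diagonal array in this sense can be described by a parity test on coordinates. This small sketch (a hypothetical helper, not from the patent) marks which positions hold sixth-type pixel points for the two arrangements of fig. 25a and fig. 25b:

```python
import numpy as np

def diagonal_array_mask(height: int, width: int, variant: int = 0) -> np.ndarray:
    # variant 0: positions with even (row + col) hold sixth-type pixel points
    # (one of the two layouts); variant 1 is the complementary layout.
    rows, cols = np.indices((height, width))
    return (rows + cols) % 2 == (variant % 2)

mask_a = diagonal_array_mask(4, 4, variant=0)
mask_b = diagonal_array_mask(4, 4, variant=1)  # the other arrangement
```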
In the above example, the third image to be fused is taken as an example, and the diagonal array is explained, and similarly, the arrangement manner of the pixel points in the fourth image to be fused can also be referred to in the above example, fig. 25a, and fig. 25 b.
In the embodiment of the present application, although the arrangement manner of the pixel points in the third image to be fused and the arrangement manner of the pixel points in the fourth image to be fused are both diagonal arrays, the arrangement manner of the pixel points in the third image to be fused and the arrangement manner of the pixel points in the fourth image to be fused may be the same or different.
It is assumed that the fourth image to be fused includes: a thirteenth pixel point, a fourteenth pixel point, a fifteenth pixel point and a sixteenth pixel point. The coordinates of the thirteenth pixel point are (p, q), the coordinates of the fourteenth pixel point are (p +1, q), the coordinates of the fifteenth pixel point are (p, q +1), and the coordinates of the sixteenth pixel point are (p +1, q +1), wherein p and q are positive integers.
It should be understood that the coordinates of the ninth, tenth, eleventh and twelfth pixel points are given in the pixel coordinate system of the third image to be fused, and the coordinates of the thirteenth, fourteenth, fifteenth and sixteenth pixel points are given in the pixel coordinate system of the fourth image to be fused.
That is to say, the position of the ninth pixel point in the third image to be fused is the same as the position of the thirteenth pixel point in the fourth image to be fused, the position of the tenth pixel point in the third image to be fused is the same as the position of the fourteenth pixel point in the fourth image to be fused, the position of the eleventh pixel point in the third image to be fused is the same as the position of the fifteenth pixel point in the fourth image to be fused, and the position of the twelfth pixel point in the third image to be fused is the same as the position of the sixteenth pixel point in the fourth image to be fused.
If the ninth and thirteenth pixel points are both sixth-type pixel points, then the tenth, eleventh, fourteenth and fifteenth pixel points are not sixth-type pixel points, while the twelfth and sixteenth pixel points are. If the ninth and thirteenth pixel points are both not sixth-type pixel points, then the tenth, eleventh, fourteenth and fifteenth pixel points are all sixth-type pixel points, while the twelfth and sixteenth pixel points are not. In this case, the arrangement of pixel points in the third image to be fused is the same as that in the fourth image to be fused; for example, both are as shown in fig. 25a, or both are as shown in fig. 25b.
If the ninth pixel point is a sixth-type pixel point and the thirteenth is not, then the tenth, eleventh and sixteenth pixel points are not sixth-type pixel points, while the twelfth, fourteenth and fifteenth pixel points are. Conversely, if the ninth pixel point is not a sixth-type pixel point and the thirteenth is, then the tenth, eleventh and sixteenth pixel points are all sixth-type pixel points, while the twelfth, fourteenth and fifteenth pixel points are not. In this case, the arrangement of pixel points in the third image to be fused differs from that in the fourth image to be fused; for example, the third image to be fused is arranged as in fig. 25a and the fourth image to be fused as in fig. 25b, or vice versa.
Optionally, the arrangement mode of the pixel points in the third image to be fused and the arrangement mode of the pixel points in the fourth image to be fused are both bayer arrays.
Referring to fig. 26, fig. 26 is a flowchart illustrating a method for implementing step 2003 according to an embodiment of the present disclosure.
2601. Rotate the twelfth image by a fifth angle to obtain a seventeenth image, and rotate the thirteenth image by a sixth angle to obtain an eighteenth image.
In the embodiment of the present application, the fifth angle and the sixth angle are both odd multiples of 45 degrees. Assume that the fifth angle is J3 and the sixth angle is J4; then J3 and J4 satisfy:

J3 = r3 × 45 degrees, J4 = r4 × 45 degrees,

where r3 and r4 are both odd numbers.
For example, assume that clockwise rotation of the twelfth image gives a positive rotation angle and counterclockwise rotation a negative one. When r3 = 1, the fifth angle is 45 degrees, and the seventeenth image is obtained by rotating the twelfth image 45 degrees clockwise. When r3 = -1, the fifth angle is -45 degrees, and the twelfth image is rotated 45 degrees counterclockwise. When r3 = 3, the fifth angle is 135 degrees, and the twelfth image is rotated 135 degrees clockwise. When r3 = -5, the fifth angle is -225 degrees, and the twelfth image is rotated 225 degrees counterclockwise.
For another example, assume that counterclockwise rotation of the twelfth image gives a positive rotation angle and clockwise rotation a negative one. When r3 = 1, the fifth angle is 45 degrees, and the seventeenth image is obtained by rotating the twelfth image 45 degrees counterclockwise. When r3 = -1, the fifth angle is -45 degrees, and the twelfth image is rotated 45 degrees clockwise. When r3 = 3, the fifth angle is 135 degrees, and the twelfth image is rotated 135 degrees counterclockwise. When r3 = -5, the fifth angle is -225 degrees, and the twelfth image is rotated 225 degrees clockwise.
In one possible implementation, the image processing apparatus rotates the twelfth image by the fifth angle around the origin of the pixel coordinate system of the twelfth image. For example, if the pixel coordinate system of the twelfth image is xoy and its origin is o, the seventeenth image is obtained by rotating the twelfth image by the fifth angle around o.
In another possible implementation manner, the image processing apparatus rotates the twelfth image by a fifth angle, which may be to rotate the twelfth image by the fifth angle around a center of the twelfth image, where the center of the twelfth image is an intersection of two diagonal lines of the twelfth image. For example, the twelfth image shown in fig. 27 is rotated by 45 degrees around the center of the twelfth image, and the seventeenth image shown in fig. 28 is obtained.
In yet another possible implementation, the image processing apparatus rotates the twelfth image by the fifth angle around a coordinate axis of the pixel coordinate system of the twelfth image. For example, if the pixel coordinate system of the twelfth image is xoy and the abscissa axis is ox, the seventeenth image is obtained by rotating the twelfth image by the fifth angle around ox; if the ordinate axis is oy, the seventeenth image is obtained by rotating the twelfth image by the fifth angle around oy.
On the premise that the rotation angle is the fifth angle, the present application does not limit the manner of rotating the twelfth image. Similarly, on the premise that the rotation angle is the sixth angle, the present application does not limit the manner of rotating the thirteenth image.
Optionally, the sixth angle is coterminal with the fifth angle. The twelfth image and the thirteenth image are rotated under the same sign convention: if clockwise rotation of the twelfth image counts as positive and counterclockwise as negative, the same holds for the thirteenth image; if counterclockwise rotation of the twelfth image counts as positive and clockwise as negative, the same likewise holds for the thirteenth image.
In one possible implementation, the image processing apparatus rotates the thirteenth image by a sixth angle, which may be that the thirteenth image is rotated by a sixth angle around an origin of a pixel coordinate system of the thirteenth image, for example, the pixel coordinate system of the thirteenth image is xoy, and the origin of the pixel coordinate system is o. An eighteenth image is obtained by rotating the thirteenth image by a sixth angle around o.
In another possible implementation manner, the image processing apparatus rotates the thirteenth image by a sixth angle, which may be to rotate the thirteenth image by the sixth angle around the center of the thirteenth image, where the center of the thirteenth image is an intersection of two diagonal lines of the thirteenth image. For example, the center of the thirteenth image is o. An eighteenth image is obtained by rotating the thirteenth image by a sixth angle around o.
In yet another possible implementation manner, the image processing apparatus rotates the thirteenth image by a sixth angle, which may be to rotate the thirteenth image by the sixth angle around a coordinate axis of a pixel coordinate system of the thirteenth image. For example, the pixel coordinate system of the thirteenth image is xoy, and the abscissa axis of the pixel coordinate system is ox. An eighteenth image is obtained by rotating the thirteenth image by a sixth angle around ox. For another example, the pixel coordinate system of the thirteenth image is xoy, and the ordinate axis of the pixel coordinate system is oy. An eighteenth image is obtained by rotating the thirteenth image by a sixth angle around oy.
The present application does not limit the manner of the rotation of the thirteenth image on the premise that the rotation angle is the sixth angle.
2602. Enlarge the coordinate axis scale of the seventh pixel coordinate system by a factor of m to obtain an eighth pixel coordinate system, and enlarge the coordinate axis scale of the ninth pixel coordinate system by a factor of m to obtain a tenth pixel coordinate system.
In the embodiment of the present application, the seventh pixel coordinate system is the pixel coordinate system of the seventeenth image, and the ninth pixel coordinate system is the pixel coordinate system of the eighteenth image.
In the embodiment of the present application, m is a positive number.
the image processing apparatus obtains an eighth pixel coordinate system by enlarging both the abscissa axis scale and the ordinate axis scale of the seventh pixel coordinate system by m times.
For example, the eighth pixel coordinate system shown in fig. 29 is obtained by enlarging both the abscissa axis scale and the ordinate axis scale of the seventh pixel coordinate system (i.e., xoy) shown in fig. 28 by a factor of m.
Similarly, the image processing apparatus magnifies both the abscissa axis scale and the ordinate axis scale of the ninth pixel coordinate system by m times, and obtains the tenth pixel coordinate system.
2603. Determining the pixel value of each pixel point under the eighth pixel coordinate system according to the pixel value of the pixel point in the seventeenth image to obtain the fourteenth image, and determining the pixel value of each pixel point under the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain the fifteenth image.
Because the scale of the pixel coordinate system takes the pixel point as a unit, that is, the scale of the pixel coordinate system is the side length of one pixel point, under the condition that the scale of the pixel coordinate system of the image is changed, the area covered by the pixel point in the image is also correspondingly changed. The image processing device determines the pixel value of each pixel point under the eighth pixel coordinate system according to the pixel value of the pixel point in the seventeenth image to obtain a fourteenth image, and determines the pixel value of each pixel point under the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain a fifteenth image. Optionally, the image processing apparatus uses an average value of pixel values in an area covered by each pixel point in the eighth pixel coordinate system as a pixel value of the pixel point, and uses an average value of pixel values in an area covered by each pixel point in the tenth pixel coordinate system as a pixel value of the pixel point.
For example, the image processing apparatus determines the pixel values of the pixel points in the eighth pixel coordinate system (i.e., xoy) according to the pixel values of the pixel points in the seventeenth image shown in fig. 29, so as to obtain the fourteenth image shown in fig. 30. In fig. 29, the following regions consist entirely of tenth filling pixel points: the triangle regions ABW, DEC, GHF, HIJ, KLM, NPQ, RST and TUV. The pixel values in these regions are all an eleventh value; optionally, the eleventh value is 0. In fig. 30, pixel points D11, D14, D31 and D34 are all eleventh filling pixel points. The pixel value of an eleventh filling pixel point represents the brightness of green, that is, the eleventh filling pixel points are pixel points of the G channel, and their pixel values are all the tenth value. Optionally, the tenth value is 0.
Similarly, the image processing apparatus determines the pixel value of each pixel point in the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image, so as to obtain a fifteenth image.
In this embodiment, the seventeenth image is obtained by rotating the twelfth image, and the eighteenth image by rotating the thirteenth image. The fourteenth image is then obtained by adjusting the coordinate axis scale of the pixel coordinate system of the seventeenth image, and the fifteenth image by adjusting that of the eighteenth image. In this way, non-continuous images are converted into continuous images, which reduces the amount of data to be processed and improves the processing speed.
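The combined effect of the 45-degree rotation and the scale change is that the diagonal lattice of channel pixels becomes a dense grid. A hedged sketch of this remapping by direct index arithmetic follows; the index formulas are derived for a quincunx layout like fig. 22 and are an illustrative assumption, not the patent's prescribed procedure:

```python
import numpy as np

def quincunx_to_grid(mosaic: np.ndarray, fill: float = 0.0) -> np.ndarray:
    """Remap channel pixels lying on a diagonal (quincunx) lattice onto a
    dense grid, emulating rotate-by-45-degrees plus rescale."""
    h, w = mosaic.shape
    rows = (h + w - 2) // 2
    cols = (h + w) // 2
    out = np.full((rows, cols), fill, dtype=mosaic.dtype)
    for i in range(h):
        for j in range(w):
            if (i + j) % 2 == 1:          # a channel (e.g., G) pixel position
                u = (i + j) // 2          # rotated row index
                v = (j - i + h - 1) // 2  # rotated column index
                out[u, v] = mosaic[i, j]
    return out
```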
Referring to fig. 31, fig. 31 is a schematic flowchart illustrating another implementation method of step 2003 provided in this embodiment of the present application.
3101. A nineteenth image and a twentieth image are constructed.
In the embodiment of the present application, the nineteenth image includes a sixth type of pixel point in the third image to be fused, and the twentieth image includes a sixth type of pixel point in the fourth image to be fused. For example, assume that the fourth channel is a G channel. The third image to be fused includes: the device comprises a pixel point a, a pixel point b, a pixel point c and a pixel point d, wherein the pixel point a and the pixel point c belong to a G channel. The fourth image to be fused includes: the system comprises a pixel point e, a pixel point f, a pixel point G and a pixel point h, wherein the pixel point e and the pixel point G belong to a G channel. The nineteenth image includes: pixel a and pixel c, the twentieth image contains: pixel e and pixel g.
The size of the nineteenth image may be the same as or different from the size of the twentieth image. The nineteenth image may only include the sixth-type pixel points in the third image to be fused, or may include pixel points other than the sixth-type pixel points in the third image to be fused. The twentieth image may only include the sixth-type pixel points in the fourth image to be fused, or may include pixel points other than the sixth-type pixel points in the fourth image to be fused. The size of the nineteenth image and the size of the twentieth image are not limited in the present application.
For example, in the third image to be fused shown in fig. 32, the sixth-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, and pixel point G43. Based on the sixth-type pixel points in the third image to be fused shown in fig. 32, the image processing apparatus may construct the nineteenth image shown in fig. 33, the nineteenth image shown in fig. 34, or the nineteenth image shown in fig. 35. In the nineteenth image shown in fig. 35, the following pixel points are twelfth filling pixel points: pixel points P1, P2, P3, and P4; the pixel value of each twelfth filling pixel point is the thirteenth value. Optionally, the thirteenth value is 0.
In the same manner as the nineteenth image is constructed based on the sixth-type pixel points in the third image to be fused, the image processing apparatus can construct the twentieth image based on the sixth-type pixel points in the fourth image to be fused.
3102. The fourteenth image is obtained by dividing the pixel values in the nineteenth image by s, and the fifteenth image is obtained by dividing the pixel values in the twentieth image by s.
Since the second downsampling process changes the pixel values of the twelfth image, a pixel value in the twelfth image differs from the corresponding pixel value in the fourteenth image. Because the pixel values in the fourteenth image are determined from the pixel values within the second downsampling windows, and within any second downsampling window the area of the sixth-type pixel point and the area of the fifth filling pixel point are the same, the ratio of a pixel value in the twelfth image to the corresponding pixel value in the fourteenth image is determinate.
For example (example 1), assume that each pixel value in the fourteenth image is the mean of the pixel values within a second downsampling window. Taking fig. 24 and fig. 25 as an example, the pixel value of pixel point D12 is the mean of the pixel values within second downsampling window 1, and the pixel value of pixel point D21 is the mean of the pixel values within second downsampling window 3. Further, assume that pixel point G21 shown in fig. 24 has pixel value x1 and pixel point G41 has pixel value x2. In the fourteenth image shown in fig. 25, when the eighth value is 0, i.e., the pixel value of the seventh-type pixel points is 0, pixel point D12 has pixel value x1/2 and pixel point D21 has pixel value x2/2; at this time, the pixel value of G21 divided by the pixel value of D12 equals the pixel value of G41 divided by the pixel value of D21, which equals 2, i.e., the ratio of a pixel value in the twelfth image to the corresponding pixel value in the fourteenth image is 2. When the eighth value is 1, i.e., the pixel value of the seventh-type pixel points is 1, pixel point D12 has pixel value (x1+1)/2 and pixel point D21 has pixel value (x2+1)/2; at this time, the pixel value of G21 divided by the pixel value of D12 is 2x1/(x1+1), and the pixel value of G41 divided by the pixel value of D21 is 2x2/(x2+1), i.e., the ratio of a pixel value in the twelfth image to the corresponding pixel value in the fourteenth image depends on that pixel value.
In the embodiment of the present application, s characterizes the ratio between a pixel value in the twelfth image and the corresponding pixel value in the fourteenth image. Continuing example 1: when the eighth value is 0, s is 2; when the eighth value is 1, s is 2x/(x+1), where x is the pixel value in the twelfth image. The specific value of s can be adjusted according to actual requirements and is not limited in the present application.
Since s characterizes the ratio between a pixel value in the twelfth image and the corresponding pixel value in the fourteenth image, and the nineteenth image contains the sixth-type pixel points of the twelfth image, the image processing apparatus divides the pixel values in the nineteenth image by s, thereby obtaining a first intermediate image containing the pixel points of the fourteenth image, which is taken as the fourteenth image. Similarly, the image processing apparatus divides the pixel values in the twentieth image by s, thereby obtaining a second intermediate image containing the pixel points of the fifteenth image, which is taken as the fifteenth image.
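A rough sketch of step 3102, reading "dividing the pixel values by s" literally; s may be a scalar or, when it depends on the pixel value (the eighth-value-1 case of example 1), a per-pixel array. The names are illustrative:

```python
import numpy as np

def divide_by_s(img: np.ndarray, s) -> np.ndarray:
    # s is either a scalar (eighth value 0: s = 2) or an array computed
    # per pixel (eighth value 1: s = 2*x/(x + 1) for pixel value x;
    # zero-valued filling pixel points would need guarding in practice).
    return img / s

# e.g., with the eighth value equal to 1:
# x = nineteenth.astype(float)
# fourteenth = divide_by_s(x, 2 * x / (x + 1))
```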
For example, the third image to be fused shown in fig. 21a is the same as the third image to be fused shown in fig. 32. Extracting the fourth channel of the third image to be fused yields the twelfth image shown in fig. 21b, and applying the second downsampling process to the twelfth image (see fig. 22) yields the fourteenth image shown in fig. 23. If the nineteenth image shown in fig. 33 is constructed from the sixth-type pixel points of the third image to be fused and its pixel values are divided by s, the first intermediate image shown in fig. 36 is obtained. If the nineteenth image shown in fig. 34 is constructed from the sixth-type pixel points of the third image to be fused and its pixel values are divided by s, the first intermediate image shown in fig. 37 is obtained. Evidently, the position of a pixel point in the first intermediate image shown in fig. 36 or fig. 37 may differ from its position in the fourteenth image shown in fig. 23 (it should be understood that the fourteenth image shown in fig. 23 is obtained based on the twelfth image shown in fig. 22, and the twelfth image shown in fig. 22 is obtained by extracting the G channel of the third image to be fused shown in fig. 32, which is why fig. 23 is compared with fig. 36 and fig. 37 here). For instance, pixel point D12 is at position (1, 3) and pixel point D33 at position (3, 2) in the fourteenth image shown in fig. 23; in the first intermediate image shown in fig. 36, D12 is at (1, 2) and D33 at (2, 3); in the first intermediate image shown in fig. 37, D12 is at (1, 2) and D33 at (4, 1).
When the first intermediate image shown in fig. 36 is taken as the fourteenth image, the pixel point corresponding to D12 in the nineteenth image shown in fig. 33 is pixel point G21; that is, pixel point G21 in the third image to be fused is the pixel point corresponding to D12. Likewise, the pixel point corresponding to D33 in the nineteenth image shown in fig. 33 is pixel point G34; that is, pixel point G34 in the third image to be fused is the pixel point corresponding to D33. When the first intermediate image shown in fig. 37 is taken as the fourteenth image, the pixel point corresponding to D12 in the nineteenth image shown in fig. 34 is pixel point G21, and the pixel point corresponding to D33 is pixel point G34.
As an alternative implementation, step 3101 performed by the image processing apparatus may include one of the following steps:
31. Arrange the at least one seventh-type pixel point whose centers belong to the same first diagonal straight line into one row of pixel points of an image, in ascending order of abscissa, to construct a twenty-first image; sort the rows in the twenty-first image to obtain the nineteenth image. Arrange the at least one eighth-type pixel point whose centers belong to the same second diagonal straight line into one row of pixel points of an image, in ascending order of abscissa, to construct a twenty-second image; sort the rows in the twenty-second image to obtain the twentieth image.
In this embodiment of the application, a diagonal of the third image to be fused is a first line segment. The first diagonal straight lines include: the straight line passing through the first line segment and the straight lines parallel to the first line segment. A diagonal of the fourth image to be fused is a second line segment. The second diagonal straight lines include: the straight line passing through the second line segment and the straight lines parallel to the second line segment. For example, suppose the two diagonals of the third image to be fused are line segment AC and line segment BD, and the two diagonals of the fourth image to be fused are line segment EG and line segment FH. The first diagonal straight lines then include the straight line through AC and the straight lines parallel to AC, or the straight line through BD and the straight lines parallel to BD; the second diagonal straight lines include the straight line through EG and the straight lines parallel to EG, or the straight line through FH and the straight lines parallel to FH.
In the embodiment of the present application, the seventh-type pixel points include the sixth-type pixel points in the third image to be fused, and the eighth-type pixel points include the sixth-type pixel points in the fourth image to be fused. For example, the third image to be fused contains pixel point a, pixel point b, pixel point c, and pixel point d, and the fourth image to be fused contains pixel point e, pixel point f, pixel point g, and pixel point h, where pixel point a, pixel point c, pixel point e, and pixel point g are pixel points of the G channel. When the fourth channel is the G channel, the seventh-type pixel points include pixel point a and pixel point c, and the eighth-type pixel points include pixel point e and pixel point g.
Because adjacent pixel points are correlated, having the nineteenth image preserve the positional relationships among the seventh-type pixel points of the twelfth image can improve the accuracy of image fusion. Since there is a rotation angle between the third image to be fused and the fourteenth image (or between their pixel coordinate systems), and this rotation angle is an odd multiple of 45 degrees, arranging the seventh-type pixel points whose centers belong to the same first diagonal straight line into one row of pixel points in ascending order of abscissa to construct the twenty-first image, and then sorting the rows of the twenty-first image, preserves the positional relationships among the seventh-type pixel points of the twelfth image, thereby yielding the nineteenth image.
For example (example 2), in the third image to be fused shown in fig. 38a, the seventh-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, and pixel point G43, and the two diagonals of the third image to be fused are line segment OG and line segment DJ. Suppose line segment OG is the first line segment; the first diagonal straight lines then include: straight line CE, straight line AF, straight line OG, straight line LH, and straight line KI. Because the only seventh-type pixel point whose center lies on straight line CE is pixel point G14, pixel point G14 forms one row of pixel points of the image (hereinafter the CE row pixel points). The seventh-type pixel points whose centers lie on straight line AF include pixel point G12, pixel point G23, and pixel point G34; they are arranged in ascending order of abscissa into one row (hereinafter the AF row pixel points). The seventh-type pixel points whose centers lie on straight line LH include pixel point G21, pixel point G32, and pixel point G43; they are arranged in ascending order of abscissa into one row (hereinafter the LH row pixel points). Because the only seventh-type pixel point whose center lies on straight line KI is pixel point G41, pixel point G41 forms one row (hereinafter the KI row pixel points). Based on the CE row pixel points, the AF row pixel points, the LH row pixel points, and the KI row pixel points, the twenty-first image shown in fig. 38b is constructed. In the twenty-first image shown in fig. 38b, pixel points P1, P2, P3, and P4 are all thirteenth filling pixel points, and the pixel value of a thirteenth filling pixel point is the fourteenth value. Optionally, the fourteenth value is 0.
It should be understood that, in the twenty-first image shown in fig. 38b, the arrangement order of the CE row pixel points, the AF row pixel points, the LH row pixel points, and the KI row pixel points is only an example and should not be construed as limiting the present application. In practice, these rows may be arranged in any order.
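A sketch of the row construction in step 31, under the assumption (read off example 2, where G12, G23, and G34 share one first diagonal straight line) that pixel centres (row, col) on the same line satisfy row − col = constant; the helper names are illustrative:

```python
def build_diagonal_rows(centres):
    # group pixel-point centres lying on the same first diagonal
    # straight line (row - col constant for lines parallel to AF)
    groups = {}
    for r, c in centres:
        groups.setdefault(r - c, []).append((r, c))
    # each group becomes one row of the twenty-first image,
    # sorted by ascending abscissa
    return [sorted(g, key=lambda p: p[1]) for g in groups.values()]
```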
Sorting the rows in the twenty-first image shown in fig. 38b may result in the nineteenth image shown in fig. 39a or the nineteenth image shown in fig. 39 b.
In one implementation of sorting the rows in the twenty-first image, a first mean value of the ordinates of the pixel points in each row of the twenty-first image is determined, and a first index is obtained from the first mean value. The rows in the twenty-first image are then arranged in descending order of the first index to obtain the nineteenth image.
The first mean value is the mean of the ordinates of all pixel points in a given row of the twenty-first image. The first index is obtained from the first mean value and is positively or negatively correlated with it.
Assume that the first mean value is A1 and the first index is t1. In one implementation of obtaining the first index from the first mean value, A1 and t1 satisfy:
t1 = a × A1 … formula (3)
where a is a non-zero real number.
In another implementation of obtaining the first index from the first mean value, A1 and t1 satisfy:
t1 = a × A1 + b … formula (4)
where a is a non-zero real number and b is a real number.
In yet another implementation of obtaining the first index from the first mean value, A1 and t1 satisfy:
t1 = a / A1 … formula (5)
where a is a non-zero real number.
Continuing example 2: the CE row pixel points include pixel point G14, so the first mean value of the CE row pixel points is the ordinate of G14, i.e., 1. The AF row pixel points include G12, G23, and G34; the mean of their ordinates is 2, so the first mean value of the AF row pixel points is 2. The LH row pixel points include G21, G32, and G43; the mean of their ordinates is 3, so the first mean value of the LH row pixel points is 3. The KI row pixel points include G41, so the first mean value of the KI row pixel points is the ordinate of G41, i.e., 4. Suppose the first mean value is positively correlated with the first index: since the first mean value of the CE row < that of the AF row < that of the LH row < that of the KI row, the first index of the CE row < that of the AF row < that of the LH row < that of the KI row, and arranging the rows of the twenty-first image in descending order of the first index yields the nineteenth image shown in fig. 39a. Suppose instead the first mean value is negatively correlated with the first index: then the first index of the CE row > that of the AF row > that of the LH row > that of the KI row, and arranging the rows in descending order of the first index yields the nineteenth image shown in fig. 39b.
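A sketch of this row sorting, using the affine index of formulas (3) and (4) on the mean ordinate of each row and a descending arrangement (parameter names follow the formulas; a positive a gives the positive-correlation case):

```python
import numpy as np

def sort_rows_by_first_index(rows, a=1.0, b=0.0):
    # first mean A1: mean ordinate of the pixel points in a row;
    # first index t1 = a * A1 + b; rows arranged in descending t1
    means = [np.mean([r for r, _ in row]) for row in rows]
    order = np.argsort([a * A1 + b for A1 in means])[::-1]
    return [rows[i] for i in order]
```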
Similarly, a second mean value of the ordinates of each row of pixel points in the twenty-second image is determined, and a second index is obtained from the second mean value. The rows in the twenty-second image are arranged in descending order of the second index to obtain the twentieth image.
The second mean value is the mean of the ordinates of all pixel points in a given row of the twenty-second image. The second index is obtained from the second mean value. When the first mean value is positively correlated with the first index, the second mean value is positively correlated with the second index; when the first mean value is negatively correlated with the first index, the second mean value is negatively correlated with the second index.
Assume that the second mean value is A2 and the second index is t2. In one implementation of obtaining the second index from the second mean value, A2 and t2 satisfy:
t2 = f × A2 … formula (6)
where f is a non-zero real number.
In another implementation of obtaining the second index from the second mean value, A2 and t2 satisfy:
t2 = f × A2 + d … formula (7)
where f is a non-zero real number and d is a real number.
In yet another implementation of obtaining the second index from the second mean value, A2 and t2 satisfy:
t2 = f / A2 … formula (8)
where f is a non-zero real number.
The above is an example of obtaining the nineteenth image by sorting the rows of the twenty-first image; similarly, the second mean value and the second index may be determined and the rows of the twenty-second image sorted by the second index to obtain the twentieth image, which will not be described again here.
In another implementation of sorting the rows in the twenty-first image, the rows in the twenty-first image are arranged in the first order to obtain the nineteenth image.
In this implementation, the diagonals of the third image to be fused further include a third line segment, which is different from the first line segment. The first straight line is determined from the third line segment: when the third line segment passes through the centers of seventh-type pixel points, the first straight line is the straight line on which the third line segment lies; when the third line segment does not pass through the centers of seventh-type pixel points, the first straight line is, among the straight lines passing through the centers of seventh-type pixel points, the straight line that is parallel to the third line segment and closest to it.
For example, suppose that in the third image to be fused shown in fig. 38a, line segment JD is the third line segment. Since line segment JD passes through the centers of seventh-type pixel points, the first straight line is straight line JD.
For another example, suppose that in the third image to be fused shown in fig. 38a, line segment OG is the third line segment. Line segment OG does not pass through the centers of seventh-type pixel points, so the first straight line is, among the straight lines passing through the centers of seventh-type pixel points, the one parallel to OG and closest to OG. The straight lines parallel to OG that pass through the centers of seventh-type pixel points include: straight line CE, straight line AF, straight line LH, and straight line KI, of which the closest to OG are straight line AF and straight line LH. Therefore, the first straight line is straight line AF or straight line LH.
A pixel point whose center belongs to the first straight line is called a first index pixel point; each row of pixel points in the twenty-first image contains one first index pixel point. Taking the descending order of the ordinates of the first index pixel points as the first order, or the ascending order of the ordinates of the first index pixel points as the first order, the rows in the twenty-first image are arranged in the first order to obtain the nineteenth image.
Continuing example 2: in the third image to be fused shown in fig. 38a, line segment JD is the third line segment. Since line segment JD passes through the centers of seventh-type pixel points, the first straight line is straight line JD. The pixel points whose centers belong to the first straight line are pixel point G14, pixel point G23, pixel point G32, and pixel point G41; that is, the first index pixel points are G14, G23, G32, and G41. Suppose the first order is the descending order of the ordinates of the first index pixel points: since the ordinate of G14 < the ordinate of G23 < the ordinate of G32 < the ordinate of G41, arranging the rows of the twenty-first image in the first order yields the image shown in fig. 39a. Suppose instead the first order is the ascending order of the ordinates of the first index pixel points: arranging the rows of the twenty-first image in the first order yields the image shown in fig. 39b.
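A sketch of the first-order arrangement, assuming each row carries exactly one first index pixel point; the line test shown for JD (row + col = 5) is read off the coordinates of G14, G23, G32, and G41 in example 2 and is illustrative only:

```python
def sort_rows_by_index_pixel(rows, on_first_line, descending=True):
    # order the rows of the twenty-first image by the ordinate of the
    # single first index pixel point each row contains
    def pivot_ordinate(row):
        return next(p for p in row if on_first_line(p))[0]
    return sorted(rows, key=pivot_ordinate, reverse=descending)

# illustrative test for straight line JD in example 2:
# on_jd = lambda p: p[0] + p[1] == 5
```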
Similarly, the rows in the twenty-second image are arranged in the second order to obtain the twentieth image.
In this embodiment of the application, when the first order is the descending order of the ordinates of the first index pixel points, the second order is the descending order of the ordinates of the second index pixel points; when the first order is the ascending order of the ordinates of the first index pixel points, the second order is the ascending order of the ordinates of the second index pixel points. A second index pixel point is a pixel point whose center belongs to the second straight line. The diagonals of the fourth image to be fused further include a fourth line segment, which is different from the second line segment. When the fourth line segment passes through the centers of eighth-type pixel points, the second straight line is the straight line on which the fourth line segment lies; when the fourth line segment does not pass through the centers of eighth-type pixel points, the second straight line is, among the straight lines passing through the centers of eighth-type pixel points, the straight line that is parallel to the fourth line segment and closest to it.
The above is an example of obtaining the nineteenth image by sorting the rows of the twenty-first image according to the first order; similarly, the second order may be determined and the rows of the twenty-second image sorted according to it to obtain the twentieth image, which will not be described again here.
32. Arrange the at least one seventh-type pixel point whose centers belong to the same first diagonal straight line into one column of pixel points of an image, in ascending order of abscissa, to construct a twenty-third image; sort the columns in the twenty-third image to obtain the nineteenth image. Arrange the at least one eighth-type pixel point whose centers belong to the same second diagonal straight line into one column of pixel points of an image, in ascending order of abscissa, to construct a twenty-fourth image; sort the columns in the twenty-fourth image to obtain the twentieth image.
In this step, the meanings of the first diagonal straight line, the second diagonal straight line, the seventh-type pixel points, and the eighth-type pixel points are as in step 31 and will not be described again here.
Because adjacent pixel points are correlated, having the nineteenth image preserve the positional relationships among the seventh-type pixel points of the twelfth image can improve the accuracy of image fusion. Since there is a rotation angle between the third image to be fused and the fourteenth image (or between their pixel coordinate systems), and this rotation angle is an odd multiple of 45 degrees, arranging the seventh-type pixel points whose centers belong to the same first diagonal straight line into one column of pixel points in ascending order of abscissa to construct the twenty-third image, and then sorting the columns of the twenty-third image, preserves the positional relationships among the seventh-type pixel points of the twelfth image, thereby yielding the nineteenth image.
For example (example 3), in the third image to be fused shown in fig. 40a, the seventh-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, and pixel point G43, and the two diagonals of the third image to be fused are line segment OG and line segment DJ. Suppose line segment OG is the first line segment; the first diagonal straight lines then include: straight line CE, straight line AF, straight line OG, straight line LH, and straight line KI. Because the only seventh-type pixel point whose center lies on straight line CE is pixel point G14, pixel point G14 forms one column of pixel points of the image (hereinafter the CE column pixel points). The seventh-type pixel points whose centers lie on straight line AF include pixel point G12, pixel point G23, and pixel point G34; they are arranged in ascending order of abscissa into one column (hereinafter the AF column pixel points). The seventh-type pixel points whose centers lie on straight line LH include pixel point G21, pixel point G32, and pixel point G43; they are arranged in ascending order of abscissa into one column (hereinafter the LH column pixel points). Because the only seventh-type pixel point whose center lies on straight line KI is pixel point G41, pixel point G41 forms one column (hereinafter the KI column pixel points). Based on the CE column pixel points, the AF column pixel points, the LH column pixel points, and the KI column pixel points, the twenty-third image shown in fig. 40b is constructed. In the twenty-third image shown in fig. 40b, pixel points P1, P2, P3, and P4 are all fourteenth filling pixel points, and the pixel value of a fourteenth filling pixel point is the fifteenth value. Optionally, the fifteenth value is 0. It should be understood that the arrangement order of the CE column pixel points, the AF column pixel points, the LH column pixel points, and the KI column pixel points in fig. 40b is only an example and should not be construed as limiting the present application; in practice, these columns may be arranged in any order.
Sorting the columns in the twenty-third image shown in fig. 40b may result in the nineteenth image shown in fig. 41a or the nineteenth image shown in fig. 41 b.
In one implementation of sorting the columns in the twenty-third image, a third mean value of the ordinates of the pixel points in each column of the twenty-third image is determined, and a third index is obtained from the third mean value. The columns in the twenty-third image are then arranged in descending order of the third index to obtain the nineteenth image.
The third mean value is the mean of the ordinates of all pixel points in a given column of the twenty-third image. The third index is obtained from the third mean value and is positively or negatively correlated with it.
Assume that the third mean value is A3 and the third index is t3. In one implementation of obtaining the third index from the third mean value, A3 and t3 satisfy:
t3 = d × A3 … formula (9)
where d is a non-zero real number.
In another implementation of obtaining the third index from the third mean value, A3 and t3 satisfy:
t3 = d × A3 + l … formula (10)
where d is a non-zero real number and l is a real number.
In yet another implementation of obtaining the third index from the third mean value, A3 and t3 satisfy:
t3 = d / A3 … formula (11)
where d is a non-zero real number.
Continuing example 3: the CE column pixel points include pixel point G14, so the third mean value of the CE column pixel points is the ordinate of G14, i.e., 1. The AF column pixel points include G12, G23, and G34; the mean of their ordinates is 2, so the third mean value of the AF column pixel points is 2. The LH column pixel points include G21, G32, and G43; the mean of their ordinates is 3, so the third mean value of the LH column pixel points is 3. The KI column pixel points include G41, so the third mean value of the KI column pixel points is the ordinate of G41, i.e., 4. Suppose the third mean value is positively correlated with the third index: since the third mean value of the CE column < that of the AF column < that of the LH column < that of the KI column, the third index of the CE column < that of the AF column < that of the LH column < that of the KI column, and arranging the columns of the twenty-third image in descending order of the third index yields the nineteenth image shown in fig. 41a. Suppose instead the third mean value is negatively correlated with the third index: then the third index of the CE column > that of the AF column > that of the LH column > that of the KI column, and arranging the columns in descending order of the third index yields the nineteenth image shown in fig. 41b.
Similarly, a fourth mean value of the ordinates of each column of pixel points in the twenty-fourth image is determined, and a fourth index is obtained from the fourth mean value. The columns in the twenty-fourth image are arranged in descending order of the fourth index to obtain the twentieth image.
The fourth mean value is the mean of the ordinates of all pixel points in a given column of the twenty-fourth image. The fourth index is obtained from the fourth mean value. When the third mean value is positively correlated with the third index, the fourth mean value is positively correlated with the fourth index; when the third mean value is negatively correlated with the third index, the fourth mean value is negatively correlated with the fourth index.
Assume that the fourth mean value is A4 and the fourth index is t4. In one implementation of obtaining the fourth index from the fourth mean value, A4 and t4 satisfy:
t4 = e × A4 … formula (12)
where e is a non-zero real number.
In another implementation of obtaining the fourth index from the fourth mean value, A4 and t4 satisfy:
t4 = e × A4 + w … formula (13)
where e is a non-zero real number and w is a real number.
In yet another implementation of obtaining the fourth index from the fourth mean value, A4 and t4 satisfy:
t4 = e / A4 … formula (14)
where e is a non-zero real number.
The above is an example of obtaining the nineteenth image by sorting the columns of the twenty-third image; similarly, the fourth mean value and the fourth index may be determined and the columns of the twenty-fourth image sorted by the fourth index to obtain the twentieth image, which will not be described again here.
In another implementation of ordering columns in the twenty-third image, the columns in the twenty-third image are arranged in a third order, resulting in the nineteenth image described above.
In this implementation, the diagonals of the third image to be fused further include a third line segment, which is different from the first line segment. The third straight line is determined from the third line segment: when the third line segment passes through the centers of seventh-type pixel points, the third straight line is the straight line on which the third line segment lies; when the third line segment does not pass through the centers of seventh-type pixel points, the third straight line is, among the straight lines passing through the centers of seventh-type pixel points, the straight line that is parallel to the third line segment and closest to it.
For example, suppose that in the third image to be fused shown in fig. 40a, line segment JD is the third line segment. Since line segment JD passes through the centers of seventh-type pixel points, the third straight line is straight line JD.
For another example, suppose that in the third image to be fused shown in fig. 40a, line segment OG is the third line segment. Line segment OG does not pass through the centers of seventh-type pixel points, so the third straight line is, among the straight lines passing through the centers of seventh-type pixel points, the one parallel to OG and closest to OG. The straight lines parallel to OG that pass through the centers of seventh-type pixel points include: straight line CE, straight line AF, straight line LH, and straight line KI, of which the closest to OG are straight line AF and straight line LH. Therefore, the third straight line is straight line AF or straight line LH.
A pixel point whose center belongs to the third straight line is called a third index pixel point; each column of pixel points in the twenty-third image contains one third index pixel point. Taking the descending order of the ordinates of the third index pixel points as the third order, or the ascending order of the ordinates of the third index pixel points as the third order, the columns in the twenty-third image are arranged in the third order to obtain the nineteenth image.
Continuing example 3: in the third image to be fused shown in fig. 40a, line segment JD is the third line segment. Since line segment JD passes through the centers of seventh-type pixel points, the third straight line is straight line JD. The pixel points whose centers belong to the third straight line are pixel point G14, pixel point G23, pixel point G32, and pixel point G41; that is, the third index pixel points are G14, G23, G32, and G41. Suppose the third order is the descending order of the ordinates of the third index pixel points: since the ordinate of G14 < the ordinate of G23 < the ordinate of G32 < the ordinate of G41, arranging the columns of the twenty-third image in the third order yields the image shown in fig. 41a. Suppose instead the third order is the ascending order of the ordinates of the third index pixel points: arranging the columns of the twenty-third image in the third order yields the image shown in fig. 41b.
Similarly, the columns in the twenty-fourth image are arranged in the fourth order to obtain the twentieth image.
In this embodiment of the application, when the third order is the descending order of the ordinates of the third index pixel points, the fourth order is the descending order of the ordinates of the fourth index pixel points; when the third order is the ascending order of the ordinates of the third index pixel points, the fourth order is the ascending order of the ordinates of the fourth index pixel points. A fourth index pixel point is a pixel point whose center belongs to the fourth straight line. When the fourth line segment passes through the centers of eighth-type pixel points, the fourth straight line is the straight line on which the fourth line segment lies; when the fourth line segment does not pass through the centers of eighth-type pixel points, the fourth straight line is, among the straight lines passing through the centers of eighth-type pixel points, the straight line that is parallel to the fourth line segment and closest to it.
The above is an example of obtaining the nineteenth image by sorting the columns of the twenty-third image according to the third order; similarly, the fourth order may be determined and the columns of the twenty-fourth image sorted according to it to obtain the twentieth image, which will not be described again here.
As an alternative embodiment, after obtaining the sixteenth image, the image processing apparatus executes the flowchart of the method shown in fig. 42.
4201. And extracting the fifth channel in the third image to be fused to obtain a twenty-fifth image.
In this embodiment of the application, the fifth channel is a channel of the third image to be fused different from the fourth channel. For example, the third image to be fused contains the two channels R and G; when the fourth channel is the G channel, the R channel is the fifth channel.
It should be understood that the fifth channel and the sixth channel may be the same or different. For example, when the third image to be fused and the fourth image to be fused each contain the two channels R and G, and the fourth channel is the G channel, the fifth channel and the sixth channel are both the R channel. For another example, when the third image to be fused and the fourth image to be fused each contain the three channels R, G, and B, and the fourth channel is the G channel, the fifth channel and the sixth channel may both be the R channel; or the fifth channel may be the R channel and the sixth channel the B channel; or the fifth channel may be the B channel and the sixth channel the R channel.
The fifth channel of the third image to be fused is extracted to obtain a twenty-fifth image. The size of the twenty-fifth image is the same as the size of the third image to be fused. In the twenty-fifth image, the pixel values of the fifth-channel pixel points are the same as the pixel values of the fifth-channel pixel points in the third image to be fused; all pixel points other than the fifth-channel pixel points are fifteenth filling pixel points, and their pixel values are all the sixteenth value. Optionally, the sixteenth value is 0.
For example, the third image to be fused shown in fig. 43a contains the three channels R, G, and B; extracting the R channel of the third image to be fused yields the twenty-fifth image shown in fig. 43b. The pixel value of pixel point G12 in the third image to be fused is the same as the pixel value of pixel point G12 in the twenty-fifth image, the pixel value of pixel point G14 in the third image to be fused is the same as the pixel value of pixel point G14 in the twenty-fifth image, ..., and the pixel value of pixel point G44 in the third image to be fused is the same as the pixel value of pixel point G44 in the twenty-fifth image. In the twenty-fifth image, the pixel values of pixel points M11, M13, M22, M24, M31, M33, M42, and M44 are all 0.
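A sketch of the channel extraction in step 4201, assuming the positions of the fifth-channel pixel points are given as a boolean mask (names illustrative):

```python
import numpy as np

def extract_channel(img: np.ndarray, channel_mask: np.ndarray,
                    sixteenth_value=0):
    # keep the pixel values at the fifth-channel positions; every other
    # position becomes a fifteenth filling pixel point
    out = np.full_like(img, sixteenth_value)
    out[channel_mask] = img[channel_mask]
    return out
```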
4202. And performing upsampling processing on the sixteenth image to obtain a twenty-sixth image.
In the embodiment of the present application, the upsampling magnification of an upsampling process equals the length of the image after upsampling divided by the length of the image before upsampling, which also equals the width of the image after upsampling divided by the width of the image before upsampling. For example, the size of the RAW image shown in fig. 44a is 2 × 2; upsampling it by a factor of 2 yields the 4 × 4 image shown in fig. 44b. In fig. 44b, the following pixel points are sixteenth filling pixel points: pixel points N12, N14, N21, N22, N23, N24, N32, N33, N41, N42, N43, and N44. The pixel values of the sixteenth filling pixel points are all the seventeenth value. Optionally, the seventeenth value is 0.
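A sketch of the sparse, zero-filled upsampling illustrated by figs. 44a and 44b (the exact positions of the retained samples are an assumption; here they are kept on the top-left grid):

```python
import numpy as np

def upsample_zero_fill(img: np.ndarray, k: int) -> np.ndarray:
    # k-times upsampling: original samples on a sparse grid, the
    # seventeenth value (assumed 0) everywhere else
    h, w = img.shape
    out = np.zeros((h * k, w * k), dtype=img.dtype)
    out[::k, ::k] = img
    return out
```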
The second upsampling process may be implemented as one of the following: bilinear interpolation, nearest-neighbor interpolation, higher-order interpolation, or deconvolution. The specific implementation of the upsampling process is not limited in the present application.
In the embodiment of the present application, the upsampling magnification of the second upsampling process and the downsampling magnification of the second downsampling process are reciprocals. For example, when the downsampling magnification of the second downsampling process is a, the upsampling magnification of the second upsampling process is 1/a. Therefore, by performing the second upsampling process on the sixteenth image, the size of the sixteenth image can be enlarged to the size of the third image to be fused, yielding the twenty-sixth image.
4203. And respectively taking the twenty-fifth image and the twenty-sixth image as a channel, and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image.
And taking the twenty-fifth image as an image of one channel, taking the twenty-sixth image as an image of one channel, and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image comprising two channels.
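Merging single-channel images into one multi-channel image, as in step 4203, can be sketched as a channel-wise stack (array names illustrative):

```python
import numpy as np

def merge_as_channels(*planes: np.ndarray) -> np.ndarray:
    # each single-channel image becomes one channel of the result
    return np.stack(planes, axis=-1)

# twenty_seventh = merge_as_channels(twenty_fifth, twenty_sixth)
```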
Since the human eye is more sensitive to the information contained in the fourth channel than to the information contained in the fifth channel, the fusion effect of the third image to be fused and the fourth image to be fused depends mainly on the fusion effect of the fourth channel in the third image to be fused and the fourth channel in the fourth image to be fused. By combining the twenty-fifth image and the twenty-sixth image, the fourth channel in the third image to be fused and the fourth channel in the fourth image to be fused can be fused, and further the third image to be fused and the fourth image to be fused can be fused.
Because the fourteenth image and the fifteenth image are fused based on the technical scheme provided by the embodiment of the application, the fusion effect can be improved, and therefore the fusion effect of the third image to be fused and the fourth image to be fused can be improved.
As an optional implementation, the fifth channel of the fourth image to be fused is extracted to obtain a sixteenth candidate image. The twenty-fifth image and the sixteenth candidate image are each taken as one channel and combined to obtain a seventeenth candidate image, realizing the fusion of the third image to be fused and the fourth image to be fused. Similarly, because the first image and the second image have been fused, combining the twenty-fifth image and the sixteenth candidate image can improve the fusion effect of the third image to be fused and the fourth image to be fused.
It should be understood that, when the number of channels included in the third image to be fused and the fourth image to be fused are both greater than 2, the third image to be fused and the fourth image to be fused can still be fused according to steps 4201 to 4203.
For example, when the third image to be fused and the fourth image to be fused both contain the channels R, G, and B, and the fourth channel is the G channel: the R channel of the third image to be fused is extracted to obtain an eighteenth candidate image, and the B channel of the third image to be fused is extracted to obtain a nineteenth candidate image. The eighteenth candidate image, the nineteenth candidate image, and the twenty-sixth image are each taken as one channel and combined to obtain a twentieth candidate image, which is taken as the image obtained by fusing the third image to be fused and the fourth image to be fused.
For another example, under the same assumptions, the R channel of the third image to be fused is extracted to obtain a twenty-first candidate image, and the B channel of the fourth image to be fused is extracted to obtain a twenty-second candidate image. The twenty-first candidate image, the twenty-second candidate image, and the twenty-sixth image are each taken as one channel and combined to obtain a twenty-third candidate image, which is taken as the image obtained by fusing the third image to be fused and the fourth image to be fused.
For another example, under the same assumptions, the R channel of the fourth image to be fused is extracted to obtain a twenty-fourth candidate image, and the B channel of the fourth image to be fused is extracted to obtain a twenty-fifth candidate image. The twenty-fourth candidate image, the twenty-fifth candidate image, and the twenty-sixth image are each taken as one channel and combined to obtain a twenty-sixth candidate image, which is taken as the image obtained by fusing the third image to be fused and the fourth image to be fused.
Referring to fig. 45, fig. 45 is a flowchart illustrating an implementation method of step 4202 when the fourteenth image and the fifteenth image are obtained through steps 2601 to 2603 according to the embodiment of the present application.
4501. And rotating the sixteenth image by a seventh angle to obtain a twenty-eighth image.
In the embodiment of the present application, the seventh angle is an angle with the same terminal side as the eighth angle, and the eighth angle is the opposite of the fifth angle. For example, assuming the fifth angle is 45 degrees, the eighth angle is -45 degrees, and the seventh angle is any angle with the same terminal side as -45 degrees.
In one possible implementation, rotating the sixteenth image by the seventh angle may be rotating the sixteenth image by the seventh angle around the origin of the pixel coordinate system of the sixteenth image. For example, if the pixel coordinate system of the sixteenth image is xoy with origin o, the twenty-eighth image is obtained by rotating the sixteenth image by the seventh angle around o.
In another possible implementation, rotating the sixteenth image by the seventh angle may be rotating the sixteenth image by the seventh angle around the center of the sixteenth image, where the center is the intersection of its two diagonals.
In yet another possible implementation, rotating the sixteenth image by the seventh angle may be rotating the sixteenth image by the seventh angle around a coordinate axis of its pixel coordinate system. For example, if the pixel coordinate system of the sixteenth image is xoy with abscissa axis ox, the twenty-eighth image is obtained by rotating the sixteenth image by the seventh angle around ox; if the ordinate axis is oy, the twenty-eighth image is obtained by rotating the sixteenth image by the seventh angle around oy.
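The first two implementations rotate about a point; a sketch of that coordinate transform for pixel-centre coordinates is given below (the axis-rotation variant, being a flip-like rotation out of the image plane, is omitted; names are illustrative):

```python
import numpy as np

def rotate_about_point(centres, angle_deg, origin=(0.0, 0.0)):
    # rotate pixel-centre coordinates by the seventh angle about a
    # chosen point (the origin o, or the image centre)
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    pts = np.asarray(centres, dtype=float) - origin
    return pts @ rot.T + np.asarray(origin)
```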
4502. And reducing the coordinate axis scale of the eleventh pixel coordinate system by m times to obtain a twelfth pixel coordinate system.
In the embodiment of the present application, the eleventh pixel coordinate system is the pixel coordinate system of the twenty-eighth image.
m in this step is the same as m in step 2602. The abscissa axis scale and the ordinate axis scale of the eleventh pixel coordinate system are both reduced by m times to obtain the twelfth pixel coordinate system.
4503. And determining the pixel value of each pixel point in the twelfth pixel coordinate system according to the pixel value of the pixel point in the twenty-eighth image to obtain the twenty-sixth image.
As described above, when the scale of the pixel coordinate system of the image is changed, the area covered by the pixel points in the image is also changed accordingly. And the image processing device determines the pixel value of each pixel point in the twelfth pixel coordinate system according to the pixel value of the pixel point in the twenty-eighth image to obtain a twenty-sixth image.
In a possible implementation manner, the image processing apparatus uses an average value of pixel values in an area covered by each pixel point in the twelfth pixel coordinate system as a pixel value of a pixel point in the twenty-sixth image.
In another possible implementation manner, the pixel value of each pixel point in the twenty-sixth image is determined by:
for a pixel point whose center coincides with the center of a sixth-type pixel point, the pixel value of that sixth-type pixel point is taken as the pixel value of the pixel point;
for a pixel point whose center does not coincide with the center of any sixth-type pixel point, the pixel value of the pixel point is taken as the eighteenth value. Optionally, the eighteenth value is 0.
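A sketch of this centre-matching rule (dictionary-based exact matching of centre coordinates is an implementation choice, and the names are illustrative):

```python
import numpy as np

def assign_by_centre_match(new_centres, source_centres, source_values,
                           eighteenth_value=0.0):
    # copy the value of the sixth-type pixel point whose centre
    # coincides with the new pixel point's centre, else use the
    # eighteenth value
    lookup = {tuple(c): v for c, v in zip(source_centres, source_values)}
    return np.array([lookup.get(tuple(c), eighteenth_value)
                     for c in new_centres])
```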
For example, rotating the sixteenth image shown in fig. 46a counterclockwise by 45 degrees yields the twenty-eighth image shown in fig. 46b. Reducing the coordinate axis scale of the pixel coordinate system of the twenty-eighth image shown in fig. 46b by m times yields the twenty-sixth image shown in fig. 46c. In the twenty-sixth image shown in fig. 46c, the centers of the following pixel points do not coincide with the center of any sixth-type pixel point: pixel points N11, N13, N22, N24, N31, N33, N42, and N44; their pixel values are all the eighteenth value.
In fig. 46b and fig. 46c, the center of pixel point D13 coincides with the center of pixel point G12, the center of D24 with the center of G14, the center of D12 with the center of G21, the center of D23 with the center of G23, the center of D22 with the center of G32, the center of D33 with the center of G34, the center of D21 with the center of G41, and the center of D32 with the center of G43. Therefore, the pixel value of D13 equals that of G12, the pixel value of D24 equals that of G14, the pixel value of D12 equals that of G21, the pixel value of D23 equals that of G23, the pixel value of D22 equals that of G32, the pixel value of D33 equals that of G34, the pixel value of D21 equals that of G41, and the pixel value of D32 equals that of G43.
In this implementation, the sixteenth image is rotated to obtain the twenty-eighth image, and the coordinate axis scale of the pixel coordinate system of the twenty-eighth image is adjusted to obtain the twenty-sixth image, which reduces the amount of data to be processed and increases processing speed.
As an alternative embodiment, based on the above, before performing step 4203, the image processing apparatus performs the following steps:
41. and extracting the fifth channel in the fourth image to be fused to obtain a twenty-ninth image.
The fifth channel in this step is the same as the fifth channel in step 4201. The fifth channel of the fourth image to be fused is extracted to obtain a twenty-ninth image. The size of the twenty-ninth image is the same as the size of the fourth image to be fused. In the twenty-ninth image, the pixel values of the fifth-channel pixel points are the same as those of the fifth-channel pixel points in the fourth image to be fused; all other pixel points are seventeenth filling pixel points, and their pixel values are all the nineteenth value. Optionally, the nineteenth value is 0.
42. And carrying out fusion processing on the twenty-fifth image and the twenty-ninth image to obtain a thirtieth image.
By fusing the twenty-fifth image and the twenty-ninth image, the fifth channel of the third image to be fused and the fifth channel of the fourth image to be fused can be fused to obtain the thirtieth image.
After obtaining the thirtieth image, step 4203 executed by the image processing apparatus specifically includes the steps of:
51. Taking the twenty-sixth image and the thirtieth image each as one channel, and merging the twenty-sixth image and the thirtieth image to obtain the twenty-seventh image.
The twenty-sixth image is taken as the image of one channel and the thirtieth image as the image of another channel, and the two are combined. In this way, the fourth channels of the third and fourth images to be fused and the fifth channels of the third and fourth images to be fused are both fused, yielding the twenty-seventh image, which is the image obtained by fusing the third image to be fused with the fourth image to be fused. The fusion effect of the third image to be fused and the fourth image to be fused can thereby be further improved.
It should be understood that, under the condition that the number of channels included in the third image to be fused and the fourth image to be fused is greater than 2, each channel in the third image to be fused and each channel in the fourth image to be fused can be extracted respectively, the corresponding channels are fused, and all fused images are merged, so as to improve the fusion effect of the third image to be fused and the fourth image to be fused.
For example, when the third image to be fused and the fourth image to be fused both include R, G and B channels, the fourth channel is the G channel and the fifth channel is the R channel, the R channel in the third image to be fused is extracted to obtain the twenty-fifth image, the R channel in the fourth image to be fused is extracted to obtain the twenty-ninth image, the B channel in the third image to be fused is extracted to obtain a twenty-seventh candidate image, and the B channel in the fourth image to be fused is extracted to obtain a twenty-eighth candidate image. The twenty-fifth image and the twenty-ninth image are fused to obtain the thirtieth image, and the twenty-seventh candidate image and the twenty-eighth candidate image are fused to obtain a twenty-ninth candidate image. The twenty-sixth image, the thirtieth image and the twenty-ninth candidate image are then each taken as one channel and merged to obtain a thirtieth candidate image, which is taken as the image obtained by fusing the third image to be fused and the fourth image to be fused.
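The R/G/B flow just described can be condensed into a short Python sketch. The `fuse` helper here is a stand-in (simple averaging) for whatever per-channel fusion the embodiment applies, and the channel order is an assumption:

```python
import numpy as np

def fuse_multichannel(img_a, img_b, fused_g):
    """Per-channel fusion followed by a merge, as in the R/G/B example above.

    img_a, img_b : HxWx3 images (channel order R, G, B assumed).
    fused_g      : the already-fused G channel (e.g. the upsampled result
                   of the diagonal-channel pipeline), shape HxW.
    """
    def fuse(a, b):
        # Placeholder fusion: plain averaging, purely for illustration.
        return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(a.dtype)

    fused_r = fuse(img_a[..., 0], img_b[..., 0])  # R channels fused pairwise
    fused_b = fuse(img_a[..., 2], img_b[..., 2])  # B channels fused pairwise
    # Merge the three fused planes, each treated as one channel of the result.
    return np.stack([fused_r, fused_g, fused_b], axis=-1)
```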
It should be understood that, in the drawings of the embodiments of the present application, the third image to be fused includes the three channels R, G and B and the fourth channel is the G channel; in practical applications, however, the three channels included in the third image to be fused may not be R, G and B, and the fourth channel may not be the G channel. The drawings provided in the embodiments of the present application are only examples and should not be construed as limiting the present application.
As an alternative embodiment, before executing step 2004, the image processing apparatus further executes the steps of: and aligning the fourteenth image with the fifteenth image to obtain a second aligned image.
The image processing device aligns the fourteenth image with the fifteenth image to obtain a second aligned image, which reduces the displacement difference between pixel points of the same name in the fourteenth image and the fifteenth image, where two pixel points of the same name correspond to the same physical point. Optionally, the image processing device may align the fourteenth image with the fifteenth image using one of the following methods: SIFT, HOG, ORB, or the Sobel operator.
The image processing device performs the second downsampling processing on the twelfth image to obtain the fourteenth image and the first downsampling processing on the thirteenth image to obtain the fifteenth image, thereby converting non-continuous images into continuous images, on which image alignment processing can be performed. Because the ratio of the area of a sixth-type pixel point to the area of the second downsampling window is greater than 0.25, that is, the downsampling ratio of the second downsampling processing is greater than 0.5, aligning the fourteenth image with the fifteenth image can improve the alignment effect between the fourth channel of the third image to be fused and the fourth channel of the fourth image to be fused.
After obtaining the second aligned image, the image processing apparatus performs the following steps in executing step 2004: and performing fusion processing on the fifteenth image and the second aligned image to obtain the sixteenth image.
The image processing device can improve the fusion effect by performing fusion processing on the fifteenth image and the second aligned image, and further obtain the sixteenth image.
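As an illustration of this align-then-fuse flow, the following Python sketch aligns one continuous image to another with ORB features (one of the methods listed above) and then fuses by simple averaging. The OpenCV calls are standard, but the feature count, the number of matches kept, and the averaging fusion are assumptions made for the example, not the embodiment's prescribed parameters:

```python
import cv2
import numpy as np

def align_then_fuse(ref, mov):
    """Align `mov` to `ref` with ORB features, then fuse by averaging.

    ref, mov : 8-bit single-channel images (continuous images in the text).
    Averaging stands in for the embodiment's fusion step; SIFT or another
    of the listed methods could replace ORB here.
    """
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(mov, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Keep the strongest matches between the two descriptor sets.
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robustly estimate the warp that maps `mov` onto `ref`.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
    return cv2.addWeighted(ref, 0.5, aligned, 0.5, 0)
```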
Based on the technical scheme provided by the above embodiment, the embodiment of the application also provides a possible application scenario.
With the popularization of mobile phones and the improvement of their photographing functions, more and more people use mobile phones to take photos. However, for various reasons, the quality of images captured by a mobile phone may be poor, for example: image blur, improper exposure, and the like. Therefore, when the quality of an image captured by a mobile phone is poor, the image needs to be processed to improve its quality, and image fusion processing is one such technique. With the technical solutions provided by the embodiments of the present application, the fusion accuracy of the images to be fused can be improved, thereby improving the effect of the image fusion processing.
For example, when the user presses the photographing shutter key, the mobile phone captures an image A and an image B within a short time. Based on the technical solutions provided by the embodiments of the present application, the mobile phone processes image A and image B, adjusting the position of at least one pixel point in image A so as to fuse image A to image B and obtain an image C. The mobile phone then performs fusion processing on image B and image C to obtain an image D, and presents image D to the user.
It will be understood by those skilled in the art that the methods of the above specific embodiments are merely exemplary and should not be construed as limiting the scope of the claims.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 47, fig. 47 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, in which the apparatus 1 includes: a first acquiring unit 11, a first processing unit 12, a second processing unit 13, wherein:
the first obtaining unit 11 is configured to obtain at least two images to be fused, where the at least two images to be fused include: the image fusion method comprises the steps that a first image to be fused and a second image to be fused are obtained, wherein the first image to be fused and the second image to be fused both comprise first type pixel points;
a first processing unit 12, configured to perform downsampling processing on the first image to be fused to obtain a first image, and perform downsampling processing on the second image to be fused to obtain a second image, where the first image and the second image are both continuous images, the first image and the second image both include the first-type pixel points, the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than a first threshold, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than the first threshold;
And the second processing unit 13 is configured to perform fusion processing on the first image and the second image to obtain a third image.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
rotating the first image to be fused by a first angle to obtain a fourth image, and rotating the second image to be fused by a second angle to obtain a fifth image, wherein the first angle and the second angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a first pixel coordinate system by n times to obtain a second pixel coordinate system, and magnifying the coordinate axis scale of a third pixel coordinate system by n times to obtain a fourth pixel coordinate system, where the first pixel coordinate system is the pixel coordinate system of the fourth image and the third pixel coordinate system is the pixel coordinate system of the fifth image;
determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel value of the pixel point in the fourth image to obtain the first image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the fifth image to obtain the second image.
With reference to any embodiment of the present application, the first type of pixel belongs to a first channel, the first image to be fused further includes a second channel different from the first channel, and the image processing apparatus 1 further includes:
The first extraction unit 14 is configured to extract the second channel in the first image to be fused to obtain a sixth image;
a third processing unit 15, configured to perform upsampling processing on the third image to obtain a seventh image, where a size of the seventh image is the same as that of the first image to be fused;
a first merging unit 16, configured to take the sixth image and the seventh image as one channel respectively, and merge the sixth image and the seventh image to obtain an eighth image.
In combination with any embodiment of the present application, the third processing unit 15 is configured to:
rotating the third image by a third angle to obtain a ninth image, where the third angle is an angle co-terminal with a fourth angle, and the fourth angle is the opposite of the first angle;
reducing the coordinate axis scale of a fifth pixel coordinate system by the factor of n to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the ninth image;
and determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the ninth image to obtain the seventh image.
With reference to any embodiment of the present application, the first extracting unit 14 is further configured to extract the second channel in the second image to be fused to obtain a tenth image before the sixth image and the seventh image are respectively used as one channel and the sixth image and the seventh image are combined to obtain an eighth image;
the second processing unit 13 is configured to perform fusion processing on the sixth image and the tenth image to obtain an eleventh image;
the first merging unit 16 is configured to:
and respectively taking the seventh image and the eleventh image as a channel, and combining the seventh image and the eleventh image to obtain the eighth image.
With reference to any embodiment of the present application, the first image to be fused further includes a third channel different from the first channel;
the ratio of the number of the second-type pixels to the number of the third-type pixels is equal to the ratio of the number of the fourth-type pixels to the number of the fifth-type pixels, wherein the second-type pixels comprise the first-type pixels in the first image to be fused, the third-type pixels comprise pixels belonging to the third channel in the first image to be fused, the fourth-type pixels comprise the first-type pixels in the second image to be fused, and the fifth-type pixels comprise pixels belonging to the third channel in the second image to be fused.
With reference to any one of the embodiments of the present application, the image processing apparatus 1 further includes:
a first aligning unit 17, configured to align the first image with the second image to obtain a first aligned image before performing fusion processing on the first image and the second image to obtain a third image;
the second processing unit 13 is configured to:
and performing fusion processing on the second image and the first aligned image to obtain the third image.
With reference to any embodiment of the present application, the first image to be fused includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, and the second image to be fused includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point;
the coordinates of the first pixel point are (i, j), the coordinates of the second pixel point are (i+1, j), the coordinates of the third pixel point are (i, j+1), and the coordinates of the fourth pixel point are (i+1, j+1); the coordinates of the fifth pixel point are (i, j), the coordinates of the sixth pixel point are (i+1, j), the coordinates of the seventh pixel point are (i, j+1), and the coordinates of the eighth pixel point are (i+1, j+1), where i and j are both positive integers;
in the case where the first pixel point and the fifth pixel point are both first-type pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are not first-type pixel points, and the fourth pixel point and the eighth pixel point are both first-type pixel points; in the case where the first pixel point and the fifth pixel point are not first-type pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are all first-type pixel points, and the fourth pixel point and the eighth pixel point are not first-type pixel points; or,
in the case where the first pixel point is a first-type pixel point and the fifth pixel point is not, the second pixel point, the third pixel point and the eighth pixel point are not first-type pixel points, and the fourth pixel point, the sixth pixel point and the seventh pixel point are all first-type pixel points; in the case where the first pixel point is not a first-type pixel point and the fifth pixel point is, the second pixel point, the third pixel point and the eighth pixel point are all first-type pixel points, and the fourth pixel point, the sixth pixel point and the seventh pixel point are not first-type pixel points.
In combination with any embodiment of the present application, the arrangement of the pixel points in the first image to be fused and the arrangement of the pixel points in the second image to be fused are both Bayer arrays.
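The two cases above can be read as constraints on the phase of a 2x2-periodic (Bayer-style) sampling lattice. A minimal Python sketch, assuming a checkerboard layout for the first-type pixel points, builds the masks for the two possible phases; the function name and the 'diag'/'anti' labels are illustrative only:

```python
import numpy as np

def first_type_mask(h, w, phase):
    """Mask of the first-type pixel points for a 2x2-periodic layout.

    phase selects which positions of each 2x2 cell carry the first-type
    pixel: 'diag' -> (0, 0) and (1, 1); 'anti' -> (0, 1) and (1, 0).
    """
    y, x = np.mgrid[0:h, 0:w]
    return (x + y) % 2 == (0 if phase == 'diag' else 1)

# First case of the text: both images to be fused share the same phase.
m_first = first_type_mask(4, 4, 'diag')
m_second = first_type_mask(4, 4, 'diag')
# Second case: the two images carry the complementary phases.
m_second_alt = first_type_mask(4, 4, 'anti')
```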
Referring to fig. 48, fig. 48 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application, where the apparatus 2 includes: a second obtaining unit 21, a second extracting unit 22, a fourth processing unit 23, and a fifth processing unit 24, wherein:
the second obtaining unit 21 is configured to obtain at least two images to be fused, where the at least two images to be fused include: the image fusion method comprises the steps that a third image to be fused and a fourth image to be fused are obtained, wherein the third image to be fused and the fourth image to be fused both comprise sixth-type pixel points;
a second extracting unit 22, configured to extract a fourth channel in the third image to be fused to obtain a twelfth image, and extract the fourth channel in the fourth image to be fused to obtain a thirteenth image, where the sixth type of pixel point belongs to the fourth channel;
a fourth processing unit 23, configured to perform downsampling on the twelfth image to obtain a fourteenth image, and perform the downsampling on the thirteenth image to obtain a fifteenth image, where the fourteenth image and the fifteenth image are both continuous images, both the fourteenth image and the fifteenth image include the sixth type of pixel point, a ratio of a resolution of the fourteenth image to a resolution of the third image to be fused is greater than a second threshold, and a ratio of a resolution of the fifteenth image to a resolution of the fourth image to be fused is greater than the second threshold;
A fifth processing unit 24, configured to perform fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
rotating the twelfth image by a fifth angle to obtain a seventeenth image, and rotating the thirteenth image by a sixth angle to obtain an eighteenth image, wherein the fifth angle and the sixth angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system, and magnifying the coordinate axis scale of a ninth pixel coordinate system by m times to obtain a tenth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the seventeenth image, and the ninth pixel coordinate system is the pixel coordinate system of the eighteenth image;
determining the pixel value of each pixel point under the eighth pixel coordinate system according to the pixel value of the pixel point in the seventeenth image to obtain the fourteenth image;
and determining the pixel value of each pixel point in the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain the fifteenth image.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
constructing a nineteenth image and a twentieth image, wherein the nineteenth image contains the sixth type of pixel points in the third image to be fused, and the twentieth image contains the sixth type of pixel points in the fourth image to be fused;
reducing the pixel values in the nineteenth image by a factor of s to obtain the fourteenth image;
and reducing the pixel values in the twentieth image by the factor of s to obtain the fifteenth image.
With reference to any embodiment of the present application, a diagonal line of the third image to be fused includes a first line segment, and a diagonal line of the fourth image to be fused includes a second line segment;
the fourth processing unit 23 is configured to:
arranging at least one seventh pixel point with the center belonging to the same first diagonal line into a line of pixel points of an image according to the ascending order of the abscissa, and constructing a twenty-first image, wherein the seventh pixel point comprises the sixth pixel point in the third image to be fused, and the first diagonal line comprises: a line passing through the first line segment, a line parallel to the first line segment;
Arranging at least one eighth type pixel point with the center belonging to the same second diagonal line into a line of pixel points of the image according to the sequence of the abscissa from small to large to construct a twenty-second image, wherein the eighth type pixel point comprises the sixth type pixel point in the fourth image to be fused, and the second diagonal line comprises: a straight line passing through the second line segment, a straight line parallel to the second line segment;
sequencing rows in the twenty-first image to obtain the nineteenth image, and sequencing rows in the twenty-second image to obtain the twentieth image; or the like, or, alternatively,
arranging at least one seventh type pixel point with the center belonging to the same first diagonal line into a row of pixel points of an image according to the ascending order of the abscissa, and constructing a twenty-third image, wherein the seventh type pixel point comprises the sixth type pixel point in the third image to be fused, and the first diagonal line comprises: a line passing through the first line segment, a line parallel to the first line segment;
arranging at least one eighth type pixel point with the center belonging to the same second diagonal line into a row of pixel points of an image according to the sequence of horizontal coordinates from small to large to construct a twenty-fourth image, wherein the eighth type pixel point comprises a sixth type pixel point in the fourth image to be fused, and the second diagonal line comprises: a straight line passing through the second line segment, a straight line parallel to the second line segment;
and sequencing columns in the twenty-third image to obtain the nineteenth image, and sequencing columns in the twenty-fourth image to obtain the twentieth image.
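A minimal Python sketch of the construction described above: samples of the diagonal channel whose centers lie on the same diagonal (x - y = const here, one of the two possible diagonal directions) are packed into one row, ordered by increasing abscissa. The zero padding of short rows near the corners is an assumption of this sketch, not something the text prescribes; the nineteenth and twentieth images then follow by ordering the rows (or columns) and reducing the pixel values by the factor s.

```python
import numpy as np

def diagonals_to_rows(mask, img):
    """Pack each diagonal of a checkerboard channel into one image row.

    mask : HxW bool array, True at the sixth-type (diagonal-channel) samples.
    img  : HxW array of pixel values.
    """
    ys, xs = np.nonzero(mask)
    diag = xs - ys                       # one constant per diagonal
    rows = []
    for d in np.unique(diag):
        sel = diag == d
        order = np.argsort(xs[sel])      # ascending abscissa within the row
        rows.append(img[ys[sel][order], xs[sel][order]])
    width = max(len(r) for r in rows)
    out = np.zeros((len(rows), width), dtype=img.dtype)
    for i, r in enumerate(rows):
        out[i, :len(r)] = r              # right-pad short rows with zeros
    return out
```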
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
determining a first mean value of the ordinates of the pixel points in each row of the twenty-first image, and obtaining a first index according to the first mean value, wherein the first mean value and the first index are positively or negatively correlated;
arranging the rows in the twenty-first image in descending order of the first index to obtain the nineteenth image;
determining a second mean value of the ordinates of the pixel points in each row of the twenty-second image, and obtaining a second index according to the second mean value, wherein the second mean value and the second index are positively or negatively correlated;
and arranging the rows in the twenty-second image in descending order of the second index to obtain the twentieth image.
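The row ordering just described reduces to sorting by a per-row statistic. A small Python sketch with illustrative names orders the packed rows by an index derived from the mean of their original ordinates; whether the index correlates positively or negatively with the mean is a parameter, mirroring the two options in the text:

```python
import numpy as np

def order_rows_by_mean_ordinate(rows, row_ordinates, positive=True):
    """Order packed rows by an index derived from their mean ordinate.

    rows          : list of 1-D arrays (one packed diagonal per row).
    row_ordinates : list of 1-D arrays with the original y coordinates
                    of the pixel points in each row.
    positive      : True  -> index positively correlated with the mean;
                    False -> negatively correlated.
    Rows are returned in descending order of the index, as in the text.
    """
    means = np.array([np.mean(y) for y in row_ordinates])
    index = means if positive else -means
    order = np.argsort(-index)           # descending index
    return [rows[i] for i in order]
```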
In combination with any embodiment of the present application, in the case that the first average is positively correlated with the first index, the second average is positively correlated with the second index;
and under the condition that the first average value and the first index are in negative correlation, the second average value and the second index are in negative correlation.
With reference to any embodiment of the present application, the diagonal line of the third image to be fused further includes a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further includes a fourth line segment different from the second line segment;
the fourth processing unit 23 is configured to:
arranging the rows in the twenty-first image according to a first order to obtain the nineteenth image, wherein the first order is the descending order of the ordinates of first index pixel points or the ascending order of the ordinates of first index pixel points, and the first index pixel points comprise pixel points whose centers belong to a first straight line; in the case where the third line segment passes through the center of a seventh-type pixel point, the first straight line is the straight line passing through the third line segment; in the case where the third line segment does not pass through the center of any seventh-type pixel point, the first straight line is, among the straight lines that are parallel to the third line segment and pass through the center of a seventh-type pixel point, the one closest to the third line segment;
arranging the rows in the twenty-second image according to a second order to obtain the twentieth image, wherein the second order is the descending order of the ordinates of second index pixel points or the ascending order of the ordinates of second index pixel points, and the second index pixel points comprise pixel points whose centers belong to a second straight line; in the case where the fourth line segment passes through the center of an eighth-type pixel point, the second straight line is the straight line passing through the fourth line segment; in the case where the fourth line segment does not pass through the center of any eighth-type pixel point, the second straight line is, among the straight lines that are parallel to the fourth line segment and pass through the center of an eighth-type pixel point, the one closest to the fourth line segment.
In combination with any embodiment of the present application, in a case that the first order is a descending order of the vertical coordinates of the first index pixel points, the second order is a descending order of the vertical coordinates of the second index pixel points;
and under the condition that the first sequence is the sequence from small to large of the vertical coordinates of the first index pixel points, the second sequence is the sequence from small to large of the vertical coordinates of the second index pixel points.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
determining a third mean value of the ordinates of the pixel points in each column of the twenty-third image, and obtaining a third index according to the third mean value, wherein the third mean value and the third index are positively or negatively correlated;
arranging the columns in the twenty-third image in descending order of the third index to obtain the nineteenth image;
determining a fourth mean value of the ordinates of the pixel points in each column of the twenty-fourth image, and obtaining a fourth index according to the fourth mean value, wherein the fourth mean value and the fourth index are positively or negatively correlated;
and arranging the columns in the twenty-fourth image in descending order of the fourth index to obtain the twentieth image.
In combination with any embodiment of the present application, in the case that the third mean value is positively correlated with the third index, the fourth mean value is positively correlated with the fourth index;
and under the condition that the third average value and the third index are in negative correlation, the fourth average value and the fourth index are in negative correlation.
With reference to any embodiment of the present application, the diagonal line of the third image to be fused further includes a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further includes a fourth line segment different from the second line segment;
the fourth processing unit 23 is configured to:
arranging the columns in the twenty-third image according to a third order to obtain the nineteenth image, wherein the third order is the descending order of the ordinates of third index pixel points or the ascending order of the ordinates of third index pixel points, and the third index pixel points comprise pixel points whose centers belong to a third straight line; in the case where the third line segment passes through the center of a seventh-type pixel point, the third straight line is the straight line passing through the third line segment; in the case where the third line segment does not pass through the center of any seventh-type pixel point, the third straight line is, among the straight lines that are parallel to the third line segment and pass through the center of a seventh-type pixel point, the one closest to the third line segment;
arranging the columns in the twenty-fourth image according to a fourth order to obtain the twentieth image, wherein the fourth order is the descending order of the ordinates of fourth index pixel points or the ascending order of the ordinates of fourth index pixel points, and the fourth index pixel points comprise pixel points whose centers belong to a fourth straight line; in the case where the fourth line segment passes through the center of an eighth-type pixel point, the fourth straight line is the straight line passing through the fourth line segment; in the case where the fourth line segment does not pass through the center of any eighth-type pixel point, the fourth straight line is, among the straight lines that are parallel to the fourth line segment and pass through the center of an eighth-type pixel point, the one closest to the fourth line segment.
In combination with any embodiment of the present application, in a case that the third order is a descending order of the vertical coordinates of the third index pixel points, the fourth order is a descending order of the vertical coordinates of the fourth index pixel points;
and under the condition that the third sequence is the sequence from small to large of the vertical coordinates of the third index pixel points, the fourth sequence is the sequence from small to large of the vertical coordinates of the fourth index pixel points.
With reference to any embodiment of the present application, the third image to be fused further includes a fifth channel different from the fourth channel, and the second extracting unit 22 is further configured to:
extracting the fifth channel in the third image to be fused to obtain a twenty-fifth image;
the image processing apparatus 2 further includes:
a sixth processing unit 25, configured to perform upsampling on the sixteenth image to obtain a twenty-sixth image, where a size of the twenty-sixth image is the same as that of the third image to be fused;
a second merging unit 26, configured to take the twenty-fifth image and the twenty-sixth image as one channel, respectively, and merge the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image.
In combination with any embodiment of the present application, the sixth processing unit 25 is configured to:
rotating the twenty-fifth image by a seventh angle to obtain a twenty-eighth image, where the seventh angle is an angle co-terminal with an eighth angle, and the eighth angle is the opposite of the fifth angle;
reducing the coordinate axis scale of an eleventh pixel coordinate system by the factor of m to obtain a twelfth pixel coordinate system, wherein the eleventh pixel coordinate system is the pixel coordinate system of the twenty-eighth image;
And determining the pixel value of each pixel point in the twelfth pixel coordinate system according to the pixel value of the pixel point in the twenty-eighth image to obtain the twenty-sixth image.
With reference to any embodiment of the present application, the second extracting unit 22 is further configured to, before the twenty-fifth image and the twenty-sixth image are respectively used as one channel and the twenty-fifth image and the twenty-sixth image are combined to obtain a twenty-seventh image, extract the fifth channel in the fourth image to be fused to obtain a twenty-ninth image;
the fifth processing unit 24 is further configured to perform fusion processing on the twenty-fifth image and the twenty-ninth image to obtain a thirtieth image;
the second merging unit 26 is further configured to:
and taking the twenty-sixth image and the thirtieth image each as one channel, and merging the twenty-sixth image and the thirtieth image to obtain the twenty-seventh image.
With reference to any embodiment of the present application, the third image to be fused further includes a sixth channel different from the fourth channel;
the ratio of the number of the ninth pixels to the number of the tenth pixels is equal to the ratio of the number of the eleventh pixels to the number of the twelfth pixels, wherein the ninth pixels include the sixth pixels in the third image to be fused, the tenth pixels include the pixels in the third image to be fused that belong to the sixth channel, the eleventh pixels include the sixth pixels in the fourth image to be fused, and the twelfth pixels include the pixels in the fourth image to be fused that belong to the sixth channel.
With reference to any one of the embodiments of the present application, the image processing apparatus 2 further includes:
a second aligning unit 27, configured to align the fourteenth image with the fifteenth image to obtain a second aligned image before performing fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image;
the fifth processing unit 24 is further configured to:
and performing fusion processing on the fifteenth image and the second aligned image to obtain the sixteenth image.
In combination with any embodiment of the present application, the third image to be fused includes: the ninth pixel point, the tenth pixel point, the eleventh pixel point and the twelfth pixel point, and the fourth image to be fused includes: a thirteenth pixel point, a fourteenth pixel point, a fifteenth pixel point and a sixteenth pixel point;
the coordinates of the ninth pixel point are (p, q), the coordinates of the tenth pixel point are (p+1, q), the coordinates of the eleventh pixel point are (p, q+1), and the coordinates of the twelfth pixel point are (p+1, q+1); the coordinates of the thirteenth pixel point are (p, q), the coordinates of the fourteenth pixel point are (p+1, q), the coordinates of the fifteenth pixel point are (p, q+1), and the coordinates of the sixteenth pixel point are (p+1, q+1), where p and q are both positive integers;
in the case where the ninth pixel point and the thirteenth pixel point are both sixth-type pixel points, the tenth pixel point, the eleventh pixel point, the fourteenth pixel point and the fifteenth pixel point are not sixth-type pixel points, and the twelfth pixel point and the sixteenth pixel point are both sixth-type pixel points; in the case where the ninth pixel point and the thirteenth pixel point are not sixth-type pixel points, the tenth pixel point, the eleventh pixel point, the fourteenth pixel point and the fifteenth pixel point are all sixth-type pixel points, and the twelfth pixel point and the sixteenth pixel point are not sixth-type pixel points; or,
in the case where the ninth pixel point is a sixth-type pixel point and the thirteenth pixel point is not, the tenth pixel point, the eleventh pixel point and the sixteenth pixel point are not sixth-type pixel points, and the twelfth pixel point, the fourteenth pixel point and the fifteenth pixel point are all sixth-type pixel points; in the case where the ninth pixel point is not a sixth-type pixel point and the thirteenth pixel point is, the tenth pixel point, the eleventh pixel point and the sixteenth pixel point are all sixth-type pixel points, and the twelfth pixel point, the fourteenth pixel point and the fifteenth pixel point are not sixth-type pixel points.
In combination with any embodiment of the present application, the arrangement of the pixel points in the third image to be fused and the arrangement of the pixel points in the fourth image to be fused are both Bayer arrays.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present application may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Fig. 49 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 3 includes a processor 31, a memory 32, an input device 33, and an output device 34. The processor 31, the memory 32, the input device 33 and the output device 34 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 31 may be one or more Graphics Processing Units (GPUs), and in the case that the processor 31 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 31 may be a processor group composed of a plurality of GPUs, and the plurality of processors are coupled to each other through one or more buses. Alternatively, the processor may be other types of processors, and the like, and the embodiments of the present application are not limited.
Memory 32 may be used to store computer program instructions, as well as various types of computer program code for executing the program code of aspects of the present application. Alternatively, the memory includes, but is not limited to, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), which is used for related instructions and data.
The input means 33 are for inputting data and/or signals and the output means 34 are for outputting data and/or signals. The input device 33 and the output device 34 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 32 may be used to store not only the relevant instructions, but also relevant data, for example, the memory 32 may be used to store the first image to be fused and the second image to be fused acquired through the input device 33, or the memory 32 may be used to store a third image obtained through the processor 31, and the like, and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 49 shows only a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.
Fig. 50 is a schematic diagram of a hardware structure of another image processing apparatus according to an embodiment of the present application. The image processing apparatus 4 includes a processor 41, a memory 42, an input device 43, and an output device 44. The processor 41, the memory 42, the input device 43 and the output device 44 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 41 may be one or more Graphics Processing Units (GPUs), and in the case that the processor 41 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 41 may be a processor group composed of a plurality of GPUs, coupled to each other through one or more buses. Alternatively, the processor may be another type of processor; the embodiments of the present application are not limited in this respect.
Memory 42 may be used to store computer program instructions, as well as various types of computer program code for executing the program code of aspects of the present application. Alternatively, the memory includes, but is not limited to, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), which is used for related instructions and data.
The input means 43 are for inputting data and/or signals and the output means 44 are for outputting data and/or signals. The input device 43 and the output device 44 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 42 may be used to store not only the relevant instructions, but also relevant data, for example, the memory 42 may be used to store the third image to be fused and the fourth image to be fused acquired through the input device 43, or the memory 42 may also be used to store the sixteenth image obtained through the processor 41, and the like, and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 50 shows only a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (34)

1. An image processing method, characterized in that the method comprises:
acquiring at least two images to be fused, wherein the at least two images to be fused comprise: the image fusion method comprises the steps that a first image to be fused and a second image to be fused are obtained, wherein the first image to be fused and the second image to be fused both comprise first type pixel points;
performing downsampling processing on the first image to be fused to obtain a first image, and performing downsampling processing on the second image to be fused to obtain a second image, wherein the first image and the second image are both continuous images, the first image and the second image both comprise the first-type pixel points, the ratio of the resolution of the first image to the resolution of the first image to be fused is greater than a first threshold, and the ratio of the resolution of the second image to the resolution of the second image to be fused is greater than the first threshold;
And carrying out fusion processing on the first image and the second image to obtain a third image.
2. The method according to claim 1, wherein the downsampling the first image to be fused to obtain a first image and the downsampling the second image to be fused to obtain a second image comprises:
rotating the first image to be fused by a first angle to obtain a fourth image, and rotating the second image to be fused by a second angle to obtain a fifth image, wherein the first angle and the second angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a first pixel coordinate system by n times to obtain a second pixel coordinate system, and magnifying the coordinate axis scale of a third pixel coordinate system by n times to obtain a fourth pixel coordinate system, wherein the first pixel coordinate system is the pixel coordinate system of the fourth image, and the third pixel coordinate system is the pixel coordinate system of the fifth image;
determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel value of the pixel point in the fourth image to obtain the first image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the fifth image to obtain the second image.
3. The method according to claim 1 or 2, wherein the first type of pixel belongs to a first channel, the first image to be fused further comprises a second channel different from the first channel, and the method further comprises:
extracting the second channel in the first image to be fused to obtain a sixth image;
performing upsampling processing on the third image to obtain a seventh image, wherein the size of the seventh image is the same as that of the first image to be fused;
and respectively taking the sixth image and the seventh image as a channel, and combining the sixth image and the seventh image to obtain an eighth image.
4. The method according to claim 3 when claim 3 is dependent on claim 2, wherein the performing upsampling processing on the third image to obtain a seventh image comprises:
rotating the third image by a third angle to obtain a ninth image, wherein the third angle is an angle co-terminal with a fourth angle, and the fourth angle is the opposite of the first angle;
reducing the coordinate axis scale of a fifth pixel coordinate system by the factor of n to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the ninth image;
And determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the ninth image to obtain the seventh image.
5. The method according to claim 3 or 4, wherein before said taking the sixth image and the seventh image as one channel respectively, and combining the sixth image and the seventh image to obtain an eighth image, the method further comprises:
extracting the second channel in the second image to be fused to obtain a tenth image;
performing fusion processing on the sixth image and the tenth image to obtain an eleventh image;
taking the sixth image and the seventh image as a channel respectively, and combining the sixth image and the seventh image to obtain an eighth image, including:
and respectively taking the seventh image and the eleventh image as a channel, and combining the seventh image and the eleventh image to obtain the eighth image.
6. The method according to any one of claims 1 to 5, characterized in that the first image to be fused further comprises a third channel different from the first channel;
the ratio of the number of the second-type pixels to the number of the third-type pixels is equal to the ratio of the number of the fourth-type pixels to the number of the fifth-type pixels, wherein the second-type pixels comprise the first-type pixels in the first image to be fused, the third-type pixels comprise pixels belonging to the third channel in the first image to be fused, the fourth-type pixels comprise the first-type pixels in the second image to be fused, and the fifth-type pixels comprise pixels belonging to the third channel in the second image to be fused.
7. The method according to any one of claims 1 to 6, wherein before the fusing the first image and the second image to obtain a third image, the method further comprises:
aligning the first image with the second image to obtain a first aligned image;
the fusing the first image and the second image to obtain a third image, including:
and performing fusion processing on the second image and the first aligned image to obtain the third image.
8. The method according to any one of claims 1 to 7, wherein the first image to be fused comprises a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, and the second image to be fused comprises a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point;
the coordinates of the first pixel point are (i, j), the coordinates of the second pixel point are (i+1, j), the coordinates of the third pixel point are (i, j+1), and the coordinates of the fourth pixel point are (i+1, j+1); the coordinates of the fifth pixel point are (i, j), the coordinates of the sixth pixel point are (i+1, j), the coordinates of the seventh pixel point are (i, j+1), and the coordinates of the eighth pixel point are (i+1, j+1), wherein i and j are both positive integers;
in the case where the first pixel point and the fifth pixel point are both first-type pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are not first-type pixel points, and the fourth pixel point and the eighth pixel point are both first-type pixel points; in the case where the first pixel point and the fifth pixel point are not first-type pixel points, the second pixel point, the third pixel point, the sixth pixel point and the seventh pixel point are all first-type pixel points, and the fourth pixel point and the eighth pixel point are not first-type pixel points; or,
in the case where the first pixel point is a first-type pixel point and the fifth pixel point is not, the second pixel point, the third pixel point and the eighth pixel point are not first-type pixel points, and the fourth pixel point, the sixth pixel point and the seventh pixel point are all first-type pixel points; in the case where the first pixel point is not a first-type pixel point and the fifth pixel point is, the second pixel point, the third pixel point and the eighth pixel point are all first-type pixel points, and the fourth pixel point, the sixth pixel point and the seventh pixel point are not first-type pixel points.
9. The method according to claim 8, wherein the arrangement of the pixel points in the first image to be fused and the arrangement of the pixel points in the second image to be fused are both Bayer arrays.
10. An image processing method, characterized in that the method comprises:
acquiring at least two images to be fused, wherein the at least two images to be fused comprise: the image fusion method comprises the steps that a third image to be fused and a fourth image to be fused are obtained, wherein the third image to be fused and the fourth image to be fused both comprise sixth-type pixel points;
extracting a fourth channel in the third image to be fused to obtain a twelfth image, and extracting the fourth channel in the fourth image to be fused to obtain a thirteenth image, wherein the sixth type of pixel points belong to the fourth channel;
performing downsampling processing on the twelfth image to obtain a fourteenth image, and performing the downsampling processing on the thirteenth image to obtain a fifteenth image, wherein the fourteenth image and the fifteenth image are both continuous images, the fourteenth image and the fifteenth image both include the sixth type of pixel points, a ratio of a resolution of the fourteenth image to a resolution of the third image to be fused is greater than a second threshold, and a ratio of the resolution of the fifteenth image to the resolution of the fourth image to be fused is greater than the second threshold;
And carrying out fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
11. The method according to claim 10, wherein the down-sampling the twelfth image to obtain a fourteenth image and the down-sampling the thirteenth image to obtain a fifteenth image comprises:
rotating the twelfth image by a fifth angle to obtain a seventeenth image, and rotating the thirteenth image by a sixth angle to obtain an eighteenth image, wherein the fifth angle and the sixth angle are both odd multiples of 45 degrees;
magnifying the coordinate axis scale of a seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system, and magnifying the coordinate axis scale of a ninth pixel coordinate system by m times to obtain a tenth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the seventeenth image, and the ninth pixel coordinate system is the pixel coordinate system of the eighteenth image;
determining the pixel value of each pixel point under the eighth pixel coordinate system according to the pixel value of the pixel point in the seventeenth image to obtain the fourteenth image;
and determining the pixel value of each pixel point in the tenth pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain the fifteenth image.
12. The method according to claim 10, wherein the down-sampling the twelfth image to obtain a fourteenth image and the down-sampling the thirteenth image to obtain a fifteenth image comprises:
constructing a nineteenth image and a twentieth image, wherein the nineteenth image contains the sixth type of pixel points in the third image to be fused, and the twentieth image contains the sixth type of pixel points in the fourth image to be fused;
reducing the pixel values in the nineteenth image by a factor of s to obtain the fourteenth image;
and reducing the pixel values in the twentieth image by the same factor of s to obtain the fifteenth image.
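Claim 12's downsampling touches only the sample values, not the geometry: once the sixth-type samples are packed into the nineteenth image (see claim 13), each value is divided by a factor s, which the claim leaves unspecified. A minimal sketch of that step alone:

    import numpy as np

    def scale_values(nineteenth, s):
        # only the pixel VALUES are divided by s; the packed layout is untouched
        return nineteenth.astype(np.float32) / float(s)

    nineteenth = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in packed image
    fourteenth = scale_values(nineteenth, s=2.0)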
13. The method according to claim 12, wherein the diagonal line of the third image to be fused comprises a first line segment, and the diagonal line of the fourth image to be fused comprises a second line segment;
the constructing the nineteenth and twentieth images includes:
arranging at least one seventh-type pixel point whose center belongs to the same first diagonal line into one row of pixel points of an image, in ascending order of the abscissa, to construct a twenty-first image, wherein the seventh-type pixel points comprise the sixth-type pixel points in the third image to be fused, and the first diagonal line comprises: a straight line passing through the first line segment, and a straight line parallel to the first line segment;
arranging at least one eighth-type pixel point whose center belongs to the same second diagonal line into one row of pixel points of an image, in ascending order of the abscissa, to construct a twenty-second image, wherein the eighth-type pixel points comprise the sixth-type pixel points in the fourth image to be fused, and the second diagonal line comprises: a straight line passing through the second line segment, and a straight line parallel to the second line segment;
sorting the rows in the twenty-first image to obtain the nineteenth image, and sorting the rows in the twenty-second image to obtain the twentieth image; or, alternatively,
arranging at least one seventh-type pixel point whose center belongs to the same first diagonal line into one column of pixel points of an image, in ascending order of the abscissa, to construct a twenty-third image, wherein the seventh-type pixel points comprise the sixth-type pixel points in the third image to be fused, and the first diagonal line comprises: a straight line passing through the first line segment, and a straight line parallel to the first line segment;
arranging at least one eighth-type pixel point whose center belongs to the same second diagonal line into one column of pixel points of an image, in ascending order of the abscissa, to construct a twenty-fourth image, wherein the eighth-type pixel points comprise the sixth-type pixel points in the fourth image to be fused, and the second diagonal line comprises: a straight line passing through the second line segment, and a straight line parallel to the second line segment;
and sorting the columns in the twenty-third image to obtain the nineteenth image, and sorting the columns in the twenty-fourth image to obtain the twentieth image.
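The construction in claim 13 can be pictured as gathering every diagonal of the quincunx into one row (or one column) of a new, dense image. The sketch below assumes the first-diagonal family is the set of anti-diagonals x + y = c and orders each diagonal's samples by ascending abscissa, zero-padding shorter diagonals; the claim leaves the handling of unequal diagonal lengths open.

    import numpy as np

    def pack_diagonals_into_rows(sparse):
        ys, xs = np.nonzero(sparse)
        diagonals = {}
        for x, y in zip(xs.tolist(), ys.tolist()):
            diagonals.setdefault(x + y, []).append((x, int(sparse[y, x])))
        rows = []
        for c in sorted(diagonals):                        # one diagonal -> one image row
            rows.append([v for _, v in sorted(diagonals[c])])  # ascending abscissa
        width = max(len(r) for r in rows)
        out = np.zeros((len(rows), width), sparse.dtype)
        for i, r in enumerate(rows):
            out[i, :len(r)] = r                            # shorter diagonals zero-padded
        return out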
14. The method according to claim 13, wherein the sorting the rows in the twenty-first image to obtain the nineteenth image and the sorting the rows in the twenty-second image to obtain the twentieth image comprises:
determining a first mean value of the ordinate of each row of pixel points in the twenty-first image, and obtaining a first index according to the first mean value, wherein the first mean value and the first index are in positive correlation or negative correlation;
arranging the rows in the twenty-first image according to the descending order of the first index to obtain the nineteenth image;
determining a second average value of the ordinate of each row of pixel points in the twenty-second image, and obtaining a second index according to the second average value, wherein the second average value and the second index are in positive correlation or negative correlation;
and arranging the rows in the twenty-second image according to the descending order of the second index to obtain the twentieth image.
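Claim 14 orders the rows of the twenty-first image by an index derived from the mean of the original ordinates of each row's pixel points; either a positively or a negatively correlated index is admissible. A sketch using the negated mean (a negative correlation) and emitting rows in descending index order:

    import numpy as np

    def order_rows(rows, row_ordinates):
        # rows: list of 1-D arrays (rows of the twenty-first image);
        # row_ordinates: original y-coordinates of the pixel points in each row
        first_mean = [float(np.mean(ys)) for ys in row_ordinates]
        first_index = [-m for m in first_mean]   # one admissible choice: negative correlation
        order = np.argsort(first_index)[::-1]    # descending first index
        return [rows[i] for i in order]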
15. The method according to claim 14, wherein in a case where the first mean value is positively correlated with the first index, the second mean value is positively correlated with the second index;
and in a case where the first mean value is negatively correlated with the first index, the second mean value is negatively correlated with the second index.
16. The method according to claim 13, wherein the diagonal line of the third image to be fused further comprises a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further comprises a fourth line segment different from the second line segment;
the sorting of the rows in the twenty-first image to obtain the nineteenth image and the sorting of the rows in the twenty-second image to obtain the twentieth image comprises:
arranging the rows in the twenty-first image according to a first order to obtain the nineteenth image, wherein the first order is either the descending order or the ascending order of the ordinates of first index pixel points, and the first index pixel points comprise pixel points whose centers belong to a first straight line; in a case where the third line segment passes through centers of the seventh-type pixel points, the first straight line is a straight line passing through the third line segment; in a case where the third line segment does not pass through centers of the seventh-type pixel points, the first straight line is, among the straight lines that are parallel to the third line segment and pass through centers of the seventh-type pixel points, the straight line closest to the third line segment;
arranging the rows in the twenty-second image according to a second order to obtain the twentieth image, wherein the second order is either the descending order or the ascending order of the ordinates of second index pixel points, and the second index pixel points comprise pixel points whose centers belong to a second straight line; in a case where the fourth line segment passes through centers of the eighth-type pixel points, the second straight line is a straight line passing through the fourth line segment; in a case where the fourth line segment does not pass through centers of the eighth-type pixel points, the second straight line is, among the straight lines that are parallel to the fourth line segment and pass through centers of the eighth-type pixel points, the straight line closest to the fourth line segment.
17. The method according to claim 16, wherein in a case where the first order is the descending order of the ordinates of the first index pixel points, the second order is the descending order of the ordinates of the second index pixel points;
and in a case where the first order is the ascending order of the ordinates of the first index pixel points, the second order is the ascending order of the ordinates of the second index pixel points.
18. The method according to claim 14, wherein the sorting the columns in the twenty-third image to obtain the nineteenth image and the sorting the columns in the twenty-fourth image to obtain the twentieth image comprises:
determining a third mean value of the ordinates of each column of pixel points in the twenty-third image, and obtaining a third index according to the third mean value, wherein the third mean value and the third index are in positive correlation or negative correlation;
arranging the columns in the twenty-third image according to the descending order of the third index to obtain the nineteenth image;
determining a fourth mean value of the ordinates of each column of pixel points in the twenty-fourth image, and obtaining a fourth index according to the fourth mean value, wherein the fourth mean value and the fourth index are in positive correlation or negative correlation;
and arranging the columns in the twenty-fourth image according to the descending order of the fourth index to obtain the twentieth image.
19. The method according to claim 18, wherein in a case where the third mean value is positively correlated with the third index, the fourth mean value is positively correlated with the fourth index;
and in a case where the third mean value is negatively correlated with the third index, the fourth mean value is negatively correlated with the fourth index.
20. The method according to claim 13, wherein the diagonal line of the third image to be fused further comprises a third line segment different from the first line segment, and the diagonal line of the fourth image to be fused further comprises a fourth line segment different from the second line segment;
wherein the sorting the columns in the twenty-third image to obtain the nineteenth image and the sorting the columns in the twenty-fourth image to obtain the twentieth image comprises:
arranging the columns in the twenty-third image according to a third order to obtain the nineteenth image, wherein the third order is either the descending order or the ascending order of the ordinates of third index pixel points, and the third index pixel points comprise pixel points whose centers belong to a third straight line; in a case where the third line segment passes through centers of the seventh-type pixel points, the third straight line is a straight line passing through the third line segment; in a case where the third line segment does not pass through centers of the seventh-type pixel points, the third straight line is, among the straight lines that are parallel to the third line segment and pass through centers of the seventh-type pixel points, the straight line closest to the third line segment;
arranging the columns in the twenty-fourth image according to a fourth order to obtain the twentieth image, wherein the fourth order is either the descending order or the ascending order of the ordinates of fourth index pixel points, and the fourth index pixel points comprise pixel points whose centers belong to a fourth straight line; in a case where the fourth line segment passes through centers of the eighth-type pixel points, the fourth straight line is a straight line passing through the fourth line segment; in a case where the fourth line segment does not pass through centers of the eighth-type pixel points, the fourth straight line is, among the straight lines that are parallel to the fourth line segment and pass through centers of the eighth-type pixel points, the straight line closest to the fourth line segment.
21. The method according to claim 20, wherein in a case where the third order is the descending order of the ordinates of the third index pixel points, the fourth order is the descending order of the ordinates of the fourth index pixel points;
and in a case where the third order is the ascending order of the ordinates of the third index pixel points, the fourth order is the ascending order of the ordinates of the fourth index pixel points.
22. The method according to any one of claims 10 to 21, wherein the third image to be fused further comprises a fifth channel different from the fourth channel, the method further comprising:
extracting the fifth channel in the third image to be fused to obtain a twenty-fifth image;
performing upsampling processing on the sixteenth image to obtain a twenty-sixth image, wherein the size of the twenty-sixth image is the same as that of the third image to be fused;
and taking the twenty-fifth image and the twenty-sixth image each as one channel, and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image.
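A toy rendering of claim 22, under added assumptions: the mosaic is RGGB, the fifth channel is taken to be red, the sixteenth image is upsampled back to the mosaic's size by nearest-neighbour repetition, and (an assumption the claim does not state) the twenty-fifth image is brought to the same size before the two planes are stacked as channels:

    import numpy as np

    def upsample_nearest(img, f=2):
        return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

    bayer = np.random.randint(0, 1024, (8, 8)).astype(np.uint16)      # third image to be fused
    twenty_fifth = bayer[0::2, 0::2]                    # assumed "fifth channel": red sites
    sixteenth = np.random.randint(0, 1024, (4, 4)).astype(np.uint16)  # stand-in fused plane
    twenty_sixth = upsample_nearest(sixteenth)          # same size as the mosaic, per claim 22
    twenty_seventh = np.stack([upsample_nearest(twenty_fifth), twenty_sixth], axis=-1)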
23. The method according to claim 22, wherein, in a case where claim 22 depends on claim 11, the performing upsampling processing on the sixteenth image to obtain a twenty-sixth image comprises:
rotating the sixteenth image by a seventh angle to obtain a twenty-eighth image, wherein the seventh angle is coterminal with an eighth angle (the two angles share the same terminal side), and the eighth angle and the fifth angle are opposite numbers;
reducing the coordinate axis scale of an eleventh pixel coordinate system by a factor of m to obtain a twelfth pixel coordinate system, wherein the eleventh pixel coordinate system is the pixel coordinate system of the twenty-eighth image;
and determining the pixel value of each pixel point in the twelfth pixel coordinate system according to the pixel value of the pixel point in the twenty-eighth image to obtain the twenty-sixth image.
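If the forward downsampling used the rotate-and-rescale construction sketched under claim 11, the upsampling of claim 23 is its inverse: rotate by a coterminal opposite angle and shrink the axis scale by the same factor m. At the coordinate level this sends each dense-grid sample back to its quincunx site (d_min below is the minimum of x - y recorded during the forward pass; it is always even on this lattice):

    import numpy as np

    def grid_to_quincunx(dense, out_shape, d_min):
        # inverse of u = (x + y) // 2, v = (x - y - d_min) // 2 from the forward pass
        out = np.zeros(out_shape, dense.dtype)
        vs, us = np.mgrid[0:dense.shape[0], 0:dense.shape[1]]
        xs = us + vs + d_min // 2
        ys = us - vs - d_min // 2
        keep = (xs >= 0) & (xs < out_shape[1]) & (ys >= 0) & (ys < out_shape[0])
        out[ys[keep], xs[keep]] = dense[vs[keep], us[keep]]
        return out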
24. The method according to claim 22 or 23, wherein before the combining the twenty-fifth image and the twenty-sixth image into a twenty-seventh image by taking the twenty-fifth image and the twenty-sixth image as one channel, respectively, the method further comprises:
extracting the fifth channel in the fourth image to be fused to obtain a twenty-ninth image;
performing fusion processing on the twenty-fifth image and the twenty-ninth image to obtain a thirtieth image;
the taking the twenty-fifth image and the twenty-sixth image each as one channel and combining the twenty-fifth image and the twenty-sixth image to obtain a twenty-seventh image comprises:
taking the thirtieth image and the twenty-sixth image each as one channel, and combining the thirtieth image and the twenty-sixth image to obtain the twenty-seventh image.
25. The method according to any one of claims 10 to 24, wherein the third image to be fused further comprises a sixth channel different from the fourth channel;
a ratio of the number of ninth-type pixel points to the number of tenth-type pixel points is equal to a ratio of the number of eleventh-type pixel points to the number of twelfth-type pixel points, wherein the ninth-type pixel points comprise the sixth-type pixel points in the third image to be fused, the tenth-type pixel points comprise the pixel points belonging to the sixth channel in the third image to be fused, the eleventh-type pixel points comprise the sixth-type pixel points in the fourth image to be fused, and the twelfth-type pixel points comprise the pixel points belonging to the sixth channel in the fourth image to be fused.
26. The method according to any one of claims 10 to 25, wherein before the fusing the fourteenth image and the fifteenth image to obtain the sixteenth image, the method further comprises:
aligning the fourteenth image with the fifteenth image, resulting in a second aligned image;
the fusion processing of the fourteenth image and the fifteenth image to obtain a sixteenth image includes:
and performing fusion processing on the fifteenth image and the second aligned image to obtain the sixteenth image.
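Claim 26 requires alignment before fusion but names no algorithm. As one illustrative stand-in only, the sketch below aligns the fourteenth image to the fifteenth with OpenCV's ECC estimator under a pure-translation model, then fuses by averaging; nothing here should be read as the claimed registration method.

    import cv2
    import numpy as np

    def align_then_fuse(fourteenth, fifteenth):
        a = fourteenth.astype(np.float32)          # ECC wants single-channel float32
        b = fifteenth.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
        _, warp = cv2.findTransformECC(b, a, warp, cv2.MOTION_TRANSLATION,
                                       criteria, None, 5)
        h, w = b.shape
        second_aligned = cv2.warpAffine(a, warp, (w, h),
                                        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        return 0.5 * b + 0.5 * second_aligned      # toy fusion of the aligned pair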
27. The method according to any one of claims 10 to 26, wherein the third image to be fused comprises: the ninth pixel point, the tenth pixel point, the eleventh pixel point and the twelfth pixel point, and the fourth image to be fused includes: a thirteenth pixel point, a fourteenth pixel point, a fifteenth pixel point and a sixteenth pixel point;
the coordinates of the ninth pixel point are (p, q), the coordinates of the tenth pixel point are (p +1, q), the coordinates of the eleventh pixel point are (p, q +1), and the coordinates of the twelfth pixel point are (p +1, q + 1); the coordinates of the thirteenth pixel point are (p, q), the coordinates of the fourteenth pixel point are (p +1, q), the coordinates of the fifteenth pixel point are (p, q +1), the coordinates of the sixteenth pixel point are (p +1, q +1), wherein both p and q are positive integers;
in a case where the ninth pixel point and the thirteenth pixel point are both sixth-type pixel points, the tenth pixel point, the eleventh pixel point, the fourteenth pixel point and the fifteenth pixel point are all not sixth-type pixel points, and the twelfth pixel point and the sixteenth pixel point are both sixth-type pixel points; or, alternatively,
in a case where the ninth pixel point is a sixth-type pixel point and the thirteenth pixel point is not a sixth-type pixel point, the tenth pixel point, the eleventh pixel point and the sixteenth pixel point are not sixth-type pixel points, while the twelfth pixel point, the fourteenth pixel point and the fifteenth pixel point are all sixth-type pixel points; and in a case where the ninth pixel point is not a sixth-type pixel point and the thirteenth pixel point is a sixth-type pixel point, the tenth pixel point, the eleventh pixel point and the sixteenth pixel point are all sixth-type pixel points, while the twelfth pixel point, the fourteenth pixel point and the fifteenth pixel point are not sixth-type pixel points.
28. The method according to claim 27, wherein the arrangement of the pixel points in the third image to be fused and the arrangement of the pixel points in the fourth image to be fused are both Bayer arrays.
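Claims 27-28 constrain how the sixth-type positions of the two mosaics may relate inside any 2x2 cell: either both mosaics share the same phase, or their sixth-type positions are exactly complementary. Assuming the sixth-type pixel points are the green samples of a Bayer array, a small NumPy check exhibits the complementary case:

    import numpy as np

    def green_mask(h, w, phase):
        yy, xx = np.mgrid[0:h, 0:w]
        return (xx + yy) % 2 == phase        # green quincunx; phase is 0 or 1

    a = green_mask(4, 4, 1)                  # RGGB-style: G where x + y is odd
    b = green_mask(4, 4, 0)                  # GRBG-style: G where x + y is even
    assert not np.any(a & b)                 # opposite phases: never green at the same site
    assert np.all(a | b)                     # together the two phases cover every position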
29. An image processing apparatus, characterized in that the apparatus comprises:
the image fusion system comprises a first acquisition unit and a second acquisition unit, wherein the first acquisition unit is used for acquiring at least two images to be fused, and the at least two images to be fused comprise: the image fusion method comprises the steps that a first image to be fused and a second image to be fused are obtained, wherein the first image to be fused and the second image to be fused both comprise first type pixel points;
the first processing unit is used for performing downsampling processing on the first image to be fused to obtain a first image and performing downsampling processing on the second image to be fused to obtain a second image, wherein the first image and the second image are both continuous images, the first image and the second image both comprise the first-type pixel points, a ratio of a resolution of the first image to a resolution of the first image to be fused is greater than a first threshold, and a ratio of a resolution of the second image to a resolution of the second image to be fused is greater than the first threshold;
and the second processing unit is used for carrying out fusion processing on the first image and the second image to obtain a third image.
30. An image processing apparatus, characterized in that the apparatus comprises:
the second acquiring unit is used for acquiring at least two images to be fused, wherein the at least two images to be fused comprise: the image fusion method comprises the steps that a third image to be fused and a fourth image to be fused are obtained, wherein the third image to be fused and the fourth image to be fused both comprise sixth-type pixel points;
the second extraction unit is used for extracting a fourth channel in the third image to be fused to obtain a twelfth image and extracting the fourth channel in the fourth image to be fused to obtain a thirteenth image, wherein the sixth-type pixel points belong to the fourth channel;
A fourth processing unit, configured to perform downsampling on the twelfth image to obtain a fourteenth image, and perform the downsampling on the thirteenth image to obtain a fifteenth image, where the fourteenth image and the fifteenth image are both continuous images, both the fourteenth image and the fifteenth image include the sixth type of pixel point, a ratio of a resolution of the fourteenth image to a resolution of the third image to be fused is greater than a second threshold, and a ratio of a resolution of the fifteenth image to a resolution of the fourth image to be fused is greater than the second threshold;
and the fifth processing unit is used for carrying out fusion processing on the fourteenth image and the fifteenth image to obtain a sixteenth image.
31. An electronic device, comprising: a processor and a memory, wherein the memory is configured to store computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 9.
32. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 9.
33. An electronic device, comprising: a processor and a memory, wherein the memory is configured to store computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 10 to 30.
34. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 10 to 30.
CN202010617432.2A 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium Withdrawn CN111815547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010617432.2A CN111815547A (en) 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN111815547A true CN111815547A (en) 2020-10-23

Family

ID=72856467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010617432.2A Withdrawn CN111815547A (en) 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN111815547A (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1670765A (en) * 1999-06-16 2005-09-21 西尔弗布鲁克研究股份有限公司 Method of sharpening image using luminance channel
US20070025643A1 (en) * 2005-07-28 2007-02-01 Olivier Le Meur Method and device for generating a sequence of images of reduced size
CN101478692A (en) * 2008-12-25 2009-07-08 昆山锐芯微电子有限公司 Test method and system for image sensor dynamic resolution
CN103093444A (en) * 2013-01-17 2013-05-08 西安电子科技大学 Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN103716606A (en) * 2013-12-30 2014-04-09 上海富瀚微电子有限公司 Bayer domain image downsampling method and device and camera equipment
CN106713877A (en) * 2017-01-23 2017-05-24 上海兴芯微电子科技有限公司 Interpolating method and apparatus of Bayer-format images
CN107590500A (en) * 2017-07-20 2018-01-16 济南中维世纪科技有限公司 A kind of color recognizing for vehicle id method and device based on color projection classification
CN108171657A (en) * 2018-01-26 2018-06-15 上海富瀚微电子股份有限公司 Image interpolation method and device
CN111798497A (en) * 2020-06-30 2020-10-20 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium
CN111798393A (en) * 2020-06-30 2020-10-20 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HYUN MOOK OH et al.: "Edge Adaptive Color Demosaicking Based on the Spatial Correlation of the Bayer Color Difference", EURASIP Journal on Image and Video Processing, no. 9, pages 1-15 *
KUO-LIANG CHUNG et al.: "Novel and Optimal Luma Modification-Based Chroma Downsampling for Bayer Color Filter Array Images", IEEE Open Journal of Circuits and Systems, vol. 1, pages 48-59, XP011794018, DOI: 10.1109/OJCAS.2020.2996624 *
LI Jia: "Research on Pixel Design for Multiplexed Imaging", China Doctoral Dissertations Full-text Database, Information Science and Technology Series, pages 138-65 *
WANG Hao: "Research on Dynamic Range Improvement and Image Optimization Algorithms for a Rocket-borne Visible-Light Imaging System", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II Series, pages 031-10 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798393A (en) * 2020-06-30 2020-10-20 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN110827200B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
US11537873B2 (en) Processing method and system for convolutional neural network, and storage medium
US7612784B2 (en) Image processor and method, computer program, and recording medium
WO2017023475A1 (en) Method and system of demosaicing bayer-type image data for image processing
CN109816612A (en) Image enchancing method and device, computer readable storage medium
DE112018007730T5 (en) 3D OBJECT DETECTION USING 3D CONVOLUTIONAL NEURAL NETWORKS WITH DEPTH-BASED MULTISCALING FILTERS
CN108242063B (en) Light field image depth estimation method based on GPU acceleration
CN112381711B (en) Training and quick super-resolution reconstruction method for light field image reconstruction model
CN114298900A (en) Image super-resolution method and electronic equipment
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN111798497A (en) Image processing method and device, electronic device and storage medium
CN111353955A (en) Image processing method, device, equipment and storage medium
CN110910326B (en) Image processing method and device, processor, electronic equipment and storage medium
CN111882565A (en) Image binarization method, device, equipment and storage medium
CN111815547A (en) Image processing method and device, electronic device and storage medium
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
TW201536029A (en) Image downsampling apparatus and method
CN111798393A (en) Image processing method and device, electronic device and storage medium
WO2023024660A1 (en) Image enhancement method and apparatus
CN113034416A (en) Image processing method and device, electronic device and storage medium
CN112733826B (en) Image processing method and device
CN112237002A (en) Image processing method and apparatus
CN111161204B (en) Image processing method and device, electronic equipment and readable storage medium
CN116777739A (en) Image processing method, game rendering method, device, equipment and storage medium
CN112419146B (en) Image processing method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201023