CN117710202A - Image fusion method, electronic device and storage medium - Google Patents

Image fusion method, electronic device and storage medium

Info

Publication number: CN117710202A
Application number: CN202311064852.2A
Authority: CN (China)
Prior art keywords: image, image block, extension, extended, overlapping region
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 孙琪 (Sun Qi), 陈成 (Chen Cheng)
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd

Landscapes

  • Image Processing (AREA)

Abstract

The application relates to an image fusion method, an electronic device and a storage medium. The method performs extension processing on the peripheral boundary of each image block in an input image according to a preset extension length to obtain an extended image block for each image block, and preprocesses each extended image block in the input image. The processed extended image blocks are then output in sequence, and as each extended image block is output it is edge-fused with its adjacent extended image blocks to obtain a fused output image. In this technical scheme, because the extended image block corresponding to each image block in the input image is preprocessed, the processed extended image blocks are output in sequence, and each extended image block is edge-fused with its adjacent extended image blocks as it is output, the technical problem of a stitching line appearing in the fused output image, or of excessive memory being consumed when the image blocks are fused, can be solved.

Description

Image fusion method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image fusion method, an electronic device, and a storage medium.
Background
In the related art, image stitching mainly follows one of the following two schemes. In the first stitching scheme, all images to be stitched are cached at the same time, and the overlapping regions of the images to be stitched are then fused. However, this scheme occupies a large amount of memory, resulting in high memory consumption. The other stitching scheme cuts out only the image content of the core region of each image as the stitched image and stitches the cut-out images together. However, this scheme can cause stitching-line problems between the individual stitched images.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an image fusion method, an electronic device, and a storage medium to solve the technical problem of a stitching line after image block fusion or the technical problem of excessive memory consumption during image block fusion.
In a first aspect, an embodiment of the present application provides an image fusion method, where the method includes: acquiring an input image, wherein the input image includes a preset number of image blocks; performing extension processing on the peripheral boundary of each image block in the input image according to a preset extension length to obtain an extended image block corresponding to each image block; preprocessing each extended image block; and performing edge fusion on each extended image block and the adjacent extended image blocks to obtain a fused output image. In the above technical solution, the extended image block corresponding to each image block in the input image is preprocessed, the processed extended image blocks are output in sequence, and when each extended image block is output it is edge-fused with the extended image blocks adjacent to it to obtain the fused output image, so that the technical problem of a stitching line appearing in the fused output image, or of excessive memory being consumed when the image blocks are fused, can be solved.
In an embodiment of the present application, performing edge fusion on each extended image block and an adjacent extended image block to obtain a fused output image includes: determining a first overlapping region and a first non-overlapping region between each extended image block and the adjacent extended image block; retaining the pixel values of the first non-overlapping region; and obtaining the pixel value of the first overlapping region according to the pixel value of the extended image block and the pixel value of the adjacent extended image block in the first overlapping region. In the above technical solution, the pixel values of the first non-overlapping region are retained and the pixel value of the first overlapping region is obtained from the pixel value of each extended image block and the pixel value of the adjacent extended image block in the first overlapping region, so that the problem of a stitching line appearing when the extended image block is fused with the adjacent extended image block can be avoided, and the memory consumed when the image blocks are fused can be reduced.
In an embodiment of the present application, determining the first overlapping region and the first non-overlapping region between each extended image block and the adjacent extended image block includes: taking the region corresponding to the pixel coordinates that do not coincide between each extended image block and the adjacent extended image block as the first non-overlapping region; and taking the region corresponding to the pixel coordinates that coincide between each extended image block and the adjacent extended image block as the first overlapping region. In the above technical solution, by taking the region corresponding to the non-coincident pixel coordinates as the first non-overlapping region and the region corresponding to the coincident pixel coordinates as the first overlapping region, the overlapping region and the non-overlapping region between the extended image block and the adjacent extended image block can be determined accurately.
In an embodiment of the present application, obtaining the pixel value of the first overlapping region according to the pixel value of the extended image block and the pixel value of the adjacent extended image block in the first overlapping region includes: calculating the pixel value of the first overlapping region according to the formula P(x, y) = alpha1(x, y) × Patch1(x, y) + (1 - alpha1(x, y)) × Patch2(x, y), where Patch1(x, y) is the pixel value of each extended image block in the first overlapping region, Patch2(x, y) is the pixel value of the adjacent extended image block in the first overlapping region, alpha1(x, y) is the weight parameter of each extended image block, and alpha1(x, y) = 1 - x/32, where x is the width of the first overlapping region. In the above technical solution, by calculating the pixel value of the first overlapping region according to the formula P(x, y) = alpha1(x, y) × Patch1(x, y) + (1 - alpha1(x, y)) × Patch2(x, y), the problem of a stitching line appearing after the extended image block and the adjacent extended image block are fused can be avoided.
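The weighted blend above can be sketched as follows (an illustrative Python/NumPy reconstruction, not the patented implementation; the function and parameter names are invented, the blocks are assumed to be (H, W, C) arrays, and x is interpreted here as the pixel column inside the overlap while 32 is the overlap width, i.e. twice a 16-pixel preset extension length):

```python
import numpy as np

def blend_first_overlap(patch1, patch2, overlap_width=32):
    """Blend the first overlapping region of two adjacent extended image blocks.

    patch1 and patch2 are the pixel values of the two blocks over the same
    overlapping region, shape (H, overlap_width, C).  The weight alpha1 falls
    linearly from 1 to 0 across the overlap, so the result transitions smoothly
    from patch1 to patch2 and no stitching line appears.
    """
    x = np.arange(overlap_width, dtype=np.float32)        # column index inside the overlap
    alpha1 = (1.0 - x / overlap_width).reshape(1, -1, 1)  # alpha1(x, y) = 1 - x / 32
    return alpha1 * patch1.astype(np.float32) + (1.0 - alpha1) * patch2.astype(np.float32)
```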
In an embodiment of the present application, performing extension processing on a peripheral boundary of each image block in an input image according to a preset extension length to obtain an extended image block corresponding to each image block, including: extending each image block along the peripheral boundary according to a preset extension length to obtain an extension area corresponding to the image block, and taking the image block including the extension area after extension as an extension image block; according to the positional relationship between the extension area of the extended image block and the image block, the extension area at the upper position of the image block is taken as an upper extension area, the extension area at the lower position of the image block is taken as a lower extension area, the extension area at the left position of the image block is taken as a left extension area, and the extension area at the right position of the image block is taken as a right extension area. In the above technical solution, the extension area of the image block may be divided into four different areas according to the positional relationship between the extension area of the extension image block and the image block.
In an embodiment of the present application, performing edge fusion on each extended image block and the adjacent extended image blocks to obtain a fused output image includes: displaying, in a first preset order, the extended image block P(M,N) of the M-th row and N-th column of a matrix constructed from the image blocks on a canvas, where M is any positive integer greater than or equal to 1 and less than or equal to S, N is any positive integer greater than or equal to 1 and less than or equal to L, S is the number of rows of the matrix, and L is the number of columns of the matrix; retaining the image block, the right extension region and the lower extension region of the extended image block P(M,N) of the M-th row and N-th column; displaying, in the first preset order, the extended image block P(M,N+1) of the M-th row and (N+1)-th column next to the extended image block P(M,N) of the M-th row and N-th column, and fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to a preset image fusion algorithm; fusing the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm to update the extended image block P(M,N) of the M-th row and N-th column; displaying the extended image block P(M+1,N) of the (M+1)-th row and N-th column on the canvas, and fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm; and fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N+1) of the M-th row and (N+1)-th column according to the preset image fusion algorithm. In the above technical solution, the extended image block P(M+1,N) of the (M+1)-th row and N-th column is displayed on the canvas, the overlapping region between its upper extension region and the extended image block P(M,N) of the M-th row and N-th column is fused according to the preset image fusion algorithm, and the overlapping region between its upper extension region and the extended image block P(M,N+1) of the M-th row and (N+1)-th column is fused according to the preset image fusion algorithm, so that the problem of stitching lines appearing after the overlapping regions between the extended image block P(M,N) of the M-th row and N-th column and its adjacent extended image blocks P(M,N+1) and P(M+1,N) are fused can be avoided.
In one embodiment of the present application, fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm includes: determining a second overlapping region and a second non-overlapping region between the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the right extension region of the extended image block P(M,N) of the M-th row and N-th column; retaining the pixel values of the second non-overlapping region; and obtaining the pixel value of the second overlapping region according to the pixel values of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region. In the above technical solution, the pixel values of the second non-overlapping region are retained, and the pixel value of the second overlapping region is obtained from the pixel values of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region, so that the problem of a stitching line appearing after the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the right extension region of the extended image block P(M,N) of the M-th row and N-th column are fused can be avoided.
In an embodiment of the present application, obtaining the pixel value of the second overlapping region according to the pixel values of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region includes: calculating the pixel value of the second overlapping region according to the formula P(x, y) = alpha2(x, y) × Patch3(x, y) + (1 - alpha2(x, y)) × Patch4(x, y), where Patch3(x, y) is the pixel value of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region, Patch4(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region, alpha2(x, y) is a weight parameter, and alpha2(x, y) = 1 - x2/32, where x2 is the width of the second overlapping region. In the above technical solution, by calculating the pixel value of the second overlapping region according to the formula P(x, y) = alpha2(x, y) × Patch3(x, y) + (1 - alpha2(x, y)) × Patch4(x, y), the problem of a stitching line appearing after the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the right extension region of the extended image block P(M,N) of the M-th row and N-th column are fused can be avoided.
In one embodiment of the present application, fusing the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm to update the extended image block P(M,N) of the M-th row and N-th column includes: determining a third overlapping region and a third non-overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column; retaining the pixel values of the third non-overlapping region; and obtaining the pixel value of the third overlapping region according to the pixel values of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region. In the above technical solution, the pixel values of the third non-overlapping region are retained, and the pixel value of the third overlapping region is obtained from the pixel values of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region, so that the problem of a stitching line appearing after the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column is fused can be avoided.
In one embodiment of the present application, obtaining the pixel value of the third overlapping region according to the pixel values of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region includes: calculating the pixel value of the third overlapping region according to the formula P(x, y) = alpha3(x, y) × Patch5(x, y) + (1 - alpha3(x, y)) × Patch6(x, y), where Patch5(x, y) is the pixel value of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region, Patch6(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region, alpha3(x, y) is a weight parameter, and alpha3(x, y) = 1 - x3/32, where x3 is the width of the third overlapping region.
In an embodiment of the present application, the method further comprises: and determining the preset extension length according to the receptive field of the artificial intelligence AI network. According to the technical scheme, the preset extension length is determined according to the receptive field of the artificial intelligence AI network, and the peripheral boundary of the image block is extended according to the preset extension length to obtain the extended image block corresponding to the image block, so that the extended image block can be identified and processed by the AI network.
In an embodiment of the present application, preprocessing each extended image block in the input image includes: sequentially preprocessing, by a first processor, the extended image block of each image block; processing, by a second processor, each preprocessed extended image block using an AI network; and post-processing, by a third processor, each extended image block processed by the AI network. In the above technical solution, heterogeneous computation of the extended image blocks is realized, the time spent preprocessing and post-processing the extended image blocks is hidden in the computation time of the AI network, and performing heterogeneous computation on the extended image blocks reduces the processing time of the extended image blocks and improves image processing performance.
In an embodiment of the present application, the first processor is a graphics processor, the second processor is an embedded neural network processor, and the third processor is a central processor.
In a second aspect, embodiments of the present application provide an electronic device, including a memory and a processor: a memory for storing program instructions; and a processor for reading and executing the program instructions stored in the memory, which when executed by the processor, cause the electronic device to perform the image fusion method described above.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing program instructions that, when executed by a processor, perform the above-described image fusion method.
In addition, for the technical effects of the second aspect and the third aspect, reference may be made to the description of the method in the first aspect above; details are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly describe the drawings in the embodiments, it being understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of stitching lines between stitched images in the related art.
Fig. 2 is a schematic structural diagram of a terminal device in an embodiment of the present application.
Fig. 3 is a flowchart of an image fusion method according to an embodiment of the present application.
Fig. 4A is a schematic diagram of capturing an image block from an input image according to an embodiment of the present application.
Fig. 4B is a schematic diagram of an image block displayed in an input image in an embodiment of the present application.
Fig. 5 is a schematic diagram of an extension image block obtained by extending an image block in an input image according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating extension of an image block according to an embodiment of the present application.
Fig. 7 is a schematic view of image block extension according to another embodiment of the present application.
Fig. 8 is a schematic diagram of heterogeneous computation of an image block according to an embodiment of the present application.
FIG. 9 is a schematic diagram of an image tile displayed on a canvas in an embodiment of the present application.
Fig. 10 is a schematic diagram of edge blending of each extended image block and an adjacent extended image block according to an embodiment of the present application.
FIG. 11 is a flowchart of a method for edge blending an extended image block with an adjacent extended image block according to an embodiment of the present application.
Fig. 12 is a flowchart of an image fusion method according to another embodiment of the present application.
Fig. 13 is a schematic diagram of an output image according to an embodiment of the present application.
Fig. 14 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. It should be understood that, "/" means or, unless otherwise indicated herein. For example, A/B may represent A or B. The term "and/or" in this application is merely an association relationship describing an association object, and means that three relationships may exist. For example, a and/or B may represent: a exists alone, A and B exist simultaneously, and B exists alone. "at least one" means one or more. "plurality" means two or more than two. For example, at least one of a, b or c may represent: seven cases of a, b, c, a and b, a and c, b and c, a, b and c.
In the related art, image stitching mainly follows one of the following two schemes. The first stitching scheme caches all the images to be stitched at the same time and then fuses the overlapping regions of the images to be stitched. However, because all the images to be stitched must be cached at the same time, this scheme occupies a large amount of system memory, resulting in high system memory consumption. The other stitching scheme cuts out only the image content of the core region of each image as the stitched image and stitches the cut-out images together. However, in this scheme the output image is simply composed of the individual stitched images, which leads to stitching-line problems between them (refer to FIG. 1).
In order to solve the above problems, the present application provides an image fusion method. The method is applied to the terminal equipment. Referring to fig. 2, a schematic structural diagram of a terminal device in an embodiment of the present application is shown. The terminal device 10 includes, but is not limited to, a first processor 11, a second processor 12, and a third processor 13. The first processor 11 is used for preprocessing the image to improve the image quality and the recognition accuracy. The second processor 12 is used to process the image through an artificial intelligence (Artificial Intelligence, AI) network. The AI network includes at least one of an artificial neural network, a convolutional neural network, and a recurrent neural network. The third processor 13 is used for post-processing and edge blending of the images. In an embodiment of the present application, the first processor 11 may be a graphics processor (graphics processing unit, GPU) and the second processor 12 may be an embedded Neural network processor (Neural-network Processing Units, NPU) or a digital signal processor (Digital Signal Processing, DSP). The third processor 13 may be a central processing unit (Central Processing Unit, CPU).
Referring to fig. 3, a flowchart of an image fusion method according to an embodiment of the present application is shown. The image fusion method is applied in the terminal device 10. The method comprises the following steps.
In step S301, an input image is acquired, wherein the input image includes a preset number of image blocks.
In an embodiment of the present application, the terminal device obtains, according to a user selection operation on at least one image stored in a local gallery, the image selected by the user as the input image, so as to fuse the image blocks in the image selected by the user from the gallery. In another embodiment of the present application, the terminal device uses an image captured by an imaging device, such as a camera, as the input image, so as to fuse the image blocks in the captured image. The input image includes a preset number of image blocks; each image block is located at a different position in the input image, and the image blocks may have the same or different sizes. Referring to fig. 4A, a schematic diagram of capturing image blocks from an input image according to an embodiment of the present application is shown. The image blocks are image contents sequentially cut out from the input image in a preset order and with a preset size. For example, the image blocks are obtained by sequentially cutting out image content of the preset size from the left boundary of the input image in left-to-right order. When the image blocks are cut from the input image in left-to-right order, if the right boundary of an image block would exceed the right boundary of the input image, the right boundary of the image block is aligned with the right boundary of the input image before the image content of the preset size is cut out as the image block, as sketched below. The cut-out image blocks are distributed at different positions of the input image, and all the image blocks together constitute the input image. Referring to fig. 4B, in an embodiment of the present application, the image blocks are displayed on a canvas as a matrix, the matrix including S rows and L columns, where S and L are positive integers greater than 2.
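A minimal sketch of this block-cutting step is given below (illustrative Python/NumPy only; the function name, the row-major return layout and the handling of images smaller than one block are assumptions, not part of the application):

```python
import numpy as np

def split_into_blocks(image, block_h, block_w):
    """Cut image blocks of a preset size from the input image, left to right and
    top to bottom.  A block whose right (or bottom) boundary would exceed the
    input image is first aligned with the image's right (or bottom) boundary,
    as described above.  Returns an S x L grid (list of rows) of blocks."""
    h, w = image.shape[:2]
    blocks = []
    for top in range(0, h, block_h):
        top = min(top, h - block_h)            # align with the bottom boundary if needed
        row = []
        for left in range(0, w, block_w):
            left = min(left, w - block_w)      # align with the right boundary if needed
            row.append(image[top:top + block_h, left:left + block_w].copy())
        blocks.append(row)
    return blocks
```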
Step S302, acquiring a receptive field of an AI network, and determining a preset extension length of the image block according to the receptive field.
The Receptive Field (Receptive Field) refers to the area size that the pixels on the feature map output by each layer of the AI network map back onto the input image. In an embodiment of the present application, determining the preset extension length of the image block according to the receptive field includes: the preset extension length of the image block is set to be greater than or equal to a preset proportion of the receptive field, which may be, for example, one-twelfth of the receptive field. The preset extension length of the image block is used to characterize the length extending outwardly from the periphery of the image block. For example, if the area corresponding to the receptive field of the AI network is 96, the preset extension length of the image block is greater than or equal to 96/12=8.
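As a small worked sketch of this step (illustrative Python only; the function name, the rounding-up choice and the 1/12 default are assumptions built around the example above):

```python
import math

def preset_extension_length(receptive_field, ratio=1 / 12):
    """Derive the preset extension length of an image block from the receptive
    field of the AI network: at least the given fraction of the receptive field."""
    return math.ceil(receptive_field * ratio)

# e.g. a receptive field of 96 gives a preset extension length of at least 96 / 12 = 8
assert preset_extension_length(96) == 8
```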
Step S303, performing extension processing on the peripheral boundary of each image block in the input image according to the preset extension length to obtain an extension image block corresponding to each image block.
In an embodiment of the present application, each image block in the input image is extended along a peripheral boundary according to a preset extension length to obtain an extension area corresponding to the image block, and the image block including the extension area after extension is used as an extension image block.
Referring to fig. 5, a schematic diagram of extending an image block in an input image to obtain an extended image block according to an embodiment of the present application is shown. For example, an image block in the input image is a square block with a side length of 456 pixels and the preset extension length is 16 pixels; the image block is extended along its periphery by the preset extension length of 16 pixels to obtain the extension area of the image block, and the image block together with its extension area is taken as the extended image block, i.e. a square block with a side length of 488 pixels. Referring to fig. 5, in an embodiment of the present application, according to the positional relationship between the extension area and the image block, the extension area above the image block is taken as the upper extension area of the image block, the extension area below the image block is taken as the lower extension area of the image block, the extension area to the left of the image block is taken as the left extension area of the image block, and the extension area to the right of the image block is taken as the right extension area of the image block.
In an embodiment of the present application, if the extension area of an extended image block does not exceed the boundary of the input image, the image content of the input image corresponding to the extension area is taken as the image content of the extension area. If the extension area of the extended image block exceeds the boundary of the input image, the part of the extension area that does not exceed the boundary of the input image is taken as a first area and the part that exceeds the boundary is taken as a second area; the image content of the input image corresponding to the first area is cut out as the image content of the first area, and the image content of the second area is obtained by mirroring the input image inward along its boundary. For example, referring to fig. 6, when the image block P1 is extended, if the extension area A of the image block P1 does not exceed the boundary of the input image, the image content of the input image corresponding to the extension area A is cut out as the image content of the extension area A. Referring to fig. 7, when the image block P2 is extended, if the extension area of the image block P2 exceeds the boundary of the input image, the part that does not exceed the boundary of the input image is taken as the first area B1 and the part that exceeds the boundary is taken as the second area B2; the image content of the input image corresponding to the first area B1 is taken as the image content of the first area B1, and the image content of the second area B2 is obtained by mirroring the input image inward along its boundary.
In an embodiment of the present application, the input image may instead be extended directly according to the preset extension length to obtain a target input image, and the extended image blocks are then cut out from the target input image in a preset order according to the size of the extended image block. For example, the extended image blocks are obtained by sequentially cutting out image content of the corresponding size starting from the left boundary of the target input image. When the extended image blocks are cut from the target input image in left-to-right order, if the right boundary of an extended image block would exceed the right boundary of the target input image, the right boundary of the extended image block is aligned with the right boundary of the target input image before the image content of the corresponding size is cut out as the extended image block. A sketch of this variant is given below.
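The following is an illustrative Python/NumPy sketch of this variant, assuming (H, W, C) images; padding the whole input image with NumPy's "reflect" mode is only one possible reading of the mirror-value step, and the function and parameter names are invented:

```python
import numpy as np

def extend_block(image, top, left, block_h, block_w, ext=16):
    """Return the extended image block for the image block whose top-left corner
    is (top, left).  The block is grown by `ext` pixels on every side; content
    inside the input image is copied from it, and content outside the boundary
    is filled by mirroring the image inward along its boundary."""
    # extend the whole input image first (the "target input image"), then crop
    padded = np.pad(image, ((ext, ext), (ext, ext), (0, 0)), mode="reflect")
    # coordinates shift by `ext` because of the padding
    return padded[top:top + block_h + 2 * ext, left:left + block_w + 2 * ext]
```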
Step S304, the extended image block of each image block is preprocessed in turn.
In an embodiment of the present application, the first processor 11 of the terminal device 10 preprocesses the extended image block corresponding to each image block of the input image to improve the quality and recognition accuracy of the image blocks. Preprocessing methods include, but are not limited to, at least one of a smoothing algorithm, an edge detection algorithm and a histogram equalization algorithm. For example, the first processor 11 of the terminal device 10 preprocesses each extended image block of the input image with a smoothing algorithm to remove noise from the corresponding extended image block. Smoothing algorithms include, but are not limited to, the moving average method, the exponential smoothing method and the Gaussian filtering method.
In an embodiment of the present application, the first processor 11 of the terminal device 10 preprocesses each extended image block of the input image with an edge detection algorithm to extract the edge information of the corresponding extended image block. The edge detection algorithm includes, but is not limited to, at least one of blurring, thresholding and morphological processing. In an embodiment of the present application, the first processor 11 of the terminal device 10 preprocesses each extended image block of the input image with a histogram equalization algorithm to improve the brightness and contrast of the image.
Step S305, the AI network is utilized to process the extended image block corresponding to each image block after the preprocessing.
In an embodiment of the present application, the second processor 12 of the terminal device 10 processes the extended image block corresponding to each image block after the pre-processing through the AI network to perform noise reduction or image motion deblurring on the extended image block of each image block. In an embodiment of the present application, the AI network processes the extended image block corresponding to each image block after the preprocessing according to a preset deblurring algorithm to deblur the image motion of the extended image block of each image block. The preset deblurring algorithm comprises at least one of a motion deblurring method based on wavelet transformation, a motion deblurring method based on wiener filtering and a motion deblurring method based on adaptive filtering. In an embodiment of the application, the AI network processes the extended image block corresponding to each image block of the input image in a blocking manner, so that the problem that the AI network needs to normalize the size of the input image when processing the whole input image can be solved, and the problem that the AI network cannot complete reasoning due to overlarge input image when reasoning is performed can be solved.
Step S306, each extended image block after AI network processing is post-processed.
In an embodiment of the present application, the third processor 13 of the terminal device 10 post-processes the extended image block corresponding to each image block after AI-network processing. Post-processing includes at least one of image enhancement, denoising, sharpening, edge detection, image segmentation and image fusion. Referring to fig. 8, in the embodiment of the present application, the first processor 11 preprocesses the extended image block corresponding to each image block of the input image; the second processor 12 processes each preprocessed extended image block using the AI network; and the third processor 13 post-processes each extended image block processed by the AI network. Heterogeneous computation of the extended image blocks is thereby realized: the time spent preprocessing and post-processing the extended image blocks is hidden in the computation time of the AI network, which reduces the processing time of the extended image blocks and improves image processing performance. A pipelined sketch of this idea is given after this paragraph.
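A minimal sketch of such a pipelined, heterogeneous flow is shown below (illustrative Python only; plain threads and queues stand in for the first, second and third processors, and the three stage functions are assumed callables, not the device's actual scheduler):

```python
import queue
import threading

def run_pipeline(extended_blocks, preprocess, ai_infer, postprocess):
    """Stream the extended image blocks through three stages so that
    preprocessing and post-processing overlap with AI-network inference."""
    q1, q2, results = queue.Queue(maxsize=2), queue.Queue(maxsize=2), []

    def stage1():                      # "first processor": preprocessing
        for blk in extended_blocks:
            q1.put(preprocess(blk))
        q1.put(None)                   # end-of-stream sentinel

    def stage2():                      # "second processor": AI-network processing
        while (blk := q1.get()) is not None:
            q2.put(ai_infer(blk))
        q2.put(None)

    t1 = threading.Thread(target=stage1)
    t2 = threading.Thread(target=stage2)
    t1.start(); t2.start()
    while (blk := q2.get()) is not None:   # "third processor": post-processing
        results.append(postprocess(blk))
    t1.join(); t2.join()
    return results
```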
Step S307, each processed extension image block is output in turn, and when each extension image block is output, edge fusion is performed on each extension image block and the extension image block adjacent to each extension image block, so as to obtain a fused output image.
Referring to FIG. 9, the extended image block is output on the canvas in a matrix comprising S rows and L columns, where S and L are positive integers greater than 2. In an embodiment of the present application, an extended image block adjacent to an extended image block refers to an image block having an overlapping area with the extended image block. For example, referring to fig. 9, if there is an overlapping area between the extended image block P2 and the extended image block P1, the extended image block P2 is an adjacent extended image block of the extended image block P1. In an embodiment of the present application, according to a position of each extended image block in the input image, all the extended image blocks processed according to the AI network are sequentially displayed on a canvas, and when each extended image block is displayed on the canvas, edge fusion is performed on each extended image block and extended image blocks adjacent to each extended image block to obtain an output image. In an embodiment of the present application, according to a position of each image block in the input image, sequentially displaying extended image blocks corresponding to each image block processed by the AI network on a canvas in a sequence from left to right and from top to bottom, and when each extended image block is displayed on the canvas, performing edge fusion on each extended image block and extended image blocks adjacent to each extended image block to obtain an output image, and displaying the output image.
In an embodiment of the present application, each extended image block and the adjacent extended image block are edge-fused according to a preset image fusion algorithm to obtain the output image. The preset image fusion algorithm includes a transparency (Alpha) image fusion algorithm or a Gaussian image fusion algorithm. Referring to fig. 10, a schematic diagram of edge fusion of each extended image block and an adjacent extended image block according to an embodiment of the present application is shown. In an embodiment of the present application, performing edge fusion on each extended image block P3 and the adjacent extended image block P4 according to the preset image fusion algorithm to obtain the output image includes: determining a first overlapping region R1 and a first non-overlapping region R2 between each extended image block P3 and the adjacent extended image block P4; retaining the pixel values of the first non-overlapping region R2; and obtaining the pixel value of the first overlapping region R1 according to the pixel value of each extended image block P3 in the first overlapping region R1 and the pixel value of the adjacent extended image block P4 in the first overlapping region R1.
In an embodiment of the present application, the pixel value of the first overlapping region R1 is calculated according to the formula P(x, y) = alpha1(x, y) × Patch1(x, y) + (1 - alpha1(x, y)) × Patch2(x, y), where Patch1(x, y) is the pixel value of each extended image block P3 in the first overlapping region R1, Patch2(x, y) is the pixel value of the adjacent extended image block P4 in the first overlapping region R1, and alpha1(x, y) is the weight parameter of each extended image block. In one embodiment of the present application, alpha1(x, y) = 1 - x1/32, where x1 is the width of the first overlapping region R1, and the width of the first overlapping region R1 is twice the preset extension length.
In an embodiment of the present application, the first overlapping region R1 and the first non-overlapping region R2 between each extended image block P3 and the adjacent extended image block P4 are determined according to the pixel coordinates of each extended image block P3 and the adjacent extended image block P4 in the input image, specifically: the region corresponding to the pixel coordinates of each extended image block P3 that do not overlap with those of the adjacent extended image block P4 is taken as a first non-overlapping region R2, and the region corresponding to the pixel coordinates of each extended image block P3 that overlap with those of the adjacent extended image block P4 is taken as a first overlapping region R1.
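An illustrative helper for this coordinate-based determination is sketched below (Python only; the (top, left, height, width) rectangle representation and the function name are assumptions, not part of the application):

```python
def first_overlap(rect_a, rect_b):
    """Compute the first overlapping region of two extended image blocks from
    their pixel coordinates in the input image.  Each rectangle is given as
    (top, left, height, width); None is returned when the blocks share no
    coincident pixel coordinates.  Everything outside the returned rectangle
    belongs to the first non-overlapping region."""
    ta, la, ha, wa = rect_a
    tb, lb, hb, wb = rect_b
    top, left = max(ta, tb), max(la, lb)
    bottom, right = min(ta + ha, tb + hb), min(la + wa, lb + wb)
    if bottom <= top or right <= left:
        return None
    return (top, left, bottom - top, right - left)
```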
Referring to fig. 11, a flowchart of a method for edge blending an extended image block with an adjacent extended image block in an embodiment of the present application is shown. The method comprises the following steps.
Step S901, display the extended image block P(M,N) of the M-th row and N-th column of the matrix constructed from the image blocks on a canvas according to a first preset order.
In an embodiment of the present application, the first preset order is a left-to-right order or a right-to-left order. Referring to fig. 9, the terminal device sequentially displays the extended image blocks P(M,N) of the M-th row and N-th column of the matrix on the canvas from left to right. In an embodiment of the present application, M is a positive integer greater than or equal to 1 and less than or equal to S, N is a positive integer greater than or equal to 1 and less than or equal to L, M and N start from 1, S is the number of rows of the matrix, and L is the number of columns of the matrix.
Step S902, retain the image block, the right extension region and the lower extension region of the extended image block P(M,N) of the M-th row and N-th column.
Referring to fig. 9, in the embodiment of the present application, because the extended image blocks are output onto the canvas sequentially in left-to-right order, the right extension region and the lower extension region of the extended image block P(M,N) of the M-th row and N-th column overlap with the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column output next; the right extension region and the lower extension region of the extended image block P(M,N) therefore need to be retained so that the extended image block P(M,N) can subsequently be edge-fused with the next output extended image block P(M,N+1) to eliminate the stitching line in the output image. In another embodiment of the present application, if the extended image blocks are output onto the canvas sequentially in right-to-left order, the image block, the left extension region and the lower extension region of each extended image block are retained.
Step S903, display the extended image block P(M,N+1) of the M-th row and (N+1)-th column next to the extended image block P(M,N) of the M-th row and N-th column according to the first preset order, and fuse the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to a preset image fusion algorithm.
In one embodiment of the present application, fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm includes: determining a second overlapping region and a second non-overlapping region between the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the right extension region of the extended image block P(M,N) of the M-th row and N-th column; retaining the pixel values of the second non-overlapping region; and obtaining the pixel value of the second overlapping region according to the pixel values of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region. One embodiment of the application calculates the pixel value of the second overlapping region according to the formula P(x, y) = alpha2(x, y) × Patch3(x, y) + (1 - alpha2(x, y)) × Patch4(x, y), where Patch3(x, y) is the pixel value of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region, Patch4(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region, alpha2(x, y) is a weight parameter, and alpha2(x, y) = 1 - x2/32, where x2 is the width of the second overlapping region.
Step S904, fuse the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm to update the extended image block P(M,N) of the M-th row and N-th column.
In one embodiment of the present application, a third overlapping region and a third non-overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column are determined; the pixel values of the third non-overlapping region are retained; and the pixel value of the third overlapping region is obtained according to the pixel values of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region and the pixel values of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region. In one embodiment of the present application, the pixel value of the third overlapping region is calculated according to the formula P(x, y) = alpha3(x, y) × Patch5(x, y) + (1 - alpha3(x, y)) × Patch6(x, y), where Patch5(x, y) is the pixel value of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region, Patch6(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region, alpha3(x, y) is a weight parameter, and alpha3(x, y) = 1 - x3/32, where x3 is the width of the third overlapping region.
In step S905, N is updated according to the formula N=N+1, and it is determined whether N is less than (L-1). If N is less than (L-1), step S902 is performed. If N is greater than or equal to (L-1), step S906 is performed.
In step S906, N is set to 1, M is updated according to the formula m=m+1, and it is determined whether M is greater than S. If M is less than or equal to S, step S907 is executed, and if M is greater than S, the flow ends.
Step S907, display the extended image block P(M,N) of the M-th row and N-th column on the canvas, fuse the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N) of the (M-1)-th row and N-th column according to the preset image fusion algorithm, and fuse the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N+1) of the (M-1)-th row and (N+1)-th column according to the preset image fusion algorithm.
In one embodiment of the present application, fusing the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N) of the (M-1)-th row and N-th column according to the preset image fusion algorithm includes: determining a fourth overlapping region and a fourth non-overlapping region in the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N) of the (M-1)-th row and N-th column; retaining the pixel values of the fourth non-overlapping region; and obtaining the pixel value of the fourth overlapping region according to the pixel values of the upper extension region of the extended image block P(M,N) of the M-th row and N-th column in the fourth overlapping region and the pixel values of the extended image block P(M-1,N) of the (M-1)-th row and N-th column in the fourth overlapping region.
In one embodiment of the present application, fusing the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N+1) of the (M-1)-th row and (N+1)-th column according to the preset image fusion algorithm includes: determining a fifth overlapping region and a fifth non-overlapping region in the overlapping region between the upper extension region of the extended image block P(M,N) of the M-th row and N-th column and the extended image block P(M-1,N+1) of the (M-1)-th row and (N+1)-th column; retaining the pixel values of the fifth non-overlapping region; and obtaining the pixel value of the fifth overlapping region according to the pixel values of the upper extension region of the extended image block P(M,N) of the M-th row and N-th column in the fifth overlapping region and the pixel values of the extended image block P(M-1,N+1) of the (M-1)-th row and (N+1)-th column in the fifth overlapping region.
After the execution of step S907, step S902 is executed until all the extended image blocks in all the rows are updated on the canvas.
In an embodiment of the present application, the terminal device 10 performs edge fusion on any extended image block P(M,N) in the matrix and the extended image blocks P(M,N+1) and P(M+1,N) adjacent to it, including: displaying the extended image block P(M,N) of the M-th row and N-th column of the matrix constructed from the image blocks on a canvas according to a first preset order; retaining the image block, the right extension region and the lower extension region of the extended image block P(M,N) of the M-th row and N-th column; displaying the extended image block P(M,N+1) of the M-th row and (N+1)-th column next to the extended image block P(M,N) of the M-th row and N-th column according to the first preset order, and fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to a preset image fusion algorithm; fusing the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm to update the extended image block P(M,N) of the M-th row and N-th column; displaying the extended image block P(M+1,N) of the (M+1)-th row and N-th column on the canvas, and fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm; and fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N+1) of the M-th row and (N+1)-th column according to the preset image fusion algorithm. A condensed sketch of this paste-and-blend loop is given below.
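The following Python/NumPy sketch condenses steps S901 to S907 into a single left-to-right, top-to-bottom pass over the canvas; it is illustrative only, assumes (H, W, C) blocks laid out with a stride equal to the original block size (so adjacent extended blocks overlap by twice the 16-pixel extension), and its linear blending weight stands in for the preset image fusion algorithm:

```python
import numpy as np

def fuse_blocks_onto_canvas(ext_blocks, block_h, block_w, ext=16):
    """Paste every extended image block P(M,N) onto the canvas and alpha-blend
    its overlap with the block to its left and with the row above, so that the
    fused output image shows no stitching lines."""
    S, L = len(ext_blocks), len(ext_blocks[0])
    overlap = 2 * ext
    channels = ext_blocks[0][0].shape[2]
    canvas = np.zeros((S * block_h + overlap, L * block_w + overlap, channels), np.float32)
    # blending weight falls linearly from 1 to 0 across the overlap
    w_col = (1.0 - np.arange(overlap) / overlap).reshape(1, -1, 1)   # for vertical seams
    w_row = (1.0 - np.arange(overlap) / overlap).reshape(-1, 1, 1)   # for horizontal seams
    for m in range(S):
        for n in range(L):
            blk = ext_blocks[m][n].astype(np.float32)
            top, left = m * block_h, n * block_w
            on_canvas = canvas[top:top + blk.shape[0], left:left + blk.shape[1]]
            out = blk.copy()
            if n > 0:   # blend the left overlap with the neighbour already on the canvas
                out[:, :overlap] = w_col * on_canvas[:, :overlap] + (1 - w_col) * blk[:, :overlap]
            if m > 0:   # blend the top overlap with the row above
                out[:overlap] = w_row * on_canvas[:overlap] + (1 - w_row) * out[:overlap]
            canvas[top:top + blk.shape[0], left:left + blk.shape[1]] = out
    return canvas
```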
According to the embodiment of the application, the first processor 11 preprocesses the extended image blocks, the second processor 12 processes the preprocessed extended image block of each image block through the AI network, and the third processor 13 post-processes the extended image blocks, so that heterogeneous computation of the extended image blocks is realized, which reduces the processing time of the extended image blocks and improves image processing performance. Meanwhile, the processed extended image blocks are output in sequence, and when each extended image block is output it is edge-fused with the extended image blocks adjacent to it to obtain the fused output image, so that the technical problem of a stitching line appearing in the fused output image, or of excessive memory being consumed when the image blocks are fused, can be solved.
Referring to fig. 12, a flowchart of an image fusion method according to another embodiment of the present application is shown. The image fusion method comprises the following steps.
In step S1201, an input image is acquired, wherein the input image includes a preset number of image blocks.
The details of step S1201 are described with reference to step S301 in fig. 3.
Step S1202, performing extension processing on the peripheral boundary of each image block in the input image according to the preset extension length to obtain an extension image block corresponding to each image block.
In an embodiment of the present application, the receptive field of the AI network is acquired, and the preset extension length of the image block is determined according to the receptive field. The details of step S1202 are described with reference to step S303 in fig. 3.
Step S1203 preprocesses each extended image block in the input image.
In an embodiment of the present application, preprocessing each extended image block in the input image includes at least one of preprocessing each extended image block, processing each extended image block through the AI network, and post-processing each extended image block. In this embodiment, for the specific implementation of preprocessing each extended image block in the input image, reference is made to the description of step S304 in fig. 3; for the specific implementation of processing each extended image block in the input image through the AI network, reference is made to the description of step S305 in fig. 3; and for the specific implementation of post-processing each extended image block in the input image, reference is made to the description of step S306 in fig. 3.
Step S1204, outputting each processed extended image block in turn, and when outputting each extended image block, performing edge fusion on each extended image block and the extended image block adjacent to each extended image block to obtain a fused output image.
In an embodiment of the present application, according to the position of each image block in the input image, all the processed extended image blocks are displayed on a canvas in sequence, and when each extended image block is displayed on the canvas, edge fusion is performed between that extended image block and the extended image blocks adjacent to it to obtain the output image. Referring to fig. 13, in this embodiment of the present application, the output image obtained by performing edge fusion between each extended image block and its adjacent extended image blocks does not exhibit a stitching problem. In this way, the extended image block corresponding to each image block in the input image is preprocessed, each processed extended image block is output in sequence, and when each extended image block is output, edge fusion is performed between that extended image block and its adjacent extended image blocks to obtain a fused output image, which solves the technical problem that a stitching line exists in the fused output image or that fusing the image blocks consumes a large amount of memory.
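For concreteness, the sketch below blends one overlap strip between an already placed region and a newly placed one with a weight that falls linearly from 1 to 0 across the overlap, in the spirit of the P(x, y) = alpha·Patch1(x, y) + (1 − alpha)·Patch2(x, y) form used in the claims; the ramp direction, the overlap width and the example values are assumptions for illustration.

```python
import numpy as np

def blend_overlap(patch1: np.ndarray, patch2: np.ndarray) -> np.ndarray:
    """Blend two overlapping strips of identical shape (H, W) or (H, W, C).
    patch1 is the content already on the canvas, patch2 the newly placed one;
    the weight of patch1 ramps from 1 at the first column to 0 at the last."""
    w = patch1.shape[1]
    alpha = 1.0 - np.arange(w, dtype=np.float32) / w           # per-column weight
    alpha = alpha.reshape((1, w) + (1,) * (patch1.ndim - 2))   # broadcastable shape
    blended = alpha * patch1.astype(np.float32) + (1.0 - alpha) * patch2.astype(np.float32)
    return blended.astype(patch1.dtype)

# Example: a 32-pixel-wide overlap between two 64-row strips.
left_strip = np.full((64, 32), 200, dtype=np.uint8)
right_strip = np.full((64, 32), 50, dtype=np.uint8)
fused = blend_overlap(left_strip, right_strip)   # smooth 200 -> 50 transition
```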
The electronic device 100 according to the embodiment of the present application is described below. Referring to fig. 14, a hardware structure of an electronic device 100 according to an embodiment of the present application is shown. The electronic device 100 may be the terminal device 10 in fig. 1.
In this embodiment, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a keyboard 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include a static random-access memory (SRAM), a dynamic random-access memory (dynamic random access memory, DRAM), a synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), a double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM; for example, the fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc.; the nonvolatile memory may include a disk storage device and a flash memory (flash memory).
The flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc., divided according to the operation principle; may include single-level memory cells (SLC), multi-level memory cells (MLC), triple-level memory cells (TLC), quad-level memory cells (QLC), etc., divided according to the potential levels of the memory cells; and may include universal flash storage (English: universal flash storage, UFS), embedded multimedia memory cards (embedded multi media Card, eMMC), etc., divided according to the storage specification.
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like.
The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory.
The internal memory 121 or the external memory interface 120 is used to store one or more computer programs. One or more computer programs are configured to be executed by the processor 110. The one or more computer programs include a plurality of instructions that when executed by the processor 110, implement the image fusion method performed on the electronic device 100 in the above embodiment to implement the image fusion function of the electronic device 100.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover by using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Further, features such as automatic unlocking upon opening the flip cover are set according to the detected open or closed state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The method can also be used for identifying the gesture of the electronic equipment 100, and can be applied to applications such as horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy by using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of a vibrating bone of the human vocal part. The bone conduction sensor 180M may also be in contact with the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keyboard 190 includes a power key, a volume key, and the like. The keyboard 190 may be a mechanical keyboard or a touch-type keyboard. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The present embodiment also provides a computer storage medium having stored therein computer instructions which, when executed on the electronic device 100, cause the electronic device 100 to perform the above-described related method steps to implement the image fusion method in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-described related steps to implement the image fusion method in the above-described embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the device is operated, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the image fusion method in each method embodiment.
The electronic device 100, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the advantages achieved by the method can refer to the advantages in the corresponding methods provided above, and will not be described herein.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated unit may be stored in a readable storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present application and not for limiting, and although the present application has been described in detail with reference to the above preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application.

Claims (15)

1. A method of image fusion, the method comprising:
acquiring an input image, wherein the input image comprises a preset number of image blocks;
performing extension processing on the peripheral boundary of each image block in the input image according to a preset extension length to obtain an extension image block corresponding to each image block;
preprocessing each extended image block;
and carrying out edge fusion on each extended image block and the adjacent extended image blocks to obtain a fused output image.
2. The method of image fusion according to claim 1, wherein edge-fusing each extended image block with an adjacent extended image block to obtain a fused output image comprises:
determining a first overlapping region and a first non-overlapping region between each extended image block and an adjacent extended image block;
reserving pixel values of the first non-overlapping region;
and obtaining the pixel value of the first overlapping region according to the pixel value of the extended image block in the first overlapping region and the pixel value of the adjacent extended image block.
3. The method of image fusion of claim 2, wherein determining the first overlapping region and the first non-overlapping region between each extended image block and an adjacent extended image block comprises:
taking the area corresponding to the pixel coordinates, which are not overlapped in each extended image block and the adjacent extended image blocks, as the first non-overlapping area;
and taking the region corresponding to the pixel coordinates of each extended image block overlapped with the adjacent extended image blocks as the first overlapping region.
4. The image fusion method according to claim 2, wherein the obtaining the pixel value of the first overlapping area according to the pixel value of the extended image block in the first overlapping area and the pixel value of the adjacent extended image block includes:
calculating the pixel value of the first overlapping region according to the formula P(x, y) = alpha1(x, y) × Patch1(x, y) + (1 - alpha1(x, y)) × Patch2(x, y), wherein Patch1(x, y) is the pixel value of each extended image block in the first overlapping region, Patch2(x, y) is the pixel value of the adjacent extended image block in the first overlapping region, alpha1(x, y) is the weight parameter of each extended image block, and alpha1(x, y) = 1 - x/32, wherein x is the width of the first overlapping region.
5. The image fusion method according to claim 1, wherein the extending the peripheral boundary of each image block in the input image according to the preset extension length to obtain an extended image block corresponding to each image block includes:
extending each image block along the peripheral boundary according to a preset extension length to obtain an extension area corresponding to the image block, and taking the image block including the extension area after extension as an extension image block;
according to the positional relationship between the extension area of the extension image block and the image block, the extension area at the position above the image block is taken as an upper extension area, the extension area at the position below the image block is taken as a lower extension area, the extension area at the position left of the image block is taken as a left extension area, and the extension area at the position right of the image block is taken as a right extension area.
6. The method of image fusion according to claim 5, wherein edge-fusing each extended image block with an adjacent extended image block to obtain a fused output image, comprises:
displaying, according to a first preset sequence, the extended image block P(M,N) of the M-th row and N-th column in a matrix constructed based on the image blocks on a canvas, wherein M is any positive integer greater than or equal to 1 and less than or equal to S, N is any positive integer greater than or equal to 1 and less than or equal to L, S is the number of rows of the matrix, and L is the number of columns of the matrix;
retaining the image block, the right extension region and the lower extension region of the extended image block P(M,N) of the M-th row and N-th column;
displaying, according to the first preset sequence, the extended image block P(M,N+1) of the M-th row and (N+1)-th column after the extended image block P(M,N) of the M-th row and N-th column, and fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the right extension region of the extended image block P(M,N) of the M-th row and N-th column according to a preset image fusion algorithm;
fusing the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm, so as to update the extended image block P(M,N) of the M-th row and N-th column;
displaying the extended image block P(M+1,N) of the (M+1)-th row and N-th column on the canvas, and fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm;
fusing the overlapping region between the upper extension region of the extended image block P(M+1,N) of the (M+1)-th row and N-th column and the extended image block P(M,N+1) of the M-th row and (N+1)-th column according to the preset image fusion algorithm.
7. The image fusion method of claim 6, wherein the fusing the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column with the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm comprises:
determining a second overlapping region and a second non-overlapping region between the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the right extension region of the extended image block P(M,N) of the M-th row and N-th column;
reserving pixel values of the second non-overlapping region;
obtaining the pixel value of the second overlapping region according to the pixel value of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region and the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region.
8. The image fusion method of claim 7, wherein the obtaining the pixel value of the second overlapping region according to the pixel value of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region and the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region comprises:
calculating the pixel value of the second overlapping region according to the formula P(x, y) = alpha2(x, y) × Patch3(x, y) + (1 - alpha2(x, y)) × Patch4(x, y), wherein Patch3(x, y) is the pixel value of the image block of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the second overlapping region, Patch4(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the second overlapping region, alpha2(x, y) is a weight parameter, and alpha2(x, y) = 1 - x2/32, wherein x2 is the width of the second overlapping region.
9. The image fusion method of claim 6, wherein the fusing the overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column according to the preset image fusion algorithm to update the extended image block P(M,N) of the M-th row and N-th column comprises:
determining a third overlapping region and a third non-overlapping region between the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column and the extended image block P(M,N) of the M-th row and N-th column;
reserving pixel values of the third non-overlapping region;
obtaining the pixel value of the third overlapping region according to the pixel value of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region and the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region.
10. The image fusion method of claim 9, wherein the obtaining the pixel value of the third overlapping region according to the pixel value of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region and the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region comprises:
calculating the pixel value of the third overlapping region according to the formula P(x, y) = alpha3(x, y) × Patch5(x, y) + (1 - alpha3(x, y)) × Patch6(x, y), wherein Patch5(x, y) is the pixel value of the left extension region of the extended image block P(M,N+1) of the M-th row and (N+1)-th column in the third overlapping region, Patch6(x, y) is the pixel value of the extended image block P(M,N) of the M-th row and N-th column in the third overlapping region, alpha3(x, y) is a weight parameter, and alpha3(x, y) = 1 - x3/32, wherein x3 is the width of the third overlapping region.
11. The image fusion method of claim 1, wherein the method further comprises:
and determining the preset extension length according to the receptive field of the artificial intelligence AI network.
12. The image fusion method of claim 11, wherein the preprocessing each of the extended image blocks in the input image comprises:
preprocessing, by a first processor, the extended image block of each image block in sequence;
processing, by a second processor, the preprocessed extended image block corresponding to each image block by using the AI network; and
post-processing, by a third processor, each extended image block processed by the AI network.
13. The image fusion method of claim 12, wherein the first processor is a graphics processor, the second processor is an embedded neural network processor, and the third processor is a central processor.
14. An electronic device, the electronic device comprising a memory and a processor:
the memory is used for storing program instructions;
the processor is configured to read and execute the program instructions stored in the memory, and the program instructions, when executed by the processor, cause the electronic device to perform the image fusion method according to any one of claims 1 to 13.
15. A computer readable storage medium, characterized in that the computer readable storage medium stores program instructions that, when run on a terminal device, cause an electronic device to perform the image fusion method of any one of claims 1 to 13.
CN202311064852.2A 2023-08-22 2023-08-22 Image fusion method, electronic device and storage medium Pending CN117710202A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311064852.2A CN117710202A (en) 2023-08-22 2023-08-22 Image fusion method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN117710202A true CN117710202A (en) 2024-03-15

Family

ID=90148646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311064852.2A Pending CN117710202A (en) 2023-08-22 2023-08-22 Image fusion method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117710202A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210166369A1 (en) * 2019-11-29 2021-06-03 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
CN113747145A (en) * 2020-05-29 2021-12-03 Oppo广东移动通信有限公司 Image processing circuit, electronic device, and image processing method
US20220005168A1 (en) * 2020-07-03 2022-01-06 Samsung Electronics Co., Ltd. Image processing apparatus including neural network processor and method of operation
CN112907451A (en) * 2021-03-26 2021-06-04 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陶小平 (Tao Xiaoping) et al.: "A stitching method for block-wise restoration of images with a spatially varying PSF", Acta Optica Sinica (光学学报), vol. 29, no. 03, 15 March 2009 (2009-03-15), pages 648-65 *

Similar Documents

Publication Publication Date Title
WO2020168956A1 (en) Method for photographing the moon and electronic device
CN112348732B (en) Model reasoning method, device and storage medium based on graphic rendering pipeline
CN113538273B (en) Image processing method and image processing apparatus
WO2021078001A1 (en) Image enhancement method and apparatus
CN114168065B (en) Method and device for adjusting memory configuration parameters
CN113170037B (en) Method for shooting long exposure image and electronic equipment
CN114140365B (en) Event frame-based feature point matching method and electronic equipment
WO2022267783A1 (en) Method for determining recommended scene, and electronic device
CN116315667A (en) Data transmission method, device, equipment and storage medium
CN116048831B (en) Target signal processing method and electronic equipment
CN116051351B (en) Special effect processing method and electronic equipment
CN113711123A (en) Focusing method and device and electronic equipment
CN115482143B (en) Image data calling method and system for application, electronic equipment and storage medium
CN114443109B (en) Patch repair method, electronic device and storage medium
CN115484383B (en) Shooting method and related device
CN114827442B (en) Method for generating image and electronic equipment
CN116263971A (en) Image frame prediction method, electronic device, and computer-readable storage medium
CN117710202A (en) Image fusion method, electronic device and storage medium
CN117726929A (en) Image processing method and device
CN116703741B (en) Image contrast generation method and device and electronic equipment
CN116703691B (en) Image processing method, electronic device, and computer storage medium
CN116343247B (en) Form image correction method, device and equipment
CN113704209B (en) Data sharing method, electronic device and storage medium
CN115952564B (en) Data writing method and terminal equipment
CN115802144B (en) Video shooting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination