CN116563106A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN116563106A
Authority
CN
China
Prior art keywords
target
image
spliced
area
images
Prior art date
Legal status
Pending
Application number
CN202310485527.7A
Other languages
Chinese (zh)
Inventor
焦阳
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202310485527.7A
Publication of CN116563106A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/80 Geometric correction
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device and electronic equipment, wherein the method comprises the following steps: carrying out distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced; performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced; and performing stitching processing on the second images to be stitched to obtain a target image.

Description

Image processing method and device and electronic equipment
Technical Field
Embodiments of the present application relate to electronic technology, and in particular, but not exclusively, to an image processing method, an image processing device and an electronic device.
Background
With the development of technology, the functions of terminals such as smart phones, tablet computers and cameras keep improving. For example, a smart phone has its own operating system and running space and allows a user to install applications provided by third-party service providers. As smart phones become increasingly powerful, their use is no longer limited to making calls or sending messages; taking photos with a smart phone has also become common.
At present, most mobile phones provide a panoramic shooting function: images are acquired continuously during shooting and the acquired images are then combined by image stitching to obtain an image with a wider field of view.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, an image processing device, and an electronic device.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
carrying out distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
and performing stitching processing on the second images to be stitched to obtain a target image.
In some embodiments, the method further comprises: determining a target area of the first target image to be spliced; wherein determining the target area of the first target image to be spliced comprises at least one of the following: determining a splicing area of the first target image to be spliced, and determining a target splicing area in the splicing area as the target area; or determining a splicing area of the first target image to be spliced, dividing the splicing area into a plurality of sub-splicing areas, and determining a target sub-splicing area in the sub-splicing areas as the target area.
In some embodiments, the determining a target sub-splice area of the plurality of sub-splice areas as the target area includes: determining pixel change parameters of each sub-splicing area in the distortion correction process; sequencing the sub-splicing areas according to the pixel change parameters to obtain a sequencing result; determining a target sub-stitching region of the plurality of sub-stitching regions as the target region based on the sorting result; and the target sub-splicing area is positioned at a target position in the sequencing result.
In some embodiments, the compensating the target area of the first target image to be stitched to obtain a corresponding second image to be stitched includes: determining a first target image to be spliced, in which a target view finding object is located, from the at least two frames of first images to be spliced; and carrying out pixel compensation processing on a splicing area where the target view finding object is located in the first target image to be spliced to obtain a corresponding second image to be spliced.
In some embodiments, the compensating the target area of the first target image to be stitched to obtain a corresponding second image to be stitched includes at least one of: performing differential pixel compensation processing on the target sub-stitching region and other sub-stitching regions based on pixel change parameters of each sub-stitching region of the first target image to be stitched in the distortion correction process to obtain a corresponding second image to be stitched; and obtaining the position information of the target view finding object in the first target image to be spliced, and carrying out pixel compensation processing on the splicing area of the first image to be spliced or differential pixel compensation processing on the sub-splicing area of the first image to be spliced based on the position information to obtain a corresponding second image to be spliced.
In some embodiments, the performing distortion correction on the obtained at least two frames of images to be stitched to obtain at least two corresponding frames of first images to be stitched includes: obtaining position information of a target view finding object in the obtained image to be spliced; if the position information indicates that the target view finding object is in the splicing area of two adjacent frames of images to be spliced, carrying out distortion correction on the at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced; and if the position information indicates that the target view finding object is not in the splicing area of the two adjacent frames of images to be spliced, not executing the distortion correction operation.
In some embodiments, the stitching processing is performed on the second image to be stitched to obtain a target image, including at least one of the following: determining an overlapping region in the second image to be spliced, and carrying out splicing processing on the second image to be spliced based on the overlapping region to obtain the target image; obtaining target view finding object information in the second image to be spliced, and carrying out splicing processing on the second image to be spliced based on the target view finding object information to obtain the target image.
In some embodiments, at least one of the following is also included: generating a first image aiming at a target view finding object based on the second image to be spliced, processing the first image into a control capable of being triggered to be displayed by a target operation acting on the target image, and displaying and outputting the target image comprising the control to a target display area; acquiring configuration information of at least two image acquisition modules for acquiring the at least two frames of images to be spliced, and splicing the second images to be spliced based on the configuration information to acquire the target image; the configuration information of the at least two image acquisition modules is the same or different.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the correcting unit is used for carrying out distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
the compensation unit is used for carrying out compensation processing on the target area of the first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
And the splicing unit is used for carrying out splicing processing on the second images to be spliced to obtain a target image.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing steps in the above method when the program is executed.
Drawings
Fig. 1 is a schematic diagram of an implementation flow of an image processing method according to an embodiment of the present application;
fig. 2 is a second schematic implementation flow chart of the image processing method in the embodiment of the present application;
fig. 3 is a third schematic implementation flow chart of the image processing method in the embodiment of the present application;
FIG. 4A is a diagram showing the result of image processing according to an embodiment of the present application;
FIG. 4B is a second schematic diagram of the result of image processing according to the embodiment of the present application;
fig. 5 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are further described in detail below with reference to the drawings and examples. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
It should be noted that the term "first/second/third" in relation to the embodiments of the present application is merely used to distinguish similar objects and does not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence, where allowed, to enable the embodiments of the present application described herein to be practiced in an order other than that illustrated or described herein.
Panoramic cameras based on a multi-lens stitching scheme, such as a 180-degree camera stitched from 2 lenses or a 360-degree camera stitched from 3 lenses, all use wide-angle lenses of more than 100 degrees. Limited by lens distortion and by the stitching algorithm, the four corner edges of each lens sub-picture are heavily distorted, so the stitched complete image also shows obvious distortion, particularly at the upper and lower edges and near the stitching lines. One existing solution is to add more cameras and crop away the heavily distorted corner areas of each camera, using the extra cameras to make up for the picture lost by the reduced horizontal viewing angle of each module. However, this solution sacrifices vertical viewing angle and significantly increases cost because of the added camera modules. Another existing solution is to correct the distortion of the sub-picture shot by each lens and then stitch the corrected pictures. However, the correction algorithm can reduce the local resolution of the picture near the stitching line, resulting in blurred images; in particular, when a person appears in that area, the person alternates between blurred and sharp as their horizontal position changes, which degrades the overall viewing experience.
Based on this, the embodiment of the application provides an image processing method, and the function implemented by the method may be implemented by invoking program codes by a processor in an electronic device, and the program codes may be stored in a storage medium of the electronic device. Fig. 1 is a schematic flow chart of an implementation of an image processing method according to an embodiment of the present application, as shown in fig. 1, where the method includes:
step S101, performing distortion correction on at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
here, the electronic device may be various types of devices having information processing capability, such as a smart phone, a conference integration machine, a navigator, a tablet computer, a wearable device, a laptop, a floor sweeping robot, a smart kitchen and bathroom, a smart home, an automobile, and the like. The electronic device may include a plurality of image acquisition modules, such as a plurality of wide-angle cameras.
In this embodiment of the present application, the at least two frames of images to be stitched may be images acquired by an image acquisition module in an electronic device, and these images contain distortion. For example, the FOV (Field of View) of a wide-angle camera is generally not less than 110 degrees, so compared with a conventional camera it can capture a wider field of view, which is a distinct advantage for scene shooting and scene representation. On the other hand, however, the image acquired by a wide-angle camera has larger distortion; visually, the image edges appear bent. To address this, the intrinsic parameters can be obtained by calibrating the camera, and the acquired image can then be distortion-corrected to obtain a corrected image in which the distortion is eliminated.
Therefore, the at least two frames of images to be spliced may be images acquired by the wide-angle camera, and may, of course, also be images acquired by other types of cameras. In the embodiment of the application, distortion correction processing is performed on the at least two frames of images to be spliced to obtain at least two corresponding processed first frames of images to be spliced, and distortion in the at least two frames of first frames of images to be spliced is eliminated completely or partially.
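Purely as an illustration of step S101 (the embodiment does not prescribe a particular correction implementation), the following Python sketch shows one common way to distortion-correct a frame with OpenCV, assuming the camera matrix and distortion coefficients have already been obtained by calibration; all numeric values and file names are placeholder assumptions.

import cv2
import numpy as np

# Assumed calibration results (placeholders); in practice they come from
# cv2.calibrateCamera() run on calibration captures of each wide-angle module.
camera_matrix = np.array([[800.0, 0.0, 960.0],
                          [0.0, 800.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_frame(frame):
    """Distortion-correct one frame to be stitched; also return the remap
    grids, which later steps can reuse to measure how much each area was
    stretched by the correction."""
    h, w = frame.shape[:2]
    map_x, map_y = cv2.initUndistortRectifyMap(
        camera_matrix, dist_coeffs, None, camera_matrix, (w, h), cv2.CV_32FC1)
    corrected = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    return corrected, map_x, map_y

frames = [cv2.imread(p) for p in ("left.jpg", "right.jpg")]  # images to be spliced
first_images = [undistort_frame(f) for f in frames if f is not None]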
Step S102, performing compensation processing on a target area of a first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
here, the target frame may be determined from at least two first images to be stitched after eliminating distortion, and in this embodiment of the present application, the number of target frames is not limited, and the target frame may include one frame of the at least two first images to be stitched, or may be a multi-frame first image to be stitched of the at least two first images to be stitched. And, the target frame may be determined based on a specified framing object, for example, a face of the specified framing object, and if a face exists in a certain first image to be stitched, the first image to be stitched is the target frame. For another example, if a face exists in a certain first image to be stitched and the face is located in an area where distortion correction processing is performed, the first image to be stitched is a target frame. In this way, the imaging quality of the target viewfinder object in the stitched target image is better than that in the image to be stitched.
In this embodiment of the present application, the target area includes an area in the first target image to be stitched, where distortion correction processing is performed, for example, four corner edges of the first target image to be stitched.
Most current distortion correction methods obtain the module's distortion parameters through calibration and apply a uniform, undifferentiated transformation to the acquired image so as to restore its "straight" characteristics. For example, grid interpolation operations may be performed; very high running frame rates can be achieved on a CPU (Central Processing Unit) through SIMD (Single Instruction Multiple Data) multithreaded operations, or the interpolation rendering can be performed on a GPU (Graphics Processing Unit). However, this process widens the picture at the image edges through "stretching"-like correction operations, which disorders the picture scale and blurs the picture. Therefore, the embodiment of the application performs compensation processing on the target area of the first target image to be spliced to obtain the compensated second image to be spliced. That is, the embodiment of the present application compensates the region of the first target image to be stitched in which the distortion correction processing has been performed, so as to make up for the loss of picture quality caused by the distortion correction processing.
And step S103, performing stitching processing on the second images to be stitched to obtain a target image.
Here, since there are at least two frames of first images to be stitched, the stitching processing may be performed on at least two frames of second images to be stitched; these are generally the two frames of images near the stitching line where the target framing object is located. Of course, the target framing object may also not be near a stitching line of the region to be stitched, in which case there is only one frame of second image to be stitched.
In the embodiment of the application, the compensated second image to be spliced is spliced to obtain the target image. The target image may be a final panoramic image or may be a part of images in the panoramic image.
Here, by the image processing method in the above steps S101 to S103, the loss of the image picture caused by the distortion correction of the image can be compensated, thereby achieving the purpose of improving the visual effect.
In some embodiments, the step S102 performs compensation processing on the target area of the first target image to be stitched to obtain a corresponding second image to be stitched, including at least one of the following:
the first method comprises the steps of performing differential pixel compensation processing on a target sub-stitching region and other sub-stitching regions based on pixel change parameters of each sub-stitching region of a first target image to be stitched in a distortion correction process to obtain a corresponding second image to be stitched;
In this embodiment of the present application, a splicing area of a first target image to be spliced may be divided into a plurality of sub-splicing areas. Wherein the stitching region is a region involved in performing a stitching process, and includes, but is not limited to, an edge region of an image.
Furthermore, the pixel compensation process of differentiating the target sub-stitching region and other sub-stitching regions can be performed based on the pixel variation parameters of each sub-stitching region in the distortion correction process, so that the image quality of the whole stitching region is consistent with that of the non-stitching region. Here, the pixel variation parameters include, but are not limited to: the position change parameter of the pixel, the color change parameter of the pixel, the size change parameter of the pixel and the color bit depth change parameter of the pixel. For example, if the pixel density becomes smaller due to the "stretching" operation in the distortion correction process, the target sub-stitching region and other sub-stitching regions are compensated for differentiated pixel density and/or pixel color based on the pixel variation parameters in the distortion correction process, so as to obtain a corresponding second image to be stitched. The target sub-stitching region may be a sub-stitching region in which the pixel variation parameter in the sub-stitching region meets a preset condition.
Secondly, obtaining the position information of a target view finding object in the first target image to be spliced, and carrying out pixel compensation processing on the splicing area of the first image to be spliced or carrying out differentiated pixel compensation processing on the sub-splicing area of the first image to be spliced based on the position information to obtain a corresponding second image to be spliced.
Here, the target view finding object includes, but is not limited to, a face, an object identifier (such as a license plate number), and the like. The face may be any face or a designated face. The position information of the target view finding object in the first target image to be spliced is determined, and pixel compensation processing or differentiated pixel compensation processing is performed on the sub-splicing areas of the first image to be spliced based on the position information. The target view finding object may or may not be in a splicing region; for example, it may be in a center region. For example, if it is determined that the face is located in a certain area at the image edge of the first target image to be stitched, pixel compensation is performed on the splicing area of the first image to be stitched based on that area, or differentiated pixel compensation is performed on the sub-splicing areas of the first image to be stitched based on that area, so as to obtain a compensated second image to be stitched.
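A minimal sketch of the first approach above, assuming the remap grids (map_x, map_y) from the correction step are available: the pixel change parameter of each sub-splicing area is estimated from how sparsely the corrected picture samples the source image, and areas that were stretched more receive stronger compensation. The threshold and the unsharp-mask style compensation are illustrative assumptions; the embodiment does not fix a particular compensation operator.

import cv2
import numpy as np

def local_stretch(map_x, map_y, y0, y1, x0, x1):
    """Pixel change parameter of one sub-splicing area: how strongly the
    distortion correction stretched it (values > 1 mean the corrected picture
    spread fewer source pixels over more output pixels there)."""
    gx = np.gradient(map_x[y0:y1, x0:x1], axis=1)
    gy = np.gradient(map_y[y0:y1, x0:x1], axis=0)
    return float(1.0 / max(np.mean(np.abs(gx) * np.abs(gy)), 1e-6))

def compensate_subregions(image, map_x, map_y, boxes, stretch_threshold=1.2):
    """Differentiated pixel compensation: sharpen stretched areas more."""
    out = image.copy()
    for (y0, y1, x0, x1) in boxes:
        s = local_stretch(map_x, map_y, y0, y1, x0, x1)
        if s <= stretch_threshold:
            continue                      # area barely changed; leave it as is
        roi = out[y0:y1, x0:x1]
        blur = cv2.GaussianBlur(roi, (0, 0), sigmaX=2.0)
        amount = min(0.8, 0.3 * s)        # stronger compensation for larger stretch
        out[y0:y1, x0:x1] = cv2.addWeighted(roi, 1.0 + amount, blur, -amount, 0)
    return out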
Based on the foregoing embodiments, embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, and the method includes:
step S111, performing distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
step S112, determining a target area of the first target image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
for example, the first target image to be stitched is a first image to be stitched including a face, and the target area of the first target image to be stitched is an area where the face is located and is subjected to distortion correction processing.
Here, the target area of the first target image to be stitched may be determined by at least one of:
firstly, determining a splicing area of the first target image to be spliced, and determining a target splicing area in the splicing area as the target area;
in general, the stitched area of the images is an area on which distortion correction processing is performed, and thus a target stitched area (for example, a stitched area including a target framing object, which may be a human face) among the stitched areas may be determined as a target area. Of course, the stitching region may also be determined directly as the target region.
Secondly, determining a splicing area of the first target image to be spliced, dividing the splicing area into a plurality of sub-splicing areas, and determining a target sub-splicing area in the sub-splicing areas as the target area;
here, the splicing region may be divided into a plurality of sub-splicing regions, and a target sub-splicing region of the plurality of sub-splicing regions is determined as a target region. For example, the target sub-stitching region may be a sub-stitching region in which the pixel variation parameter meets a preset condition.
Step S113, performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced;
and step S114, performing stitching processing on the second images to be stitched to obtain a target image.
Here, by the image processing method in the above steps S111 to S114, the lost area of the image picture caused by the correction of the image distortion can be compensated more specifically, thereby achieving the purpose of improving the visual effect.
In some embodiments, the step S113 of performing compensation processing on the target area of the first target image to be stitched to obtain a corresponding second image to be stitched includes:
Step S1131, determining a first target image to be stitched where a target viewfinder object is located from the at least two frames of first images to be stitched;
and step 1132, performing pixel compensation processing on the stitching region where the target view finding object is located in the first target image to be stitched, to obtain a corresponding second image to be stitched.
Based on the foregoing embodiments, embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, and the method includes:
step S121, performing distortion correction on at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
step S122, determining a splicing area of a first target image to be spliced, and dividing the splicing area into a plurality of sub-splicing areas;
for example, the number of sub-splice areas may be determined based on the size of the splice area, and then the splice area may be divided into a plurality of sub-splice areas based on the number.
Step S123, determining pixel change parameters of each sub-splicing area in the distortion correction process;
for example, the pixel variation parameter may be a pixel stretching degree (vertical stretching degree, diagonal stretching degree, etc.); as another example, the pixel variation parameter may be a degree of pixel shift.
Step S124, sorting the sub-splicing areas according to the pixel change parameters to obtain a sorting result;
step S125, determining a target sub-splicing area in the sub-splicing areas as a target area based on the sorting result; wherein the target sub-splicing area is positioned at a target position in the sequencing result; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is performed, in the first target image to be spliced;
for example, the sub-stitching regions may be ranked according to the pixel stretching degree from high to low, so as to obtain a ranking result. And then determining the target sub-splicing area at the target position (for example, the first 5 sorting positions) in the sorting result as the target area.
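As an illustration of steps S122 to S125 only (the tile height, the stretch metric and the choice of the first five positions are assumptions), the splicing strip near the seam can be tiled, scored with a pixel change parameter derived from the correction remap grid, and sorted:

import numpy as np

def pick_target_subregions(map_x, seam_x0, seam_x1, height, tile_h=64, top_n=5):
    """Split the splicing strip [seam_x0, seam_x1) into vertical tiles, rank
    them by how much they were stretched during distortion correction, and
    return the top_n tiles as the target sub-splicing areas."""
    scored = []
    for y0 in range(0, height, tile_h):
        y1 = min(y0 + tile_h, height)
        gx = np.gradient(map_x[y0:y1, seam_x0:seam_x1], axis=1)
        stretch = 1.0 / max(float(np.mean(np.abs(gx))), 1e-6)
        scored.append((stretch, (y0, y1, seam_x0, seam_x1)))
    scored.sort(key=lambda item: item[0], reverse=True)  # largest stretch first
    return [box for _, box in scored[:top_n]]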
Step S126, performing compensation processing on the target area of the first target image to be spliced to obtain a corresponding second image to be spliced;
in the embodiment of the application, compensation processing can be performed on the target sub-stitching areas in the sub-stitching areas to obtain the corresponding second image to be stitched.
And S127, performing stitching processing on the second images to be stitched to obtain a target image.
Here, by the image processing method in the above steps S121 to S127, the loss of the picture of the target area caused by the correction of the image distortion can be compensated, thereby achieving the purpose of improving the visual effect accurately.
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, fig. 2 is a second schematic implementation flow diagram of the image processing method of the embodiments of the present application, and as shown in fig. 2, the method includes:
step S201, performing distortion correction on at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
step S202, determining a first target image to be spliced, in which a target view finding object is located, from the at least two frames of first images to be spliced;
for example, if the at least two frames of the first images to be stitched each include the target viewfinder object, the at least two frames of the first images to be stitched are the first target images to be stitched. For another example, if only one frame of the image to be stitched includes the target object, the frame of the image to be stitched is the first target image to be stitched. For another example, if the two frames of the first image to be stitched each include the target view object, but only the target view object in one frame is located in a specific area (such as a stitching area), the frame of the first image to be stitched is the first target image to be stitched.
In this embodiment, the number of target view finding objects in a single frame of image to be stitched is not limited. If there are multiple target view finding objects in a first image to be stitched, they may be located in the same stitching region or in different stitching regions. If they are located in the same stitching region, pixel compensation processing is performed on the multiple target view finding objects in that stitching region; if they are located in different stitching regions, pixel compensation processing is performed on the target view finding objects in the different regions, thereby obtaining a processed second image to be stitched.
Step 203, performing pixel compensation processing on a stitching region where the target view finding object is located in the first target image to be stitched, to obtain a corresponding second image to be stitched;
for example, pixel compensation is performed on a stitching region where a face in the image to be stitched of the first target is located, so that compensation processing can be performed for the problem that the distortion correction algorithm causes stretching of the face, and an improved image is obtained.
Here, the pixel compensation process includes, but is not limited to: compensation processing for pixel density, compensation processing for pixel color, compensation processing for pixel arrangement rule, and the like.
In this embodiment of the present application, if the target viewfinder object is not unique, the second image to be stitched is also not unique. And, one target view finding object may correspond to two frames of the second image to be stitched.
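The embodiment does not name a particular detector or compensation operator; purely as a sketch of steps S202 and S203, a stock OpenCV face detector can locate the target view finding object, and only the splicing strip that contains it is compensated (here with the same assumed unsharp-mask style compensation used above):

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def compensate_face_in_seam(first_image, seam_x0, seam_x1):
    """If a detected face overlaps the splicing strip [seam_x0, seam_x1),
    sharpen that strip (step S203); otherwise return the image unchanged."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = first_image.copy()
    for (x, y, w, h) in faces:
        if x + w <= seam_x0 or x >= seam_x1:
            continue                      # face not in this frame's splicing area
        strip = out[:, seam_x0:seam_x1]
        blur = cv2.GaussianBlur(strip, (0, 0), sigmaX=2.0)
        out[:, seam_x0:seam_x1] = cv2.addWeighted(strip, 1.5, blur, -0.5, 0)
        break
    return out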
And step S204, performing stitching processing on the second images to be stitched to obtain a target image.
Here, by the image processing method in the above steps S201 to S204, the loss of the picture of the target framing object caused by the distortion correction of the image can be compensated, thereby achieving the purpose of improving the visual effect accurately.
Based on the foregoing embodiments, embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, and the method includes:
step S211, obtaining position information of a target view finding object in the obtained image to be spliced;
step S212, if the position information represents the splicing area of the target view finding object in two adjacent frames of images to be spliced, performing distortion correction on the at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
for example, if four images to be stitched exist, wherein faces exist in two images to be stitched, and the faces are located in stitching areas of two adjacent images to be stitched (for example, half of the faces are located in stitching areas of one image to be stitched, and the other half of the faces are located in stitching areas of another image to be stitched), distortion correction is performed on the two images to be stitched, so as to obtain at least two corresponding first images to be stitched.
Step S213, if the position information represents that the target view finding object is not in the splicing area of two adjacent frames of images to be spliced, the distortion correction operation is not executed;
for example, if the target object is located in a central region of a certain image to be stitched, the distortion correcting operation is not performed.
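A sketch of the gating logic in steps S211 to S213, under the assumption that each frame's splicing area is a fixed strip along the edge shared with the adjacent frame (the embodiment leaves the exact definition of the splicing area open):

def should_correct(face_box, frame_width, seam_width=200):
    """Return True only when the target view finding object (face_box =
    (x, y, w, h)) falls inside the splicing strip at either side edge of the
    frame, i.e. the area shared with an adjacent frame (steps S212 / S213)."""
    x, _, w, _ = face_box
    in_left_seam = x < seam_width
    in_right_seam = x + w > frame_width - seam_width
    return in_left_seam or in_right_seam

# Example: a face centred in a 1920-pixel-wide frame needs no correction.
assert should_correct((900, 400, 120, 120), frame_width=1920) is False
assert should_correct((1850, 400, 120, 120), frame_width=1920) is True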
Step S214, performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
and step S215, performing stitching processing on the second images to be stitched to obtain a target image.
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, fig. 3 is a schematic diagram of an implementation flow of the image processing method of the embodiments of the present application, and as shown in fig. 3, the method includes:
step S301, performing distortion correction on at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
Step S302, performing compensation processing on a target area of a first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
step 303, determining an overlapping area in the second image to be stitched, and stitching the second image to be stitched based on the overlapping area to obtain the target image.
For example, after the image quality loss caused by distortion correction is compensated by an AI (Artificial Intelligence) processor, the ISP (Image Signal Processor) performs stitching processing on the overlapping portions of the multiple pictures, so that a panoramic image without distortion or image quality loss can finally be generated.
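The embodiment relies on the ISP's existing stitching algorithm for this step; as an assumed stand-in, OpenCV's high-level stitcher can merge the compensated frames based on their overlapping region:

import cv2

def stitch(second_images):
    """Stitch the compensated frames (step S303) into the target image."""
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, target_image = stitcher.stitch(second_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return target_image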
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing method, where the image processing method is applied to an electronic device, fig. 3 is a schematic diagram of an implementation flow of the image processing method of the embodiments of the present application, and as shown in fig. 3, the method includes:
step S311, performing distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
Step S312, performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
step S313, obtaining target view finding object information in the second image to be stitched, and stitching the second image to be stitched based on the target view finding object information to obtain the target image.
Here, the target view object information includes, but is not limited to: location information of the target viewing object, number information of the target viewing object, and the like. Further, the stitching process may be performed based on the position information and/or the number information.
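One possible way to use the target view finding object information during stitching (an illustrative assumption, not a method prescribed by the embodiment) is to place the seam inside the overlapping area so that it does not cut through any detected face:

def choose_seam_column(overlap_x0, overlap_x1, face_boxes):
    """Pick a seam column in [overlap_x0, overlap_x1) that avoids every face
    bounding box (x, y, w, h); fall back to the overlap centre if impossible."""
    for x in range(overlap_x0, overlap_x1):
        if all(not (fx <= x < fx + fw) for (fx, _, fw, _) in face_boxes):
            return x
    return (overlap_x0 + overlap_x1) // 2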
In some embodiments, the image processing method further comprises at least one of:
the method comprises the steps of firstly, generating a first image aiming at a target view finding object based on the second image to be spliced, processing the first image into a control capable of being triggered to be displayed by target operation acting on the target image, and displaying and outputting the target image comprising the control to a target display area;
Here, the control may be a thumbnail image, or an entire image may be used as the control, or additional controls may be added, so long as the control is generated based on the first image and can be triggered to be displayed by a target operation acting on the target image.
Secondly, configuration information of at least two image acquisition modules for acquiring the at least two frames of images to be spliced is obtained, and splicing processing is carried out on the second images to be spliced based on the configuration information, so that the target image is obtained; the configuration information of the at least two image acquisition modules is the same or different.
In this embodiment of the application, different images to be spliced can be acquired by different image acquisition modules, and the configuration information of different image acquisition modules can be the same or different. If the configuration information is different, more complex pixel conversion, pixel compensation and other operations can be performed to achieve the purpose of stitching.
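When the image acquisition modules have different configuration information, the frames may first need to be brought to a common format before stitching; the following sketch is only one assumed pre-processing step (the target resolution is a placeholder):

import cv2

def normalize_for_stitching(frames, target_size=(1920, 1080)):
    """Bring frames from differently configured acquisition modules to a
    common resolution and colour format before the stitching step."""
    out = []
    for f in frames:
        if f.ndim == 2:                   # grayscale module -> 3-channel BGR
            f = cv2.cvtColor(f, cv2.COLOR_GRAY2BGR)
        out.append(cv2.resize(f, target_size, interpolation=cv2.INTER_LINEAR))
    return out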
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing method that can compensate for the image quality loss at the image edge portion after distortion correction by using only an AI algorithm, without increasing the number of physical cameras. The method mainly includes the following three parts:
(1) The existing multi-lens spliced camera consists of a plurality of lenses, a plurality of image sensors, an image signal processor ISP and other main components. The ISP is responsible for processing pixel information acquired by each image sensor, executing a stitching algorithm to stitch the pictures of a plurality of images and outputting a complete panoramic picture.
(2) On this basis, the embodiment of the application adds an AI processor, which calculates, according to the ISP's distortion correction algorithm, the range of regions that are stretched more strongly after correction, and uses an AI algorithm to restore the lost picture precision for the image regions within that range, so that the picture quality within each picture is basically consistent.
Fig. 4A is a schematic diagram of the result of image processing in the embodiment of the present application. As shown in fig. 4A, the picture 41 is an original image captured by two cameras; vertical stretching is required at the dashed-frame portion to correct distortion. The picture 42 is the picture 41 after distortion correction; at this point the pixel density near the stitching line becomes low, so that the subject B is unclear. The picture 43 is obtained by compensating the stitching region of the picture 42 using the AI algorithm, and the compensated subject B becomes clear.
(3) After the image quality loss caused by distortion correction is compensated by the AI processor, the ISP performs splicing processing on the overlapped part of the multiple paths of pictures, and finally, a panoramic image without distortion and image quality loss can be generated.
Here, the AI algorithm is one of the image enhancement techniques: through learning, it compensates the picture toward a higher effective resolution, for example by restoring colors and straight lines (that is, compensation of image quality). If only optical compensation (such as a spherical transformation) is adopted, the effective pixels of a given area are merely spread uniformly over a larger area; the pixel density decreases and, although the deformation is resolved, the image quality remains degraded.
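The embodiment does not disclose the concrete AI enhancement model; as a rough, purely illustrative stand-in for the kind of learned detail restoration described above (as opposed to optically re-spreading the pixels), an edge-preserving detail-enhancement filter could be applied to the compensated strip:

import cv2

def ai_like_detail_compensation(region):
    """Illustrative placeholder for the AI compensation step: restore apparent
    detail in a stretched stitching region. A real implementation would use a
    learned enhancement / super-resolution model instead of this filter."""
    return cv2.detailEnhance(region, sigma_s=10, sigma_r=0.15)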
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing method, which can perform software correction on the cropped head-portrait portion without increasing the number of physical cameras. Taking a 180-degree camera formed by stitching 2 lenses as an example, the implementation method is introduced:
(1) When a person appears in the center area of a lens, where the distortion is relatively small, no correction processing is required when capturing a picture of the person's head region.
(2) If the person appears near the stitching line, the camera's image correction module corrects the distortion of the area where the person's head portrait is located, and outputs the cropped picture. That is, the image correction module corrects the distortion of the partial crop only when it receives an instruction for the person-head framing function and the person appears within a certain area around the stitching line.
FIG. 4B is a second schematic diagram of the result of image processing according to the embodiment of the present application. As shown in FIG. 4B, the picture 44 is an image without AI compensation when the ABC region needs to be cropped out; the picture 45 is an AI-compensated image when the ABC region needs to be cropped out.
Based on the foregoing embodiments, the embodiments of the present application provide an image processing apparatus. The modules included in the apparatus, the units included in each module, and the components included in each unit may all be implemented by a processor in an electronic device; of course, they may also be implemented by specific logic circuits. In an implementation, the processor may be a CPU (Central Processing Unit), an MPU (Microprocessor Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or the like.
Fig. 5 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present application, as shown in fig. 5, the apparatus 500 includes:
the correcting unit 501 is configured to correct distortion of the obtained at least two frames of images to be stitched, so as to obtain at least two corresponding frames of first images to be stitched;
The compensation unit 502 is configured to perform compensation processing on a target area of the first target image to be stitched, so as to obtain a corresponding second image to be stitched; the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area, in which distortion correction processing is executed, in the first target image to be spliced;
and the stitching unit 503 is configured to perform stitching processing on the second image to be stitched, so as to obtain a target image.
In some embodiments, further comprising:
the determining unit is used for determining a target area of the first target image to be spliced;
wherein the determining unit includes at least one of:
the first determining module is used for determining a splicing area of the first target image to be spliced and determining a target splicing area in the splicing area as the target area;
the second determining module is used for determining a splicing area of the first target image to be spliced, dividing the splicing area into a plurality of sub-splicing areas, and determining a target sub-splicing area in the sub-splicing areas as the target area.
In some embodiments, the second determining module includes:
The second determining submodule is used for determining pixel change parameters of each sub-splicing area in the distortion correction process;
the second determining submodule is further used for sequencing the sub-splicing areas according to the pixel change parameters to obtain a sequencing result;
the second determining submodule is further configured to determine a target sub-splicing region of the plurality of sub-splicing regions as the target region based on the sorting result; and the target sub-splicing area is positioned at a target position in the sequencing result.
In some embodiments, the compensation unit 502 includes:
the compensation subunit is used for determining a first target image to be spliced, in which a target view finding object is located, from the at least two frames of first images to be spliced;
and the compensation subunit is further configured to perform pixel compensation processing on a stitching region where the target viewfinder object is located in the first target image to be stitched, so as to obtain a corresponding second image to be stitched.
In some embodiments, the compensation unit 502 includes at least one of:
the first compensation module is used for carrying out differentiated pixel compensation processing on the target sub-stitching area and other sub-stitching areas based on pixel change parameters of each sub-stitching area of the first target image to be stitched in the distortion correction process to obtain a corresponding second image to be stitched;
The second compensation module is used for obtaining the position information of the target view finding object in the first target image to be spliced, carrying out pixel compensation processing on the splicing area of the first image to be spliced or carrying out differentiated pixel compensation processing on the sub-splicing area of the first image to be spliced based on the position information, and obtaining a corresponding second image to be spliced.
In some embodiments, the correction unit 501 includes:
the correction subunit is used for obtaining the position information of the target view finding object in the obtained image to be spliced;
the correction subunit is further configured to, if the position information indicates that the target viewfinder object is in a splicing area of two adjacent frames of images to be spliced, correct distortion of the at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
and the correction subunit is further configured to not perform the distortion correction operation if the location information indicates that the target viewfinder object is not in a stitching region of two adjacent frames of images to be stitched.
In some embodiments, the stitching unit 503 includes at least one of:
the first stitching module is used for determining an overlapping area in the second image to be stitched, and stitching the second image to be stitched based on the overlapping area to obtain the target image;
The second stitching module is used for obtaining target view finding object information in the second image to be stitched, and stitching the second image to be stitched based on the target view finding object information to obtain the target image.
In some embodiments, at least one of the following is also included:
the first processing unit is used for generating a first image aiming at a target view finding object based on the second image to be spliced, processing the first image into a control capable of being triggered to be displayed by a target operation acting on the target image, and displaying and outputting the target image comprising the control to a target display area;
the second processing unit is used for obtaining configuration information of at least two image acquisition modules for acquiring the at least two frames of images to be spliced, and splicing the second images to be spliced based on the configuration information to obtain the target image; the configuration information of the at least two image acquisition modules is the same or different.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
In the embodiment of the present application, if the image processing method is implemented in the form of a software functional module and sold or used as a separate product, the image processing method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in essence or in a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a personal computer, a server, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a ROM (Read Only Memory), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the application provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes the steps in the image processing method provided in the embodiment when executing the program.
Correspondingly, the embodiment of the application provides a readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the above-mentioned image processing method.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that fig. 6 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application, as shown in fig. 6, the hardware entity of the electronic device 600 includes: a processor 601, a communication interface 602 and a memory 603, wherein
The processor 601 generally controls the overall operation of the electronic device 600.
The communication interface 602 may enable the electronic device 600 to communicate with other electronic devices or servers or platforms over a network.
The memory 603 is configured to store instructions and applications executable by the processor 601, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by each module in the processor 601 and the electronic device 600, and may be implemented by FLASH (FLASH) or RAM (Random Access Memory ).
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only one kind of logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communicative connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units. Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing describes merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, comprising:
performing distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
performing compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced, wherein the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area of the first target image to be spliced on which distortion correction processing has been performed;
and performing stitching processing on the second images to be spliced to obtain a target image.
2. The method of claim 1, further comprising:
determining a target area of the first target image to be spliced,
wherein the determining the target area of the first target image to be spliced comprises at least one of the following:
determining a splicing area of the first target image to be spliced, and determining a target splicing area in the splicing area as the target area; and
determining a splicing area of the first target image to be spliced, dividing the splicing area into a plurality of sub-splicing areas, and determining a target sub-splicing area among the plurality of sub-splicing areas as the target area.
3. The method of claim 2, wherein the determining a target sub-splicing area among the plurality of sub-splicing areas as the target area comprises:
determining a pixel change parameter of each sub-splicing area in the distortion correction process;
sorting the sub-splicing areas according to the pixel change parameters to obtain a sorting result;
and determining, based on the sorting result, a target sub-splicing area among the plurality of sub-splicing areas as the target area, wherein the target sub-splicing area is located at a target position in the sorting result.
4. The method according to claim 1 or 2, wherein the performing compensation processing on the target area of the first target image to be spliced to obtain the corresponding second image to be spliced comprises:
determining, from the at least two frames of first images to be spliced, a first target image to be spliced in which a target viewfinding object is located;
and performing pixel compensation processing on a splicing area in which the target viewfinding object is located in the first target image to be spliced to obtain a corresponding second image to be spliced.
5. The method according to claim 1 or 2, wherein the performing compensation processing on the target area of the first target image to be spliced to obtain the corresponding second image to be spliced comprises at least one of the following:
performing differential pixel compensation processing on the target sub-splicing area and the other sub-splicing areas, based on the pixel change parameter of each sub-splicing area of the first target image to be spliced in the distortion correction process, to obtain a corresponding second image to be spliced; and
obtaining position information of the target viewfinding object in the first target image to be spliced, and, based on the position information, performing pixel compensation processing on the splicing area of the first target image to be spliced, or differential pixel compensation processing on the sub-splicing areas of the first target image to be spliced, to obtain a corresponding second image to be spliced.
6. The method of claim 1, wherein the performing distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced comprises:
obtaining position information of a target viewfinding object in the obtained images to be spliced;
if the position information indicates that the target viewfinding object is located in a splicing area of two adjacent frames of images to be spliced, performing distortion correction on the at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
and if the position information indicates that the target viewfinding object is not located in the splicing area of the two adjacent frames of images to be spliced, not performing the distortion correction operation.
7. The method according to claim 1, wherein the performing stitching processing on the second images to be spliced to obtain a target image comprises at least one of the following:
determining an overlapping region in the second images to be spliced, and performing stitching processing on the second images to be spliced based on the overlapping region to obtain the target image; and
obtaining target viewfinding object information in the second images to be spliced, and performing stitching processing on the second images to be spliced based on the target viewfinding object information to obtain the target image.
8. The method of claim 1, further comprising at least one of the following:
generating, based on the second image to be spliced, a first image for a target viewfinding object, processing the first image into a control that can be triggered for display by a target operation acting on the target image, and displaying and outputting the target image including the control to a target display area; and
obtaining configuration information of at least two image acquisition modules used to acquire the at least two frames of images to be spliced, and performing stitching processing on the second images to be spliced based on the configuration information to obtain the target image, wherein the configuration information of the at least two image acquisition modules is the same or different.
9. An image processing apparatus, comprising:
a correction unit, configured to perform distortion correction on the obtained at least two frames of images to be spliced to obtain at least two corresponding frames of first images to be spliced;
a compensation unit, configured to perform compensation processing on a target area of the first target image to be spliced to obtain a corresponding second image to be spliced, wherein the first target image to be spliced is a target frame in the at least two frames of first images to be spliced, and the target area comprises an area of the first target image to be spliced on which distortion correction processing has been performed;
and a stitching unit, configured to perform stitching processing on the second images to be spliced to obtain a target image.
10. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the image processing method of any one of claims 1 to 8.
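To make the sub-area handling of claims 2, 3, and 5 more concrete, the following hypothetical NumPy sketch divides a splicing area into sub-splicing areas, ranks them by how much the distortion correction changed their pixels, and compensates the top-ranked areas more strongly. The grid size, the use of mean absolute intensity change as the "pixel change parameter", and the per-area gains are assumptions for illustration only, not details fixed by the claims.

```python
import numpy as np

def split_into_subareas(x, y, w, h, rows=2, cols=4):
    """Divide a splicing area (x, y, w, h) into a rows x cols grid of sub-splicing areas."""
    sub_w, sub_h = w // cols, h // rows
    return [(x + c * sub_w, y + r * sub_h, sub_w, sub_h)
            for r in range(rows) for c in range(cols)]

def pixel_change_parameter(before, after, area):
    """Mean absolute intensity change inside `area` (a stand-in for the pixel change parameter)."""
    ax, ay, aw, ah = area
    b = before[ay:ay + ah, ax:ax + aw].astype(np.float32)
    a = after[ay:ay + ah, ax:ax + aw].astype(np.float32)
    return float(np.mean(np.abs(a - b)))

def differential_compensation(before, corrected, splice_area, top_k=3,
                              strong_gain=1.2, weak_gain=1.05):
    """Rank sub-splicing areas by pixel change and compensate them differentially."""
    subareas = split_into_subareas(*splice_area)
    ranked = sorted(subareas,
                    key=lambda s: pixel_change_parameter(before, corrected, s),
                    reverse=True)
    out = corrected.astype(np.float32)
    for i, (sx, sy, sw, sh) in enumerate(ranked):
        gain = strong_gain if i < top_k else weak_gain   # target sub-areas get more gain
        out[sy:sy + sh, sx:sx + sw] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)
```

Comparing the corrected frame against the uncorrected one pixel-by-pixel is a simplification; a real implementation would account for the geometric remapping performed by the distortion correction when deriving the change parameter.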
CN202310485527.7A 2023-04-28 2023-04-28 Image processing method and device and electronic equipment Pending CN116563106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310485527.7A CN116563106A (en) 2023-04-28 2023-04-28 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116563106A (en) 2023-08-08

Family

ID=87485462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310485527.7A Pending CN116563106A (en) 2023-04-28 2023-04-28 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116563106A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination