CN117437121A - Image stitching method and device, electronic equipment, medium and product

Info

Publication number: CN117437121A
Application number: CN202311443333.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 田明哲, 张东波, 焦少慧
Applicant and assignee: Beijing Zitiao Network Technology Co Ltd
Legal status: Pending

Classifications

    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/269: Analysis of motion using gradient-based methods
    • G06T 2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T 2207/20221: Image fusion; image merging


Abstract

The embodiments of the present disclosure disclose an image stitching method and apparatus, an electronic device, a storage medium and a product. The method includes: acquiring a group of images to be processed captured by a preset imaging device array, and performing bidirectional optical flow estimation on the pair of images to be processed corresponding to each two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result; performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror (clean-plate) background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed; and performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed. According to this technical solution, the stitching effect in the stitched region of the panoramic stereoscopic image is more natural, and the visual effect is improved.

Description

Image stitching method and device, electronic equipment, medium and product
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to an image stitching method, an image stitching device, electronic equipment, a medium and a product.
Background
Panoramic images can present the photographed scene with a wider viewing angle and express the information of the imaged objects effectively and comprehensively. Currently, a fisheye camera array is mostly adopted to capture wide-angle images, which are then stitched to obtain a final omni-directional stereo (ODS) view of the captured scene.
However, the transition at the stitched seams of the panoramic image obtained in this way is unnatural, and deformation may occur where the foreground and the background of the image adjoin, so the image effect needs to be further optimized.
Disclosure of Invention
The present disclosure provides an image stitching method and apparatus, an electronic device, a medium and a product, which can reduce interference between foreground and background pixels in the image stitching region, make the stitching effect in the stitched region of the panoramic stereoscopic image more natural, and improve the visual effect of the image.
In a first aspect, an embodiment of the present disclosure provides an image stitching method, including:
acquiring a group of images to be processed acquired by a preset imaging device array, and performing bidirectional optical flow estimation on the images to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
and performing image stitching based on each left-eye image fragment to be stitched and each right-eye image fragment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
In a second aspect, an embodiment of the present disclosure further provides an image stitching apparatus, including:
the image acquisition module is used for acquiring a group of images to be processed acquired by a preset imaging device array, and carrying out bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
the image segment processing module is used for performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and for fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
And the image segment stitching module is used for stitching images based on each left-eye image segment to be stitched and each right-eye image segment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stitching method as described in any of the embodiments of the present disclosure.
In a fourth aspect, the presently disclosed embodiments also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing the image stitching method according to any of the presently disclosed embodiments.
In a fifth aspect, the disclosed embodiments also provide a computer program product comprising a computer program which, when executed by a processor, implements the image stitching method according to any of the embodiments of the present disclosure.
According to the embodiments of the present disclosure, a group of images to be processed captured by a preset imaging device array is acquired, and bidirectional optical flow estimation is performed on the pair of images to be processed corresponding to each two adjacent imaging devices in the array, obtaining a first optical flow estimation result; image mapping is performed based on the first optical flow estimation result and a second optical flow estimation result of the corresponding empty mirror background image pair, that is, the optical flow information of the foreground and background pixels at the stitching position is corrected during image mapping, and the image mapping results are then fused to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed; and image stitching is performed based on each left-eye image segment to be stitched and each right-eye image segment to be stitched to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed. This technical solution solves the problem that the seams of current panoramic stereoscopic views are not natural enough, avoids the distortion of foreground and background edges that may occur after stitching, makes the stitching effect in the stitched region of the panoramic stereoscopic image more natural, and improves the image effect.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of an image stitching method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of an array of imaging devices according to an embodiment of the present disclosure;
fig. 3 is a flowchart of an image stitching method according to an embodiment of the present disclosure;
fig. 4 is a flowchart of an image stitching method according to an embodiment of the present disclosure;
fig. 5 is a flowchart of an image stitching method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image stitching device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the operation being requested will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that executes the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, by way of a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control for the user to choose whether to "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
It will be appreciated that the data (including but not limited to the data itself, the acquisition or use of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
Fig. 1 is a schematic flow chart of an image stitching method provided by an embodiment of the present disclosure. The embodiment is suitable for scenarios in which a panoramic view is generated, and in particular for stitching the large-field-of-view images captured by a fisheye camera array into an omnidirectional stereoscopic view. The method may be performed by an image stitching apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC or a server.
As shown in fig. 1, the image stitching method includes:
s110, acquiring a group of to-be-processed images acquired by a preset imaging device array, and performing bidirectional optical flow estimation on to-be-processed image pairs corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result.
The preset imaging device array is a group of imaging devices arranged uniformly on the same plane. Overlapping areas exist between the fields of view of adjacent imaging devices, and the combined field of view of all the imaging devices can be greater than or equal to 360°; the number of imaging devices in the preset imaging device array is therefore not limited. By way of example, the preset imaging device array may be an annular fisheye camera array comprising 8 imaging devices, as shown in fig. 2.
In the image stitching process, a group of images to be processed, which are acquired by a preset imaging device array at the same time, are taken as image processing objects, and stitching is performed based on overlapping areas of images acquired by every two adjacent imaging devices, so that a target panoramic image is obtained. The image to be processed may be an original image acquired by the imaging device, or an image of an effective area subjected to preprocessing such as image segment extraction based on the original image. Typically, it will be appreciated that the original image acquired by the imaging device is an image calibrated by internal and external parameters and distortion parameters of the imaging device.
In an alternative embodiment, images are acquired using the imaging devices shown in fig. 2. Calibration is performed through a preset calibration model, and the fisheye image captured by each fisheye camera is converted into a panoramic image; then, the image segment in the middle of the panoramic image, within the effective view angle range, is taken as the image to be processed. That is, the latitude-longitude (equirectangular) map corresponding to the panoramic image is obtained, and half of the longitude range is taken to obtain a semi-panoramic image. Correspondingly, a group of images to be processed is the 8 semi-panoramic images corresponding to the 8 fisheye cameras.
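To make the last step concrete, the following minimal sketch extracts the semi-panorama, assuming the calibrated equirectangular image spans 360° of longitude and the camera's effective view is the centered half; the function name and the centering convention are illustrative assumptions, not fixed by the patent.

import numpy as np

def semi_panorama(equi_img: np.ndarray) -> np.ndarray:
    # Keep the middle half of the longitude axis (image width) as the
    # to-be-processed semi-panoramic image; centering is an assumption.
    width = equi_img.shape[1]
    return equi_img[:, width // 4: 3 * width // 4]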
In the image stitching process, the images to be processed captured by each two adjacent imaging devices are taken as a group from which an image segment is generated. As the basis of the subsequent steps, bidirectional optical flow estimation is first performed on each pair of to-be-processed images corresponding to two adjacent imaging devices, to obtain a first optical flow estimation result. In the following description, the two adjacent imaging devices are referred to as the left camera and the right camera according to their relative positions. Correspondingly, the two images in the image pair to be processed are referred to as the left camera image and the right camera image.
Illustratively, the 8 fisheye cameras in fig. 2 form eight groups of adjacent cameras, corresponding to 8 pairs of images to be processed. Each pair of images to be processed corresponds to one first optical flow estimation result. The first optical flow estimation result is bidirectional: it includes the optical flow estimated from the left camera image to the right camera image, and the optical flow estimated from the right camera image to the left camera image.
The optical flow estimation may be implemented by a neural network or by an image block matching algorithm (patch-match). For example, the overlapping (overlap) regions of the image pair to be processed may first be cropped out. Then, a multi-layer image pyramid is constructed from the pair of overlap-region images, the patch-match algorithm searches for the best match along the designated direction (horizontal) within the designated range (a preset optical flow range), and the high-resolution optical flow estimation result is obtained coarse-to-fine by combining the matching results across the pyramid layers.
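As an illustration of the bidirectional estimation, the sketch below uses OpenCV's Farneback flow as a stand-in for the coarse-to-fine patch-match search described above (Farneback itself runs over a multi-level pyramid); the function and parameter choices are assumptions, not the patent's exact algorithm.

import cv2
import numpy as np

def bidirectional_flow(left_overlap: np.ndarray, right_overlap: np.ndarray):
    # Bidirectional flow over the cropped overlap regions of two adjacent
    # camera images. Returns (flow left->right, flow right->left), each HxWx2.
    l_gray = cv2.cvtColor(left_overlap, cv2.COLOR_BGR2GRAY)
    r_gray = cv2.cvtColor(right_overlap, cv2.COLOR_BGR2GRAY)
    args = dict(pyr_scale=0.5, levels=5, winsize=31,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    flow_lr = cv2.calcOpticalFlowFarneback(l_gray, r_gray, None, **args)
    flow_rl = cv2.calcOpticalFlowFarneback(r_gray, l_gray, None, **args)
    return flow_lr, flow_rl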
S120, performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
This step generates new binocular viewpoint images from the pair of images to be processed. It comprises image segment selection, image optical flow update, image mapping, and fusion of the image mapping results.
In the first step, optical flow information of a preset area is selected from the first optical flow estimation result, realizing image segment selection. The location and dimensions of the preset area may be determined according to the resolution requirement of the finally generated panoramic image and the number of imaging devices in the preset array.
Specifically, from the resolution requirement of the finally generated panoramic image and the number of imaging devices in the preset array, the resolution of the image segment to be stitched that each pair of adjacent imaging devices contributes within its view angle range follows directly. For example, with the annular fisheye array shown in fig. 2, a total of 8 fisheye imaging devices are employed. Assuming the resolution of the omnidirectional stereoscopic view to be generated by stitching is Height × Width (e.g. 3840 × 7680; the dual-view resolution is then 7680 × 7680), each pair of adjacent cameras needs to interpolate and fuse an image segment (slice) of width Width/8 within the corresponding view angle range.
If a monocular panoramic image were generated, it would suffice to crop and fuse the image segments corresponding to the optical flow estimation of the Width/8-wide region at the very center of the overlapping area of the left and right camera images. In this embodiment, when generating a binocular stereoscopic stitched panoramic view, the crop positions must be shifted for the left and right eyes before the segments are cut out. For example, with the offset amount set to Offset, the left eye crops a region shifted left by Offset from the center of the overlapping region of the left and right camera images, and the right eye crops a region shifted right by Offset. This finally yields a left-eye image segment and a right-eye image segment corresponding to the left camera image, and a left-eye image segment and a right-eye image segment corresponding to the right camera image.
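A minimal sketch of this offset selection follows, assuming each slice is described by its left edge within the overlap region; the names and the symmetric-offset convention are illustrative.

def eye_slice_bounds(overlap_width: int, slice_width: int, offset: int):
    # Horizontal start of the centered slice within the overlap region.
    center_x0 = (overlap_width - slice_width) // 2
    # Left eye shifts left of center, right eye shifts right, by `offset`.
    return center_x0 - offset, center_x0 + offset

# left_x0, right_x0 = eye_slice_bounds(overlap_w, width // 8, offset)
# left_eye_segment  = image[:, left_x0:left_x0 + width // 8]
# right_eye_segment = image[:, right_x0:right_x0 + width // 8]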
In the second step, image optical flow update and image mapping are carried out on the image segments selected in the first step, according to the first optical flow estimation result of the image pair to be processed and the second optical flow estimation result of the corresponding empty mirror background image pair.
The empty mirror background image is understood as a shot of the scene without any dynamic people or objects: only the landscape or buildings are captured, i.e. the background against which dynamic subjects will later appear. Each image to be processed corresponds to one empty mirror background image.
The second optical flow estimation result of the empty mirror background image pair is likewise bidirectional. It is used to replace the optical flow information at background pixels onto which foreground pixels are mapped during image mapping, thereby realizing dynamic update of the optical flow information.
The dynamic update of the optical flow accounts for the influence that an introduced foreground has, during optical flow mapping, on the information of the surrounding foreground and of the background it occludes (distortion or deletion of the optical flow field). This information must be repaired to obtain a correct mapping result. For example, suppose pixel a in an image segment is a foreground pixel and the position it maps to is a background pixel; then, in the image mapping result, that background position carries the optical flow information of the foreground pixel, which may cause distortion or holes at the foreground and background edges in subsequent stitching. By replacing the optical flow information of pixels like pixel a, these problems affecting the image effect can be avoided.
After image mapping, the mapped segments, within the corresponding view angle range, of the left-eye and right-eye image segments of the left camera image and of the left-eye and right-eye image segments of the right camera image are obtained.
And thirdly, fusing image mapping results.
The left-eye mapped segment of the left camera image is fused with the left-eye mapped segment of the right camera image to obtain the left-eye image segment to be stitched for the corresponding target segment. Similarly, the right-eye mapped segment of the left camera image is fused with the right-eye mapped segment of the right camera image to obtain the right-eye image segment to be stitched for the corresponding target segment.
And S130, performing image stitching based on each left-eye image fragment to be stitched and each right-eye image fragment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
Splicing each left-eye image segment to be spliced to obtain a left-eye panoramic image, and splicing each right-eye image segment to be spliced to obtain a right-eye panoramic image; and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
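A sketch of this final assembly under the slice layout above; the stacked top/bottom ODS layout in the last line is an assumption, since the patent does not fix the output format.

import numpy as np

def stitch_panorama(slices):
    # Concatenate the eight per-camera-pair eye slices side by side:
    # each slice is Height x (Width/8) x 3, giving a Height x Width panorama.
    return np.concatenate(slices, axis=1)

# left_pano = stitch_panorama(left_eye_slices)
# right_pano = stitch_panorama(right_eye_slices)
# ods_view = np.concatenate([left_pano, right_pano], axis=0)  # assumed layout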
In an alternative embodiment, to further improve the viewing effect of the panoramic image, color and/or brightness consistency processing may be performed on each left-eye image segment to be stitched and each right-eye image segment to be stitched before image stitching; the processed left-eye segments are then stitched into the left-eye panoramic image, and the processed right-eye segments into the right-eye panoramic image.
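The patent does not specify the consistency method, so the gain-based sketch below, which matches each slice to the global mean intensity, is only one plausible choice; names and the uint8 assumption are illustrative.

import numpy as np

def match_brightness(slices):
    # Scale every slice toward the global mean intensity so seams do not
    # show brightness steps; uint8 input is assumed.
    means = [float(s.mean()) for s in slices]
    target = sum(means) / len(means)
    return [np.clip(s.astype(np.float32) * (target / max(m, 1e-6)), 0, 255)
            .astype(np.uint8) for s, m in zip(slices, means)]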
According to the technical solution of this embodiment, a group of images to be processed captured by a preset imaging device array is acquired, and bidirectional optical flow estimation is performed on the pair of images to be processed corresponding to each two adjacent imaging devices in the array, obtaining a first optical flow estimation result; image mapping is performed based on the first optical flow estimation result and a second optical flow estimation result of the corresponding empty mirror background image pair, that is, the optical flow information of the foreground and background pixels at the stitching position is corrected during image mapping, and the image mapping results are then fused to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed; and image stitching is performed based on each left-eye image segment to be stitched and each right-eye image segment to be stitched to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed. This technical solution solves the problem that the seams of current panoramic stereoscopic views are not natural enough, avoids the distortion of foreground and background edges that may occur after stitching, makes the stitching effect in the stitched region of the panoramic stereoscopic image more natural, and improves the image effect.
Fig. 3 is a flowchart of another image stitching method according to an embodiment of the present disclosure, which, building on the above embodiment, further illustrates the process of dynamic optical flow update. The method may be performed by an image stitching apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC or a server.
As shown in fig. 3, the image stitching method includes:
s210, acquiring a group of to-be-processed images acquired by a preset imaging device array, and performing bidirectional optical flow estimation on to-be-processed image pairs corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result.
S220, cropping the corresponding left-eye image segment and right-eye image segment from each image to be processed of the image pair to be processed, according to a preset offset parameter.
The goal of image stitching in this embodiment is to generate a binocular stereoscopic stitched panoramic view, which requires generating a left-eye panoramic image and a right-eye panoramic image separately. The dimensions of all image segments are the same; for the left-eye and right-eye panoramic images to carry the corresponding parallax, the positions selected in the images to be processed differ by a corresponding offset. The preset offset parameter may include an offset amount and an offset direction, indicating how the image segment is selected. For example, when the corresponding left-eye and right-eye image segments are cropped from one image to be processed, the left-eye segment may be selected with a left shift from the middle of the overlapping area of the image pair, and the right-eye segment with a right shift; the shift amounts are the same but the directions are opposite.
The width of each image segment is fixed; the specific width is determined by the resolution requirement of the finally generated panoramic image and the number of imaging devices in the preset array. For example, with the annular fisheye array shown in fig. 2, a total of 8 fisheye imaging devices are employed. Assuming the resolution of the omnidirectional stereoscopic view to be generated by stitching is Height × Width (e.g. 3840 × 7680; the dual-view resolution is then 7680 × 7680), each pair of adjacent cameras needs to interpolate and fuse an image segment (slice) of width Width/8 within the corresponding view angle range.
Finally, a left-eye image segment and a right-eye image segment corresponding to the left camera image in each image pair to be processed, and a left-eye image segment and a right-eye image segment corresponding to the right camera image can be obtained.
And S230, performing image mapping based on the first optical flow estimation results corresponding to the left-eye image segments and the right-eye image segments to obtain target left-eye image segments and target right-eye image segments of the images to be processed in a target visual angle range.
When the image mapping is performed in this step, the left eye image segments and the right eye image segments may be mapped backward to obtain corresponding target left eye image segments and target right eye image segments.
The target viewing angle range is determined by the position of the imaging devices in the preset imaging device array. The pixel interpolation coefficient used in image mapping can be determined from the relative position of each pixel in the image segment with respect to the middle of the preset view angle range: the closer a pixel is to that middle position, the larger its interpolation coefficient.
In an alternative embodiment, the relative positional relationship between the original pixel point in each of the left eye image segment and the right eye image segment and the preset reference position in the target viewing angle range may be determined first; then, determining a mapping matrix of each left eye image segment and each right eye image segment according to the relative position relation; and further, obtaining a target left-eye image segment and a target right-eye image segment of each image to be processed in a target visual angle range through optical flow mapping calculation between the first optical flow estimation result corresponding to each left-eye image segment and the right-eye image segment and the corresponding mapping matrix.
The mapping matrix may be expressed as:
mapping[row, col, 0] = x + flowX * t,
mapping[row, col, 1] = y + flowY * t.
The mapping matrix (mapping) has the same size as the slice. row and col denote the ordinate and abscissa within the slice, and x and y denote the abscissa and ordinate of the corresponding point within the overlapping region of the image pair to be processed. t denotes the interpolation coefficient; t depends on the position of the pixel for which the mapping is computed, e.g. when mapping from left to right, the value of t changes gradually from 1 to 0 across the slice. Using the constructed mapping matrices, the mapping results from the middle area of the two cameras' images to the two eyes are obtained respectively.
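The formulas above translate directly into a backward warp with cv2.remap. In this sketch the slice position inside the overlap region, the linear ramp of t, and the variable names flowX/flowY are taken from the description above; everything else (function name, argument layout) is an assumption.

import cv2
import numpy as np

def map_slice(src, flow, slice_x0, slice_w, left_to_right=True):
    # Build mapping[row, col] = (x + flowX*t, y + flowY*t) and backward-map
    # the overlap-region image `src` into a slice of width `slice_w`.
    h = src.shape[0]
    rows, cols = np.mgrid[0:h, 0:slice_w]
    x = (cols + slice_x0).astype(np.float32)   # abscissa in the overlap region
    y = rows.astype(np.float32)                # ordinate in the overlap region
    ramp = cols.astype(np.float32) / max(slice_w - 1, 1)
    t = 1.0 - ramp if left_to_right else ramp  # t runs from 1 to 0 left-to-right
    map_x = x + flow[rows, cols + slice_x0, 0] * t
    map_y = y + flow[rows, cols + slice_x0, 1] * t
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)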
S240, updating the optical flow of background pixel points in each target left-eye image segment and each target right-eye image segment based on a second optical flow estimation result of a null mirror background image pair corresponding to the image pair to be processed.
The empty mirror background image is understood as a shot of the scene without any dynamic people or objects: only the landscape or buildings are captured, i.e. the background against which dynamic subjects will later appear. Each image to be processed corresponds to one empty mirror background image.
The second optical flow estimation result of the empty mirror background image pair also comprises a bidirectional optical flow result, and the bidirectional optical flow result is used for updating the optical flow of the background pixel point in each target left eye image segment and each target right eye image segment.
The dynamic update of the optical flow accounts for the influence that an introduced foreground has, during optical flow mapping, on the information of the surrounding foreground and of the background it occludes (distortion or deletion of the optical flow field). This information must be repaired to obtain a correct mapping result.
In a possible implementation, image segmentation may be performed on each image to be processed in the pair to obtain the corresponding foreground image and background image, i.e. to determine whether each pixel belongs to the foreground or the background. Then, according to the background image, the pixels requiring optical flow update in each target left-eye and target right-eye image segment are determined: through the position mapping relation between the pixels in each target segment and the pixels in the background image, the background pixels in each target left-eye and target right-eye image segment are identified as the pixels to be updated. Finally, the first optical flow estimation result of each pixel to be updated is replaced with the corresponding second optical flow estimation result, i.e. the optical flow value of a background pixel obtained after mapping is replaced with the interference-free optical flow value of that background pixel in the empty mirror background image. The edge between background and foreground is thereby optimized, and problems such as distortion are avoided.
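A sketch of the replacement step, assuming a boolean foreground mask is already available from the (unspecified) segmentation model and has been mapped into the slice's coordinates; names are illustrative.

import numpy as np

def update_background_flow(flow, bg_flow, fg_mask):
    # Where a pixel is background (mask False), its possibly foreground-
    # contaminated flow is replaced by the empty mirror background flow.
    out = flow.copy()
    out[~fg_mask] = bg_flow[~fg_mask]
    return out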
S250, performing image fusion on each target left-eye image segment updated by the optical flow, and performing image fusion on each target right-eye image segment updated by the optical flow to obtain the left-eye image segment to be spliced and the right-eye image segment to be spliced, which correspond to the image pair to be processed.
In this step, the parameters of image fusion are updated first: the final image segment to be stitched for the left or right eye is fused from two parts, which may produce ghosting, so the fusion coefficients need to be further updated before fusion.
The update may be based on the dynamic optical flow update result of step S240. Specifically, after the optical flow value of a pixel of an image segment has been replaced, the fusion coefficient of that pixel's mapping result (the mapped optical flow value) is set to 1, and the fusion coefficient of the corresponding pixel from the adjacent camera image is updated to 0. This avoids ghosting of the fusion result caused by occlusion.
Then, based on the updated pixel fusion parameters, image fusion is performed on each target left-eye image segment and each target right-eye image segment respectively, to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
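A sketch of the fusion with the updated coefficients; the base linear ramp `alpha` and the names are assumptions, while forcing replaced (occluded) pixels to weights 1 and 0 follows the rule described above.

import numpy as np

def fuse_slices(slice_a, slice_b, alpha, replaced_mask):
    # alpha: HxW base blend weight of slice_a (e.g. a left-to-right ramp).
    # replaced_mask: True where slice_a's flow was replaced; these pixels
    # take slice_a fully (weight 1) and slice_b not at all (weight 0).
    w = alpha.astype(np.float32).copy()
    w[replaced_mask] = 1.0
    w3 = w[..., None]                      # broadcast over color channels
    fused = w3 * slice_a + (1.0 - w3) * slice_b
    return fused.astype(slice_a.dtype)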
And S260, performing image stitching based on each left-eye image fragment to be stitched and each right-eye image fragment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
Splicing each left-eye image segment to be spliced to obtain a left-eye panoramic image, and splicing each right-eye image segment to be spliced to obtain a right-eye panoramic image; and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
According to the technical solution of this embodiment, a group of images to be processed captured by a preset imaging device array is acquired, and bidirectional optical flow estimation is performed on the pair of images to be processed corresponding to each two adjacent imaging devices, obtaining a first optical flow estimation result; the corresponding left-eye and right-eye image segments are cropped from each image of the pair according to a preset offset parameter; image mapping is performed based on the first optical flow estimation results corresponding to the left-eye and right-eye image segments, obtaining the target left-eye and target right-eye image segments of each image within the target view angle range; the optical flow of background pixels in each target segment is updated based on the second optical flow estimation result of the empty mirror background image pair corresponding to the image pair; the optical-flow-updated target left-eye segments are fused, and the optical-flow-updated target right-eye segments are fused, to obtain the left-eye and right-eye image segments to be stitched; and image stitching is performed based on these segments to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed. This solution solves the problem that the seams of current panoramic stereoscopic views are not natural enough, avoids distortion at foreground and background edges after stitching as well as occlusion of the background by the foreground, makes the stitching effect in the stitched region of the panoramic stereoscopic image more natural, and improves the image effect.
Fig. 4 is a schematic flow chart of an image stitching method according to an embodiment of the present disclosure. On the basis of the foregoing embodiments, a temporal smoothing process for the optical flow is added, to avoid jumps between consecutive groups of panoramic images. The method may be performed by an image stitching apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC or a server.
As shown in fig. 4, the image stitching method includes:
s310, acquiring a group of to-be-processed images acquired by a preset imaging device array, and performing bidirectional optical flow estimation on to-be-processed image pairs corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result.
S320, determining a historical bidirectional optical flow estimation result which is related to the first optical flow estimation result in time sequence, and performing time sequence smoothing on the first optical flow estimation result of each image pair to be processed based on the historical bidirectional optical flow estimation result.
The time sequence is understood as the order of the times at which the images were acquired. Assume the group of to-be-processed images acquired in step S310 is the Nth group, and the first optical flow estimation results of the image pairs corresponding to each two adjacent imaging devices in this group form the Nth group of optical flow estimation results. The value of N expresses the number of consecutive groups of images to be processed over which the optical flow temporal processing is performed. When each group of images is processed, the processing spans these N consecutive groups, realizing temporal stability across them. Specifically, after the optical flow information of the Nth group is acquired, the preceding N-1 groups of images to be processed can be determined, and their optical flow information (the N-1 groups of historical optical flow estimation results temporally associated with the first optical flow estimation result) can be acquired.
Further, the optical flow temporal smoothing may be performed by a neural network: a network with a temporal smoothing function is trained in advance and outputs the smoothed optical flow information. Alternatively, the temporal association coefficients and pixel association coefficients between the Nth group of images and the preceding N-1 groups can be computed to determine the corresponding optical flow smoothing weights, yielding the final smoothed optical flow result.
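A sketch of the second (non-network) option: a recency-weighted average over the N temporally consecutive flow fields. The weighting scheme is an assumed stand-in for the time and pixel association coefficients mentioned above.

import numpy as np

def smooth_flow_sequence(flows):
    # flows: list of HxWx2 arrays, ordered oldest to newest; the newest
    # frame gets the largest weight so motion is not over-smoothed.
    weights = np.arange(1, len(flows) + 1, dtype=np.float32)
    weights /= weights.sum()
    return sum(w * f for w, f in zip(weights, flows))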
Therefore, the optical flow stability of the image to be processed acquired at any moment relative to a plurality of images to be processed which are continuous in time sequence can be improved, larger optical flow jump between adjacent image frames is prevented, the image jump and jitter conditions in subsequent image splicing processing are further reduced, and therefore the dynamic image effect is improved.
S330, performing image mapping based on the temporally smoothed first optical flow estimation result and the second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
And S340, carrying out color and/or brightness consistency processing on each left-eye image segment to be spliced and each right-eye image segment to be spliced.
S350, stitching each processed left-eye image segment to be stitched to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to be stitched to obtain a right-eye panoramic image.
S360, obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
According to the technical solution of this embodiment, a group of images to be processed captured by a preset imaging device array is acquired, and bidirectional optical flow estimation is performed on the pair of images corresponding to each two adjacent imaging devices, obtaining a first optical flow estimation result; the historical bidirectional optical flow estimation results temporally associated with the first optical flow estimation result are determined, and the first optical flow estimation result of each image pair is temporally smoothed on that basis; image mapping is performed based on the temporally smoothed first optical flow estimation result and the second optical flow estimation result of the corresponding empty mirror background image pair, and the mapping results are fused to obtain the left-eye and right-eye image segments to be stitched; color and/or brightness consistency processing is performed on each segment; the processed left-eye segments are stitched into a left-eye panoramic image, and the processed right-eye segments into a right-eye panoramic image; and the target omnidirectional stereoscopic view is obtained from the two panoramic images. This solution solves the problem that the seams of current panoramic stereoscopic views are not natural enough, avoids distortion at foreground and background edges after stitching, makes the stitching effect in the stitched region more natural, and improves the image effect. In addition, temporally smoothing the first optical flow estimation result improves the optical flow stability of the image acquired at any moment relative to the temporally consecutive images, prevents large optical flow jumps between adjacent image frames, and thus reduces image jumping and jitter in subsequent stitching, improving the dynamic panoramic stereoscopic image effect.
Fig. 5 is a schematic flow chart of an image stitching method according to an embodiment of the present disclosure; on the basis of the foregoing embodiments, the full pipeline of the image stitching method is explained. The method may be performed by an image stitching apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC or a server.
As shown in fig. 5, the image stitching method includes:
s410, acquiring a group of empty mirror background images acquired by a preset imaging device array, and extracting empty mirror background image pairs of overlapping areas corresponding to every two adjacent imaging devices from the group of empty mirror background images.
The empty mirror background image is understood as a shot of the scene without any dynamic people or objects: only the landscape or buildings are captured, i.e. the background against which dynamic subjects will later appear. This prepares for the subsequent dynamic optical flow updates.
S420, performing bidirectional optical flow estimation on each group of empty mirror background image pairs to obtain optical flow estimation results of all the empty mirror background image pairs.
The optical flow estimation result of the empty mirror background images is an original optical flow estimation result untouched by image mapping, and it accurately expresses the state of the background pixels.
S430, acquiring a group of to-be-processed images acquired by a preset imaging device array, and performing bidirectional optical flow estimation on to-be-processed image pairs corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result.
S440, determining a historical bidirectional optical flow estimation result which is related with the first optical flow estimation result in time sequence, and performing time sequence smoothing on the first optical flow estimation result of each image pair to be processed respectively based on the historical bidirectional optical flow estimation result.
S450, performing image mapping based on the first optical flow estimation result subjected to time sequence smoothing processing and the optical flow estimation result of the corresponding empty mirror background image pair, and fusing the image mapping result to obtain the left-eye image fragment to be spliced and the right-eye image fragment to be spliced, which correspond to the image pair to be processed.
S460, carrying out color and/or brightness consistency processing on each left-eye image segment to be spliced and each right-eye image segment to be spliced.
And S470, stitching each processed left-eye image segment to be stitched to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to be stitched to obtain a right-eye panoramic image.
And S480, obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
The technical solution of the embodiment of the present disclosure is as follows: a group of empty mirror background images captured by a preset imaging device array is acquired, and the empty mirror background image pairs of the overlapping areas corresponding to each two adjacent imaging devices are extracted from the group; bidirectional optical flow estimation is performed on each pair to obtain the optical flow estimation results of all empty mirror background image pairs; a group of images to be processed captured by the array is acquired, and bidirectional optical flow estimation is performed on the image pairs corresponding to each two adjacent imaging devices, obtaining a first optical flow estimation result; the historical bidirectional optical flow estimation results of the temporally preceding groups of images are determined, and the first optical flow estimation result of each image pair is temporally smoothed on that basis; image mapping is performed based on the temporally smoothed first optical flow estimation result and the optical flow estimation result of the corresponding empty mirror background image pair, and the mapping results are fused to obtain the left-eye and right-eye image segments to be stitched; color and/or brightness consistency processing is performed on each segment; the processed left-eye segments are stitched into a left-eye panoramic image, and the processed right-eye segments into a right-eye panoramic image; and the target omnidirectional stereoscopic view is obtained from the two panoramic images. This solution solves the problem that the seams of current panoramic stereoscopic views are not natural enough, avoids distortion at foreground and background edges after stitching, makes the stitching effect in the stitched region more natural, and improves the image effect. In addition, temporally smoothing the first optical flow estimation result improves the optical flow stability of the image acquired at any moment relative to the temporally consecutive images, prevents large optical flow jumps between adjacent panoramic image frames, and thus reduces image jumping and jitter in subsequent stitching, improving the dynamic panoramic stereoscopic image effect.
Fig. 6 is a schematic diagram of an image stitching device according to an embodiment of the present disclosure, where the image stitching device is applicable to a panoramic image stitching scene, and the image stitching device may be implemented in software and/or hardware, and may be configured in an electronic device, where the electronic device may be a mobile terminal, a PC terminal, a server, or the like.
As shown in fig. 6, the image stitching apparatus includes: an image acquisition module 510, an image segment processing module 520, and an image segment stitching module 530.
The image acquisition module 510 is configured to acquire a group of images to be processed captured by a preset imaging device array, and to perform bidirectional optical flow estimation on the pair of images to be processed corresponding to each two adjacent imaging devices in the array, obtaining a first optical flow estimation result; the image segment processing module 520 is configured to perform image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and to fuse the image mapping results to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair; the image segment stitching module 530 is configured to perform image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed.
According to the technical scheme of this embodiment, a group of images to be processed acquired by a preset imaging device array is acquired, and bidirectional optical flow estimation is performed on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result; image mapping is performed based on the first optical flow estimation result and the second optical flow estimation result of the corresponding empty mirror background image pair, that is, the optical flow information of the foreground and background pixel points at the stitching seam is corrected during image mapping, and the image mapping results are then fused to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed; image stitching is performed based on each left-eye image segment to be stitched and each right-eye image segment to be stitched to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed. This scheme addresses the problem that the stitching seams of current panoramic stereoscopic views look unnatural, avoids the distortion of foreground and background edges that may occur after stitching, makes the stitched regions of the panoramic stereoscopic image more natural, and improves the image quality.
In an alternative embodiment, the image segment processing module 520 specifically includes:
the image segment selecting unit is configured to crop the corresponding left-eye image segment and right-eye image segment from each image to be processed of the image pair to be processed according to preset offset parameters;
an image segment mapping unit, configured to perform image mapping based on the first optical flow estimation results corresponding to the left-eye image segments and the right-eye image segments, so as to obtain a target left-eye image segment and a target right-eye image segment of each image to be processed in a target viewing angle range;
an image segment map correction unit configured to update optical flows of background pixel points in each of the target left-eye image segments and each of the target right-eye image segments based on the second optical flow estimation result;
and the image segment fusion unit is configured to perform image fusion on each optical-flow-updated target left-eye image segment and, separately, on each optical-flow-updated target right-eye image segment, so as to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
In an alternative embodiment, the image segment map correction unit is specifically configured to:
Image segmentation is carried out on each image to be processed in the pair of images to be processed, so as to obtain a corresponding foreground image and a background image;
determining pixel points to be updated, which need to be subjected to optical flow updating, in each target left-eye image segment and each target right-eye image segment according to the background image;
and updating the first optical flow estimation result of the pixel point to be updated into a corresponding second optical flow estimation result.
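A minimal sketch of the background optical flow substitution performed by the unit above, assuming a binary background mask is already available from some segmentation step (the disclosure does not name a particular segmentation method); all names and shapes are illustrative:

```python
import numpy as np

def update_background_flow(first_flow, second_flow, background_mask):
    """Overwrite the live ("first") flow with the precomputed empty mirror
    background ("second") flow at pixels classified as background.

    first_flow, second_flow: (H, W, 2) float32 flow fields.
    background_mask: (H, W) bool array, True where the pixel is background.
    """
    updated = first_flow.copy()
    updated[background_mask] = second_flow[background_mask]
    return updated
```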
In an alternative embodiment, the image segment fusion unit is specifically configured to:
updating the pixel fusion parameters for the pixel points whose optical flow has been updated in each target left-eye image segment and each target right-eye image segment;
and respectively carrying out image fusion on each target left-eye image segment and each target right-eye image segment based on the updated pixel fusion parameters to obtain the left-eye image segment to be spliced and the right-eye image segment to be spliced corresponding to the image pair to be processed.
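The disclosure leaves the concrete update of the pixel fusion parameters open; one plausible reading, sketched below purely as an assumption, is to give the flow-updated background pixels a fixed blending weight so that both mapped views contribute evenly there:

```python
import numpy as np

def fuse_with_updated_weights(seg_a, seg_b, weights, background_mask,
                              background_weight=0.5):
    """Per-pixel alpha blend of two mapped segments, overriding the weight
    at flow-updated background pixels (assumed scheme, not the patent's)."""
    w = weights.astype(np.float32).copy()
    w[background_mask] = background_weight      # updated fusion parameter
    w = w[..., None]                            # broadcast over color channels
    fused = w * seg_a.astype(np.float32) + (1.0 - w) * seg_b.astype(np.float32)
    return fused.astype(seg_a.dtype)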
In an alternative embodiment, the image segment mapping unit is specifically configured to:
determining the relative position relation between original pixel points in each left eye image segment and each right eye image segment and a preset reference position in the target visual angle range;
Determining a mapping matrix of each left-eye image segment and each right-eye image segment according to the relative position relation;
and obtaining a target left-eye image segment and a target right-eye image segment of each image to be processed in a target visual angle range through optical flow mapping calculation between the first optical flow estimation result corresponding to each left-eye image segment and the right-eye image segment and the corresponding mapping matrix.
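One common concretization of such position-dependent mapping in optical-flow-based stitching, offered here only as an assumed reading of the unit above, scales the dense flow by an interpolation factor derived from the relative position and resamples the segment with the scaled flow:

```python
import cv2
import numpy as np

def warp_toward_target_view(segment, flow, t):
    """Resample `segment` toward an intermediate viewpoint by applying the
    fraction t (0..1) of the dense flow; t plays the role of the relative
    position within the target viewing angle range (assumed interpretation).
    In practice t may vary per column; a scalar is used here for brevity."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_x = grid_x + t * flow[..., 0]
    map_y = grid_y + t * flow[..., 1]
    return cv2.remap(segment, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```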
In an optional embodiment, the image stitching apparatus further includes an optical flow estimation result updating module configured to:
after obtaining a first optical flow estimation result, determining a historical bidirectional optical flow estimation result that is temporally associated with the first optical flow estimation result;
and based on the historical bidirectional optical flow estimation results, respectively performing time sequence smoothing processing on the first optical flow estimation results of each image pair to be processed.
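The smoothing operator itself is not fixed by the disclosure; a simple blend of the current flow with the mean of the historical flows, shown below as an assumed example, already suppresses frame-to-frame flow jumps:

```python
import numpy as np

def smooth_flow_temporally(current_flow, history_flows, alpha=0.6):
    """Blend the current flow field with the average of the historical
    flow fields of the same camera pair (assumed smoothing scheme).

    current_flow: (H, W, 2) array; history_flows: list of (H, W, 2)
    arrays from the preceding, temporally consecutive frames."""
    if not history_flows:
        return current_flow
    history_mean = np.mean(np.stack(history_flows), axis=0)
    return alpha * current_flow + (1.0 - alpha) * history_mean
```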
In an alternative embodiment, the image acquisition module 510 may be further configured to:
before a group of images to be processed acquired by a preset imaging device array is acquired, acquiring a group of empty mirror background images acquired by the preset imaging device array;
extracting empty mirror background image pairs of overlapping areas corresponding to every two adjacent imaging devices from the group of empty mirror background images;
And carrying out bidirectional optical flow estimation on each group of empty mirror background image pairs to obtain the second optical flow estimation result.
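Because the empty mirror background is static, this second estimation can be precomputed once before live capture. A short sketch, reusing the illustrative estimate_bidirectional_flow function from earlier, where background_pairs is an assumed list of overlap crops per adjacent device pair:

```python
# One-time precomputation of the second optical flow estimation results.
# `background_pairs` is an assumed list of (overlap_a, overlap_b) crops;
# estimate_bidirectional_flow is the illustrative function sketched above.
second_flow_results = [
    estimate_bidirectional_flow(bg_a, bg_b)
    for bg_a, bg_b in background_pairs
]
```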
In an alternative embodiment, the image segment stitching module 530 is specifically configured to:
performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, so as to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed, by the following steps:
performing color and/or brightness consistency processing on each left-eye image segment to be stitched and each right-eye image segment to be stitched;
stitching each processed left-eye image segment to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to obtain a right-eye panoramic image;
and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
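A toy version of this consistency-plus-stitching step, assuming simple per-segment gain compensation against the first segment (a production system would typically also feather the seams); it would be run once for the left-eye segments and once for the right-eye segments:

```python
import numpy as np

def harmonize_and_stitch(segments):
    """Match each segment's mean color/brightness to the first segment
    (a simple gain-compensation stand-in for the unspecified consistency
    processing), then concatenate the segments into one panoramic strip."""
    ref_mean = segments[0].reshape(-1, 3).mean(axis=0)
    adjusted = []
    for seg in segments:
        gain = ref_mean / np.maximum(seg.reshape(-1, 3).mean(axis=0), 1e-6)
        adjusted.append(np.clip(seg.astype(np.float32) * gain, 0, 255)
                        .astype(np.uint8))
    return np.hstack(adjusted)  # side-by-side composition into a panorama
```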
The image stitching device provided by the embodiment of the disclosure can execute the image stitching method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, showing an electronic device 600 (e.g., a terminal device or a server) suitable for implementing embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 7 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 600 may include a processing means 601 (e.g., a central processing unit, a graphics processing unit, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing means 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure and the image stitching method provided by the foregoing embodiment belong to the same inventive concept, and technical details not described in detail in the present embodiment may be referred to the foregoing embodiment, and the present embodiment has the same beneficial effects as the foregoing embodiment.
The embodiment of the present disclosure also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the image stitching method provided by the above embodiment.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber-optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring a group of images to be processed acquired by a preset imaging device array, and performing bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
and performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. The name of a unit does not in any way constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The disclosed embodiments also provide a computer program product comprising a computer program which, when executed by a processor, implements an image stitching method as provided by any of the embodiments of the present disclosure.
In an implementation of the computer program product, computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example One], the method comprising:
acquiring a group of images to be processed acquired by a preset imaging device array, and performing bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
and performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Two], further comprising:
in some optional implementations, performing image mapping based on the first optical flow estimation result and the second optical flow estimation result of the corresponding empty mirror background image pair, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed, includes:
according to a preset offset parameter, respectively cropping the corresponding left-eye image segment and right-eye image segment from each image to be processed of the image pair to be processed;
performing image mapping based on the first optical flow estimation results corresponding to the left-eye image segments and the right-eye image segments to obtain target left-eye image segments and target right-eye image segments of the images to be processed in a target visual angle range;
updating the optical flow of the background pixel point in each target left eye image segment and each target right eye image segment based on the second optical flow estimation result;
and performing image fusion on each optical-flow-updated target left-eye image segment and, separately, on each optical-flow-updated target right-eye image segment, to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Three], including:
in some optional implementations, updating the optical flow of the background pixel point in each of the target left-eye image segments and each of the target right-eye image segments based on the second optical flow estimation result includes:
Image segmentation is carried out on each image to be processed in the pair of images to be processed, so as to obtain a corresponding foreground image and a background image;
determining pixel points to be updated, which need to be subjected to optical flow updating, in each target left-eye image segment and each target right-eye image segment according to the background image;
and updating the first optical flow estimation result of the pixel point to be updated to the corresponding second optical flow estimation result.

According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Four], further comprising:
in some optional implementations, performing image fusion on each optical-flow-updated target left-eye image segment and each optical-flow-updated target right-eye image segment, so as to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed, includes:
updating the pixel fusion parameters for the pixel points whose optical flow has been updated in each target left-eye image segment and each target right-eye image segment;
and respectively carrying out image fusion on each target left-eye image segment and each target right-eye image segment based on the updated pixel fusion parameters to obtain the left-eye image segment to be spliced and the right-eye image segment to be spliced corresponding to the image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Five]:
in some optional implementations, performing image mapping based on the first optical flow estimation result corresponding to each left-eye image segment and each right-eye image segment, so as to obtain a target left-eye image segment and a target right-eye image segment of each image to be processed within a target viewing angle range, includes:
determining the relative position relation between original pixel points in each left eye image segment and each right eye image segment and a preset reference position in the target visual angle range;
determining a mapping matrix of each left-eye image segment and each right-eye image segment according to the relative position relation;
and obtaining a target left-eye image segment and a target right-eye image segment of each image to be processed in a target visual angle range through optical flow mapping calculation between the first optical flow estimation result corresponding to each left-eye image segment and the right-eye image segment and the corresponding mapping matrix.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Six], further comprising:
in some alternative implementations, after obtaining the first optical flow estimation result, the method further includes:
Determining a historical bidirectional optical flow estimate that is temporally associated with the first optical flow estimate;
and based on the historical bidirectional optical flow estimation results, respectively performing time sequence smoothing processing on the first optical flow estimation results of each image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Seven], further comprising:
in some optional implementations, before acquiring the set of images to be processed acquired by the preset imaging device array, the method further includes:
acquiring a group of empty mirror background images acquired by a preset imaging device array;
extracting empty mirror background image pairs of overlapping areas corresponding to every two adjacent imaging devices from the group of empty mirror background images;
and carrying out bidirectional optical flow estimation on each group of empty mirror background image pairs to obtain the second optical flow estimation result.
According to one or more embodiments of the present disclosure, there is provided an image stitching method [Example Eight], further comprising:
in some optional implementations, image stitching is performed based on each of the left-eye image segments to be stitched and each of the right-eye image segments to be stitched, so as to obtain a target omnidirectional stereoscopic view corresponding to the set of images to be processed, including:
performing color and/or brightness consistency processing on each left-eye image segment to be stitched and each right-eye image segment to be stitched;
stitching each processed left-eye image segment to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to obtain a right-eye panoramic image;
and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Nine], including:
the image acquisition module is used for acquiring a group of images to be processed acquired by a preset imaging device array, and carrying out bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
the image segment processing module is configured to perform image mapping based on the first optical flow estimation result and the second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and to fuse the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
And the image segment stitching module is used for stitching images based on each left-eye image segment to be stitched and each right-eye image segment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Ten], further comprising:
in an alternative embodiment, the image segment processing module specifically includes:
the image segment selecting unit is configured to crop the corresponding left-eye image segment and right-eye image segment from each image to be processed of the image pair to be processed according to preset offset parameters;
an image segment mapping unit, configured to perform image mapping based on the first optical flow estimation results corresponding to the left-eye image segments and the right-eye image segments, so as to obtain a target left-eye image segment and a target right-eye image segment of each image to be processed in a target viewing angle range;
an image segment map correction unit configured to update optical flows of background pixel points in each of the target left-eye image segments and each of the target right-eye image segments based on the second optical flow estimation result;
and the image segment fusion unit is configured to perform image fusion on each optical-flow-updated target left-eye image segment and, separately, on each optical-flow-updated target right-eye image segment, so as to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Eleven], further comprising:
in an alternative embodiment, the image segment map correction unit is specifically configured to:
image segmentation is carried out on each image to be processed in the pair of images to be processed, so as to obtain a corresponding foreground image and a background image;
determining pixel points to be updated, which need to be subjected to optical flow updating, in each target left-eye image segment and each target right-eye image segment according to the background image;
and updating the first optical flow estimation result of the pixel point to be updated into a corresponding second optical flow estimation result.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Twelve], further comprising:
in an alternative embodiment, the image segment fusion unit is specifically configured to:
updating the pixel fusion parameters for the pixel points whose optical flow has been updated in each target left-eye image segment and each target right-eye image segment;
and respectively carrying out image fusion on each target left-eye image segment and each target right-eye image segment based on the updated pixel fusion parameters to obtain the left-eye image segment to be spliced and the right-eye image segment to be spliced corresponding to the image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Thirteen], further comprising:
in an alternative embodiment, the image segment mapping unit is specifically configured to:
determining the relative position relation between original pixel points in each left eye image segment and each right eye image segment and a preset reference position in the target visual angle range;
determining a mapping matrix of each left-eye image segment and each right-eye image segment according to the relative position relation;
and obtaining a target left-eye image segment and a target right-eye image segment of each image to be processed in a target visual angle range through optical flow mapping calculation between the first optical flow estimation result corresponding to each left-eye image segment and the right-eye image segment and the corresponding mapping matrix.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Fourteen], further comprising:
in an optional embodiment, the image stitching apparatus further includes an optical flow estimation result updating module configured to:
determining a historical bidirectional optical flow estimate that is temporally associated with the first optical flow estimate;
and based on the historical bidirectional optical flow estimation results, respectively performing time sequence smoothing processing on the first optical flow estimation results of each image pair to be processed.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Fifteen], further comprising:
in an alternative embodiment, the image acquisition module is further operable to:
before a group of images to be processed acquired by a preset imaging device array is acquired, acquiring a group of empty mirror background images acquired by the preset imaging device array;
extracting empty mirror background image pairs of overlapping areas corresponding to every two adjacent imaging devices from the group of empty mirror background images;
and carrying out bidirectional optical flow estimation on each group of empty mirror background image pairs to obtain the second optical flow estimation result.
According to one or more embodiments of the present disclosure, there is provided an image stitching apparatus [Example Sixteen], further comprising:
in an alternative embodiment, the image segment stitching module is specifically configured to:
performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, so as to obtain the target omnidirectional stereoscopic view corresponding to the group of images to be processed, by the following steps:
performing color and/or brightness consistency processing on each left-eye image segment to be stitched and each right-eye image segment to be stitched;
stitching each processed left-eye image segment to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to obtain a right-eye panoramic image;
and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (12)

1. An image stitching method, comprising:
acquiring a group of images to be processed acquired by a preset imaging device array, and performing bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
performing image mapping based on the first optical flow estimation result and a second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and fusing the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
and performing image stitching based on each left-eye image segment to be stitched and each right-eye image segment to be stitched, so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
2. The method of claim 1, wherein performing image mapping based on the first optical flow estimation result and the second optical flow estimation result of the corresponding empty mirror background image pair, and fusing the image mapping results to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed, comprises:
according to a preset offset parameter, respectively cropping the corresponding left-eye image segment and right-eye image segment from each image to be processed of the image pair to be processed;
performing image mapping based on the first optical flow estimation results corresponding to the left-eye image segments and the right-eye image segments to obtain target left-eye image segments and target right-eye image segments of the images to be processed in a target visual angle range;
Updating the optical flow of the background pixel point in each target left eye image segment and each target right eye image segment based on the second optical flow estimation result;
and performing image fusion on each optical-flow-updated target left-eye image segment and, separately, on each optical-flow-updated target right-eye image segment, to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed.
3. The method of claim 2, wherein updating optical flow for background pixels in each of the target left-eye image segments and each of the target right-eye image segments based on the second optical flow estimation result comprises:
image segmentation is carried out on each image to be processed in the pair of images to be processed, so as to obtain a corresponding foreground image and a background image;
determining pixel points to be updated, which need to be subjected to optical flow updating, in each target left-eye image segment and each target right-eye image segment according to the background image;
and updating the first optical flow estimation result of the pixel point to be updated into a corresponding second optical flow estimation result.
4. The method according to claim 2, wherein performing image fusion on each optical-flow-updated target left-eye image segment and each optical-flow-updated target right-eye image segment, so as to obtain the left-eye image segment to be stitched and the right-eye image segment to be stitched corresponding to the image pair to be processed, comprises:
updating the pixel fusion parameters for the pixel points whose optical flow has been updated in each target left-eye image segment and each target right-eye image segment;
and respectively carrying out image fusion on each target left-eye image segment and each target right-eye image segment based on the updated pixel fusion parameters to obtain the left-eye image segment to be spliced and the right-eye image segment to be spliced corresponding to the image pair to be processed.
5. The method of claim 2, wherein performing image mapping based on the first optical flow estimation result corresponding to each of the left-eye image segment and the right-eye image segment to obtain a target left-eye image segment and a target right-eye image segment of each of the images to be processed in a target view angle range, comprises:
determining the relative position relation between original pixel points in each left eye image segment and each right eye image segment and a preset reference position in the target visual angle range;
determining a mapping matrix of each left-eye image segment and each right-eye image segment according to the relative position relation;
and obtaining a target left-eye image segment and a target right-eye image segment of each image to be processed in a target visual angle range through optical flow mapping calculation between the first optical flow estimation result corresponding to each left-eye image segment and the right-eye image segment and the corresponding mapping matrix.
6. The method of any of claims 1-5, wherein after obtaining the first optical flow estimation result, the method further comprises:
determining a historical bidirectional optical flow estimation result that is temporally associated with the first optical flow estimation result;
And based on the historical bidirectional optical flow estimation results, respectively performing time sequence smoothing processing on the first optical flow estimation results of each image pair to be processed.
7. The method according to any one of claims 1-5, wherein prior to acquiring a set of images to be processed acquired by a preset array of imaging devices, the method further comprises:
acquiring a group of empty mirror background images acquired by a preset imaging device array;
extracting empty mirror background image pairs of overlapping areas corresponding to every two adjacent imaging devices from the group of empty mirror background images;
and carrying out bidirectional optical flow estimation on each group of empty mirror background image pairs to obtain the second optical flow estimation result.
8. The method of claim 1, wherein image stitching based on each of the left-eye image segments to be stitched and each of the right-eye image segments to be stitched to obtain a target omnidirectional stereoscopic view corresponding to the set of images to be processed, comprises:
performing color and/or brightness consistency processing on each left-eye image segment to be stitched and each right-eye image segment to be stitched;
stitching each processed left-eye image segment to obtain a left-eye panoramic image, and stitching each processed right-eye image segment to obtain a right-eye panoramic image;
and obtaining a target omnidirectional stereoscopic view based on the left-eye panoramic image and the right-eye panoramic image.
9. An image stitching device, comprising:
the image acquisition module is used for acquiring a group of images to be processed acquired by a preset imaging device array, and carrying out bidirectional optical flow estimation on the image pairs to be processed corresponding to every two adjacent imaging devices in the preset imaging device array to obtain a first optical flow estimation result;
the image segment processing module is configured to perform image mapping based on the first optical flow estimation result and the second optical flow estimation result of the empty mirror background image pair corresponding to the image pair to be processed, and to fuse the image mapping results to obtain a left-eye image segment to be stitched and a right-eye image segment to be stitched corresponding to the image pair to be processed;
And the image segment stitching module is used for stitching images based on each left-eye image segment to be stitched and each right-eye image segment to be stitched so as to obtain a target omnidirectional stereoscopic view corresponding to the group of images to be processed.
10. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stitching method of any of claims 1-8.
11. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the image stitching method according to any of claims 1-8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the image stitching method according to any one of claims 1-8.
CN202311443333.7A, filed 2023-11-01: Image stitching method and device, electronic equipment, medium and product (status: pending).

Priority Applications (1)

CN202311443333.7A; priority/filing date 2023-11-01; publication CN117437121A; title: Image stitching method and device, electronic equipment, medium and product

Publications (1)

CN117437121A, published 2024-01-23

Family ID: 89549527; family applications (1): CN202311443333.7A (pending)

Country status: CN, CN117437121A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination