CN106162143B - parallax fusion method and device - Google Patents
- Publication number
- CN106162143B (application number CN201610522270.8A / CN201610522270A)
- Authority
- CN
- China
- Prior art keywords
- image
- flow field
- overlapping area
- field motion
- direction flow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/293—Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/361—Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0088—Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image
Abstract
The present invention relates to a parallax fusion method and device. The method includes: acquiring a first direction flow field motion relationship and a second direction flow field motion relationship of the overlapping area of a first image and a second image; correcting the first direction flow field motion relationship to obtain a corrected first direction flow field motion relationship, and correcting the second direction flow field motion relationship to obtain a corrected second direction flow field motion relationship; performing forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image, respectively, using the corrected relationships; and fusing the transformed overlapping areas to obtain the final image of the overlapping area of the first image and the second image. The parallax fusion method and device avoid stitching defects caused by parallax, such as breaks in continuous lines, ghost virtual edges, and ghosting.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a parallax fusion method and apparatus.
Background
360-degree panoramic video is gradually becoming one of the main content types in the virtual reality field, because it provides users with a more realistic, immersive viewing experience than traditional limited-field-of-view video. Because single-lens systems that can capture panoramic video are currently rare, panoramic video is generally stitched from videos captured by a plurality of cameras or a plurality of lens systems. However, by the principles of lens perspective geometry, two camera systems with different optical centers always exhibit a certain parallax in the part of the scene imaged by both sensors, and this parallax differs across depth planes. As a result, the stitched video content exhibits visually unacceptable defects in regions where parallax exists, such as ghosting, ghost virtual edges, and breaks in continuous lines.
The traditional solution to this problem is to extract salient feature points in the overlapping region of the left and right images to be stitched, match them between the two images, and then solve for a deformation function such that the matched feature points coincide while the image deformation cost is minimized. Although this can approximately align the main objects in the overlapping area, parallax defects such as ghost virtual edges and the dislocation and breaking of continuous lines can still occur.
Disclosure of Invention
Accordingly, it is necessary to provide a parallax fusion method and device that eliminate the stitching defects caused by parallax in the overlapping area of stitched images, such as ghosting, ghost virtual edges, and the dislocation and breaking of continuous lines.
A parallax fusion method, comprising:
acquiring a first direction flow field motion relation and a second direction flow field motion relation of a first image and a second image overlapping area;
correcting the first direction flow field motion relation to obtain a corrected first direction flow field motion relation, and correcting the second direction flow field motion relation to obtain a corrected second direction flow field motion relation;
respectively carrying out forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image by adopting the corrected first direction flow field motion relation and the corrected second direction flow field motion relation;
and fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain a final image of the overlapping area of the first image and the second image.
A parallax fusion apparatus comprising:
the motion relation acquisition module is used for acquiring a first direction flow field motion relation and a second direction flow field motion relation of an overlapping area of the first image and the second image;
the correction module is used for correcting the motion relation of the first direction flow field to obtain a corrected first direction flow field motion relation and correcting the motion relation of the second direction flow field to obtain a corrected second direction flow field motion relation;
the deformation module is used for respectively carrying out forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image by adopting the corrected first direction flow field motion relation and the corrected second direction flow field motion relation;
and the fusion module is used for fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain a final image of the overlapping area of the first image and the second image.
According to the parallax fusion method and device, the first direction and second direction flow field motion relationships of the overlapping area of the first image and the second image are acquired and corrected; forward and backward deformation transformations are performed on the overlapping area of the first image and the overlapping area of the second image using these relationships; and the transformed overlapping areas are fused to obtain the final image of the overlapping area of the first image and the second image. Because the flow field data cover occlusion areas that include transitions between different depth planes, performing the deformation transformation before fusion avoids stitching defects caused by parallax, such as breaks in continuous lines, ghost virtual edges, and ghosting.
Drawings
FIG. 1 is a schematic diagram showing an internal structure of an electronic apparatus according to an embodiment;
FIG. 2 is a flow diagram of a disparity fusion method in one embodiment;
FIG. 3 is a block diagram of a parallax fusion apparatus in one embodiment;
FIG. 4 is a block diagram of the internal structure of a modification module in one embodiment;
FIG. 5 is a block diagram of the internal structure of a morphing module in one embodiment;
FIG. 6 is a block diagram of the internal structure of the fusion module in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present invention. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic apparatus includes a processor, a nonvolatile storage medium, an internal memory, a network interface, a display screen, and an input device, which are connected by a system bus. The non-volatile storage medium of the electronic device stores an operating system and further comprises a parallax fusion device, and the parallax fusion device is used for realizing the parallax fusion method. The processor is used for providing calculation and control capability and supporting the operation of the whole terminal. An internal memory in the electronic device provides an environment for operation of the disparity fusion apparatus in the non-volatile storage medium, and the internal memory may store computer-readable instructions, which, when executed by the processor, may cause the processor to perform a disparity fusion method. The network interface is used for communication with other devices, and the like. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a housing of the electronic device, or an external keyboard, a touch pad or a mouse. The electronic device may be a cell phone, personal computer, tablet computer, personal digital assistant, wearable device or server, or the like. Those skilled in the art will appreciate that the architecture shown in fig. 1 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices to which the subject application may be applied, and that a particular electronic device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
Fig. 2 is a flowchart of a disparity fusion method in an embodiment. As shown in fig. 2, a parallax fusion method, executed on an electronic device, includes:
step 202, acquiring a first direction flow field motion relation and a second direction flow field motion relation of an overlapping area of the first image and the second image.
Panoramic video is generally stitched from videos captured by a plurality of cameras or a plurality of lens systems; here, the first image and the second image are the images to be stitched. When the first image and the second image are stitched, they partially overlap, forming the overlapping area of the first image and the second image. The portion of this overlapping area that belongs to the first image is called the overlapping area of the first image, and the portion that belongs to the second image is called the overlapping area of the second image. The coordinates of each pixel point are the same in the overlapping area of the first image and the second image, in the overlapping area of the first image, and in the overlapping area of the second image, so the overlapping areas share one coordinate system.
The first direction flow field motion relation of the first image and the second image overlapping region is the flow field motion relation from the first image to the second image direction.
The second direction flow field motion relation of the first image and the second image overlapping region is the flow field motion relation from the second image to the first image direction.
For example, if the first image and the second image overlap in the horizontal direction, the flow field motion relationship from left to right is denoted Flow_l2r and the flow field motion relationship from right to left is denoted Flow_r2l. l2r is an abbreviation of left2right, marking the left-to-right direction, and r2l is an abbreviation of right2left, marking the right-to-left direction.
The pixel-by-pixel dense matching relationship between the overlapping area of the first image and the overlapping area of the second image, i.e., the first direction flow field motion relationship and the second direction flow field motion relationship, can be calculated with a classical optical flow algorithm.
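As a concrete illustration of how such a pixel-by-pixel dense matching relationship might be computed, the sketch below estimates a per-pixel flow field by exhaustive block matching in NumPy. This is a toy stand-in for a classical dense optical flow algorithm (the patent does not prescribe a specific one), and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def dense_flow_block_match(ref, tgt, radius=2):
    """Estimate a per-pixel flow field (u, v) from ref to tgt by
    exhaustive matching within a (2*radius+1)^2 search window -- a toy
    stand-in for a classical dense optical flow algorithm."""
    h, w = ref.shape
    u = np.zeros((h, w), dtype=np.float32)
    v = np.zeros((h, w), dtype=np.float32)
    # Replicate-pad the target so the search window never leaves the image.
    pad = np.pad(tgt, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            best_dx, best_dy, best_cost = 0, 0, np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    cost = abs(float(pad[y + radius + dy, x + radius + dx])
                               - float(ref[y, x]))
                    if cost < best_cost:
                        best_cost, best_dx, best_dy = cost, dx, dy
            u[y, x], v[y, x] = best_dx, best_dy
    return u, v
```

In practice a production implementation would use a robust dense optical flow method rather than this per-pixel intensity match, which is noisy on real images; the sketch only shows the shape of the output: one horizontal and one vertical displacement per pixel of the overlapping area.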
And 204, correcting the first direction flow field motion relation to obtain a corrected first direction flow field motion relation, and correcting the second direction flow field motion relation to obtain a corrected second direction flow field motion relation.
In this embodiment, the modifying the first-direction flow field motion relationship to obtain a modified first-direction flow field motion relationship includes: and performing transition joint correction on the first direction flow field motion relation and a non-overlapping area of the first image to obtain a corrected first direction flow field motion relation.
Specifically, the non-overlapping area of the first image is the area of the first image that does not overlap with the second image. The transition joint correction makes the deformed image transition smoothly between the overlapping area and the non-overlapping area.
Correcting the second direction flow field motion relation to obtain a corrected second direction flow field motion relation includes: performing transition joint correction on the second direction flow field motion relation and the non-overlapping area of the second image to obtain the corrected second direction flow field motion relation.
Specifically, the non-overlapping area of the second image refers to an area of the second image that does not overlap with the first image.
And step 206, performing forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image respectively by using the corrected first direction flow field motion relation and the corrected second direction flow field motion relation.
In this embodiment, the corrected first direction flow field motion relationship and the second direction flow field motion relationship are respectively used to perform forward and backward transformation on the overlapping area of the first image and the overlapping area of the second image to obtain four transformation images, namely, a forward transformation image of the overlapping area of the first image, a backward transformation image of the overlapping area of the first image, a forward transformation image of the overlapping area of the second image, and a backward transformation image of the overlapping area of the second image.
And if the modified first-direction flow field motion relation is adopted for deformation transformation, the first image is a reference image, and the second image is a target image.
And if the modified flow field motion relation in the second direction is adopted for deformation transformation, the second image is a reference image, and the first image is a target image.
The forward deformation transforms the image from the reference image toward the target image according to the input flow field data; the backward deformation transforms the image from the target image toward the reference image.
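The distinction between the two deformation directions can be sketched as follows. This NumPy nearest-neighbour implementation is illustrative only (the patent does not specify an interpolation or splatting scheme), and the function names are assumptions.

```python
import numpy as np

def backward_warp(img, u, v):
    """Backward deformation: each output pixel (x, y) samples the input
    image at (x + u(x,y), y + v(x,y)). Nearest-neighbour for brevity."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + u).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + v).astype(int), 0, h - 1)
    return img[sy, sx]

def forward_warp(img, u, v):
    """Forward deformation: each input pixel (x, y) is splatted to
    (x + u(x,y), y + v(x,y)); later writes overwrite earlier ones."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            tx = int(round(x + u[y, x]))
            ty = int(round(y + v[y, x]))
            if 0 <= tx < w and 0 <= ty < h:
                out[ty, tx] = img[y, x]
    return out
```

Note the asymmetry: backward warping is a gather (every output pixel is defined), while forward warping is a scatter that can leave holes, which is one reason both directions are computed and later fused.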
And step 208, fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain a final image of the overlapping area of the first image and the second image.
In this embodiment, the four transformed images are fused to obtain a final image of the overlapping area of the first image and the second image.
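As an illustration of this fusion step, the sketch below combines the four deformation-transformed overlap images into one final overlap image. This passage does not fix the fusion weights, so a plain average is used here as a placeholder assumption; the function name is also illustrative.

```python
import numpy as np

def fuse_overlap(fwd1, bwd1, fwd2, bwd2):
    """Fuse the four transformed overlap images (forward/backward warps
    of the first and second images' overlapping areas) into the final
    overlap image. Equal weights are an assumption, not the patent's
    prescribed scheme."""
    acc = fwd1.astype(np.float64) + bwd1 + fwd2 + bwd2
    return acc / 4.0
```

A real system would likely weight the four images by position in the overlap or by warp confidence rather than averaging uniformly.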
According to the parallax fusion method, the first direction and second direction flow field motion relationships of the overlapping area of the first image and the second image are acquired and corrected; forward and backward deformation transformations are performed on the overlapping area of the first image and the overlapping area of the second image using these relationships; and the transformed overlapping areas are fused to obtain the final image of the overlapping area of the first image and the second image. Because the flow field data cover occlusion areas that include transitions between different depth planes, performing the deformation transformation before fusion avoids stitching defects caused by parallax, such as breaks in continuous lines, ghost virtual edges, and ghosting.
In one embodiment, if the first image and the second image overlap in the horizontal direction, performing transition joint correction on the first direction flow field motion relation and the non-overlapping area of the first image to obtain the corrected first direction flow field motion relation includes: multiplying the horizontal direction component of the first direction flow field motion relation by a first coefficient factor containing the relationship between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area, to obtain the corrected horizontal direction component, and taking the vertical direction component of the first direction flow field motion relation as the corrected vertical direction component.
In this embodiment, the first coefficient factor containing the relationship between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area may be the ratio of the horizontal coordinate to the horizontal pixel width, x / N, or the ratio of the horizontal coordinate to the horizontal pixel width minus 1, x / (N − 1).
When the first coefficient factor is the ratio of the horizontal coordinate of the pixel point in the overlapping area to the horizontal pixel width of the overlapping area minus 1, i.e. x / (N − 1), the correction is calculated with formula (1):

Flow'_l2r^h(x, y) = (x / (N − 1)) · Flow_l2r^h(x, y)
Flow'_l2r^v(x, y) = Flow_l2r^v(x, y)        (1)

In formula (1), Flow'_l2r^h represents the horizontal direction component of the corrected first direction flow field motion relationship, Flow'_l2r^v represents the vertical direction component of the corrected first direction flow field motion relationship, Flow_l2r^h and Flow_l2r^v represent the horizontal and vertical direction components of the first direction flow field motion relationship before correction, N is the horizontal pixel width of the overlapping area of the first image and the second image, x is the horizontal coordinate of the pixel point, y is the vertical coordinate of the pixel point, the superscript h denotes the horizontal direction component, the superscript v denotes the vertical direction component, and l2r is an abbreviation of left2right, marking the direction from the first image to the second image (i.e., the left-to-right direction).
In other embodiments, the first coefficient factor is the ratio between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area, i.e. x / N, and the correction is calculated with formula (2):

Flow'_l2r^h(x, y) = (x / N) · Flow_l2r^h(x, y)
Flow'_l2r^v(x, y) = Flow_l2r^v(x, y)        (2)
In one embodiment, if the first image and the second image overlap in the horizontal direction, performing transition joint correction on the second direction flow field motion relation and the non-overlapping area of the second image to obtain the corrected second direction flow field motion relation includes: multiplying the horizontal direction component of the second direction flow field motion relation by a second coefficient factor containing the relationship between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area, to obtain the corrected horizontal direction component, and taking the vertical direction component of the second direction flow field motion relation as the corrected vertical direction component.
In this embodiment, the second coefficient factor containing the relationship between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area may be the difference between a predetermined constant and the ratio of the horizontal coordinate to the horizontal pixel width, 1 − x / N, or the difference between the predetermined constant and the ratio of the horizontal coordinate to the horizontal pixel width minus 1, 1 − x / (N − 1).
When the second coefficient factor is the difference between the predetermined constant and the ratio of the horizontal coordinate of the pixel point in the overlapping area to the horizontal pixel width of the overlapping area minus 1, i.e. 1 − x / (N − 1), the correction is calculated with formula (3):

Flow'_r2l^h(x, y) = (1 − x / (N − 1)) · Flow_r2l^h(x, y)
Flow'_r2l^v(x, y) = Flow_r2l^v(x, y)        (3)

In formula (3), Flow'_r2l^h represents the horizontal direction component of the corrected second direction flow field motion relationship, Flow'_r2l^v represents the vertical direction component of the corrected second direction flow field motion relationship, Flow_r2l^h and Flow_r2l^v represent the horizontal and vertical direction components of the second direction flow field motion relationship before correction, N is the horizontal pixel width of the overlapping area of the first image and the second image, x is the horizontal coordinate of the pixel point, y is the vertical coordinate of the pixel point, the superscript h denotes the horizontal direction component, the superscript v denotes the vertical direction component, and r2l is an abbreviation of right2left, marking the direction from the second image to the first image (i.e., the right-to-left direction).
In other embodiments, the second coefficient factor is the difference between the predetermined constant and the ratio between the horizontal coordinate of the pixel point and the horizontal pixel width of the overlapping area, i.e. 1 − x / N, and the correction is calculated with formula (4):

Flow'_r2l^h(x, y) = (1 − x / N) · Flow_r2l^h(x, y)
Flow'_r2l^v(x, y) = Flow_r2l^v(x, y)        (4)
In other embodiments, other pairs of first and second coefficient factors expressing the relationship between the horizontal coordinate of the pixel point in the overlapping area and the horizontal pixel width of the overlapping area may be used; the invention is not limited thereto.
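A minimal sketch of the transition joint correction for a horizontally overlapping pair, implementing the coefficient factors x / (N − 1) and 1 − x / (N − 1) of formulas (1) and (3). The function name and the representation of a flow field as two NumPy arrays are assumptions for illustration.

```python
import numpy as np

def correct_horizontal(flow_u, flow_v, direction):
    """Transition joint correction for a horizontal overlap of width N.
    direction='l2r': horizontal component scaled by x / (N - 1), formula (1).
    direction='r2l': scaled by 1 - x / (N - 1), formula (3).
    The vertical component is left unchanged in both cases."""
    h, n = flow_u.shape
    x = np.arange(n, dtype=np.float64)
    coeff = x / (n - 1) if direction == 'l2r' else 1.0 - x / (n - 1)
    # coeff broadcasts across rows: the scaling depends only on x.
    return flow_u * coeff, flow_v.copy()
```

The effect is that the corrected flow vanishes at the overlap boundary adjoining the image's own non-overlapping area (x = 0 for l2r, x = N − 1 for r2l), giving the smooth transition the correction is meant to produce.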
In one embodiment, if the first image and the second image overlap in the vertical direction, performing transition joint correction on the first direction flow field motion relation and the non-overlapping area of the first image to obtain the corrected first direction flow field motion relation includes: multiplying the vertical direction component of the first direction flow field motion relation by a third coefficient factor containing the relationship between the vertical coordinate of the pixel point in the overlapping area and the vertical pixel height of the overlapping area, to obtain the corrected vertical direction component, and taking the horizontal direction component of the first direction flow field motion relation as the corrected horizontal direction component.
In this embodiment, the third coefficient factor may be the ratio of the vertical coordinate of the pixel point in the overlapping area to the vertical pixel height of the overlapping area, y / M, or the ratio of the vertical coordinate to the vertical pixel height minus 1, y / (M − 1).
When the third coefficient factor is the ratio of the vertical coordinate of the pixel point in the overlapping area to the vertical pixel height of the overlapping area minus 1, i.e. y / (M − 1), the correction is calculated with formula (5):

Flow'_u2d^h(x, y) = Flow_u2d^h(x, y)
Flow'_u2d^v(x, y) = (y / (M − 1)) · Flow_u2d^v(x, y)        (5)

In formula (5), Flow'_u2d^h and Flow'_u2d^v represent the horizontal and vertical direction components of the corrected first direction flow field motion relationship, Flow_u2d^h and Flow_u2d^v represent the horizontal and vertical direction components of the first direction flow field motion relationship before correction, M is the vertical pixel height of the overlapping area of the first image and the second image, x is the horizontal coordinate of the pixel point, y is the vertical coordinate of the pixel point, the superscript h denotes the horizontal direction component, the superscript v denotes the vertical direction component, and u2d is an abbreviation of up2down, marking the direction from the first image to the second image (i.e., the top-to-bottom direction).
In other embodiments, the third coefficient factor is the ratio between the vertical coordinate of the pixel point and the vertical pixel height of the overlapping area, i.e. y / M, and the correction is calculated with formula (6):

Flow'_u2d^h(x, y) = Flow_u2d^h(x, y)
Flow'_u2d^v(x, y) = (y / M) · Flow_u2d^v(x, y)        (6)
In one embodiment, if the first image and the second image overlap in the vertical direction, performing transition joint correction on the second direction flow field motion relation and the non-overlapping area of the second image to obtain the corrected second direction flow field motion relation includes: multiplying the vertical direction component of the second direction flow field motion relation by a fourth coefficient factor containing the relationship between the vertical coordinate of the pixel point in the overlapping area and the vertical pixel height of the overlapping area, to obtain the corrected vertical direction component, and taking the horizontal direction component of the second direction flow field motion relation as the corrected horizontal direction component.
In this embodiment, the fourth coefficient factor may be the difference between a predetermined constant and the ratio of the vertical coordinate of the pixel point in the overlapping area to the vertical pixel height of the overlapping area, 1 − y / M, or the difference between the predetermined constant and the ratio of the vertical coordinate to the vertical pixel height minus 1, 1 − y / (M − 1).
When the fourth coefficient factor is the difference between the predetermined constant and the ratio of the vertical coordinate of the pixel point in the overlapping area to the vertical pixel height of the overlapping area minus 1, i.e. 1 − y / (M − 1), the correction is calculated with formula (7):

Flow'_d2u^h(x, y) = Flow_d2u^h(x, y)
Flow'_d2u^v(x, y) = (1 − y / (M − 1)) · Flow_d2u^v(x, y)        (7)

In formula (7), Flow'_d2u^h and Flow'_d2u^v represent the horizontal and vertical direction components of the corrected second direction flow field motion relationship, Flow_d2u^h and Flow_d2u^v represent the horizontal and vertical direction components of the second direction flow field motion relationship before correction, M is the vertical pixel height of the overlapping area of the first image and the second image, x is the horizontal coordinate of the pixel point, y is the vertical coordinate of the pixel point, the superscript h denotes the horizontal direction component, the superscript v denotes the vertical direction component, and d2u is an abbreviation of down2up, marking the direction from the second image to the first image (i.e., the bottom-to-top direction).
When the fourth coefficient factor is the difference between the predetermined constant and the ratio of the vertical coordinate of the pixel point in the overlapping area to the vertical pixel height of the overlapping area, i.e. 1 - y/M, the calculation is performed by using formula (8):

rFlow^h_{d2u}(x, y) = Flow^h_{d2u}(x, y)
rFlow^v_{d2u}(x, y) = (1 - y/M) · Flow^v_{d2u}(x, y)    (8)
In other embodiments, the third coefficient factor and the fourth coefficient factor may take other forms built from the vertical coordinate of the pixel point in the overlapping area and the vertical pixel height of the overlapping area, and are not limited thereto.
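Purely as an illustrative sketch (not the patent's reference implementation; the function name, the (M, N, 2) array layout, and the assumption M > 1 are mine), the correction of the second-direction flow field motion relation with the fourth coefficient factor 1 - y/(M-1) can be written as:

```python
import numpy as np

def correct_vertical_flow(flow):
    """Transition-joint correction for a vertically overlapped pair.

    flow: (M, N, 2) array over the overlap region; flow[..., 0] is the
    horizontal component, flow[..., 1] the vertical component.
    The vertical component is scaled by the fourth coefficient factor
    1 - y/(M-1); the horizontal component is kept unchanged, as in
    formula (7). Assumes M > 1.
    """
    m = flow.shape[0]
    y = np.arange(m, dtype=np.float64).reshape(-1, 1)  # vertical pixel coordinate
    factor = 1.0 - y / (m - 1)                         # fourth coefficient factor
    corrected = flow.astype(np.float64).copy()
    corrected[..., 1] *= factor                        # broadcast over columns
    return corrected
```

Replacing the factor with 1 - y/M would give the formula (8) variant instead.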
In one embodiment, the performing forward and backward transformation on the overlapping region of the first image and the overlapping region of the second image respectively by using the corrected first direction flow field motion relationship and the corrected second direction flow field motion relationship comprises:
(1) performing forward deformation transformation on the overlapping area of the first image by adopting the corrected first direction flow field motion relation to obtain a forward deformation transformation image of the overlapping area of the first image;
(2) carrying out backward deformation transformation on the overlapping area of the first image by adopting the corrected second-direction flow field motion relation to obtain a backward deformation transformation image of the overlapping area of the first image;
(3) performing forward deformation transformation on the overlapping area of the second image by adopting the corrected second-direction flow field motion relationship to obtain a forward deformation transformation image of the overlapping area of the second image;
(4) and performing backward deformation transformation on the overlapping area of the second image by adopting the corrected first direction flow field motion relation to obtain a backward deformation transformation image of the overlapping area of the second image.
If the deformation transformation is performed with the corrected first-direction flow field motion relation, the first image is the reference image and the second image is the target image; if the deformation transformation is performed with the corrected second-direction flow field motion relation, the second image is the reference image and the first image is the target image.
The forward warping transformation is a warping transformation from the reference image to the target image, and the backward warping transformation is a warping transformation from the target image to the reference image.
Taking the first image and the second image overlapping in the horizontal direction as an example, the overlapping area of the first image is denoted I_L and the overlapping area of the second image is denoted I_R. According to the corrected first-direction flow field motion relation rFlow_{l2r}(x, y) and the corrected second-direction flow field motion relation rFlow_{r2l}(x, y), deformation transformation is performed on I_L and I_R to obtain the following data:
R' = F_{L2R} = Forwardwarp(rFlow_{l2r}, I_L)
R'' = B_{L2R} = Backwardwarp(rFlow_{r2l}, I_L)
L' = F_{R2L} = Forwardwarp(rFlow_{r2l}, I_R)
L'' = B_{R2L} = Backwardwarp(rFlow_{l2r}, I_R)
where R' is the forward deformation transformation image of the overlapping area I_L of the first image, R'' is the backward deformation transformation image of the overlapping area I_L of the first image, L' is the forward deformation transformation image of the overlapping area I_R of the second image, L'' is the backward deformation transformation image of the overlapping area I_R of the second image, Forwardwarp denotes forward warping, and Backwardwarp denotes backward warping.
Forwardwarp(flow, I) refers to the forward deformation transformation of the image I from the reference image to the target image according to the input flow field data flow. Because the flow field data flow contains occluded regions where surfaces at different depths meet, the R' and L' obtained by the Forwardwarp transformation can contain first-class defect regions such as holes and overlap defects, which are marked as region_hole.
Backwardwarp(flow, I) refers to the backward deformation transformation of the image I from the target image to the reference image according to the input flow field data flow. Because the flow field data flow contains occluded regions where surfaces at different depths meet, the R'' and L'' obtained by the Backwardwarp transformation no longer have hole defects, but ghost defects can occur.
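A minimal per-pixel sketch of the two warping operations described above; the function names, nearest-neighbour rounding, and flow sign conventions are assumptions (they vary between implementations), and production code would typically use z-buffered splatting and bilinear sampling:

```python
import numpy as np

def forward_warp(flow, img):
    """Naive forward warping: each source pixel is splatted to the position
    given by the flow. Target pixels that no source pixel reaches remain
    holes (returned mask), i.e. the first-class defect region."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            tx = int(round(x + flow[y, x, 0]))
            ty = int(round(y + flow[y, x, 1]))
            if 0 <= tx < w and 0 <= ty < h:
                out[ty, tx] = img[y, x]
                filled[ty, tx] = True
    return out, ~filled  # warped image and hole mask (region_hole)

def backward_warp(flow, img):
    """Naive backward warping: each target pixel samples the source image at
    the flow-displaced position. Every target pixel receives a value, so
    there are no holes, but occluded regions can show ghosting."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            sx = min(max(int(round(x + flow[y, x, 0])), 0), w - 1)
            sy = min(max(int(round(y + flow[y, x, 1])), 0), h - 1)
            out[y, x] = img[sy, sx]
    return out
```

With a uniform one-pixel rightward flow, the forward warp leaves a hole column at the left edge while the backward warp leaves none, which is the complementarity the fusion step relies on.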
The four overlapping-area images obtained through deformation transformation, R', L', R'' and L'', respectively carry defects such as hole regions and overlap regions. Because the defect regions produced by the Backwardwarp and Forwardwarp transformations are complementary, these images are taken as sources and fused to obtain the final image of the overlapping area.
In one embodiment, fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain the final image of the overlapping area of the first image and the second image includes:
fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the first image to obtain a first fused image;
and fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the second image to obtain a second fused image.
The calculation formula is formula (9):

Fuse_1(x, y) = fusion(x, y), with source images R' and R''
Fuse_2(x, y) = fusion(x, y), with source images L' and L''    (9)

where Fuse_1 is the first fused image, Fuse_2 is the second fused image, and fusion(x, y) is a fusion function whose source images are the forward and backward deformation transformation images of the corresponding overlapping area.
Further, when the horizontal coordinate value of the pixel point in the overlapping area is smaller than the horizontal coordinate value of the middle pixel point in the overlapping area, the final image of the overlapping area of the first image and the second image is the first fused image;
and when the horizontal coordinate value of the pixel point in the overlapping area is equal to or greater than the horizontal coordinate value of the middle pixel point in the overlapping area, the final image of the overlapping area of the first image and the second image is the second fused image.
Specifically, the calculation formula is formula (10):

I_overlap(x, y) = Fuse_1(x, y), if x < mid
I_overlap(x, y) = Fuse_2(x, y), if x ≥ mid    (10)

In formula (10), I_overlap represents the final image, Fuse_1 and Fuse_2 represent the first fused image and the second fused image, and mid represents the middle separation position of the overlapping area (the horizontal coordinate value of the middle pixel point of the overlapping area), which can be obtained by methods such as taking the center line.
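The mid-split rule above can be sketched as follows (an illustrative sketch only; the function name and the default of taking the center column are assumptions):

```python
import numpy as np

def compose_overlap(fuse1, fuse2, mid=None):
    """Final overlap image per the mid-split rule: columns with horizontal
    coordinate less than mid come from the first fused image, the remaining
    columns from the second fused image."""
    h, w = fuse1.shape[:2]
    mid = w // 2 if mid is None else mid  # e.g. the center line of the overlap
    out = fuse2.copy()
    out[:, :mid] = fuse1[:, :mid]
    return out
```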
In one embodiment, fusing the forward warped transformed image and the backward warped transformed image of the overlapping region of the first image to obtain a first fused image comprises:
if the pixel point of the overlapping area belongs to a first type of defect area, the first fused image is a forward deformation transformation image of the overlapping area of the first image;
and if the pixel points of the overlapping area do not belong to the first type of defect area, the first fusion image is a backward deformation transformation image of the overlapping area of the first image.
In this embodiment, one implementation of the fusion(x, y) function may be formula (11):

fusion(x, y) = R'(x, y), if the pixel point (x, y) belongs to the first-class defect region region_hole
fusion(x, y) = R''(x, y), otherwise    (11)

In formula (11), R'(x, y) is the pixel value of R' at (x, y), and R''(x, y) is the pixel value of R'' at (x, y).
In one embodiment, fusing the forward warped transformed image and the backward warped transformed image of the overlapping region of the second image to obtain a second fused image comprises:
if the pixel point of the overlapping area belongs to the first type of defect area, the second fused image is a forward deformation transformation image of the overlapping area of the second image;
and if the pixel points of the overlapping area do not belong to the first type of defect area, the second fused image is a backward deformation transformation image of the overlapping area of the second image.
In this embodiment, one implementation of the fusion(x, y) function may be formula (12):

fusion(x, y) = L'(x, y), if the pixel point (x, y) belongs to the first-class defect region region_hole
fusion(x, y) = L''(x, y), otherwise    (12)

In formula (12), L'(x, y) is the pixel value of L' at (x, y), and L''(x, y) is the pixel value of L'' at (x, y).
By distinguishing pixel points inside the first-class defect region from those outside it, the pixel points belonging to the first-class defect region are replaced with the backward deformation transformation image, which eliminates the first-class defects because the backward deformation transformation image contains no first-class defect regions; the pixel points not belonging to the first-class defect region are replaced with the forward deformation transformation image, which eliminates ghost defects because the forward deformation transformation image contains no ghosts. The final image obtained by this fusion therefore has no stitching defects such as ghosts, ghost virtual edges, or dislocation fractures of continuous lines.
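The per-pixel complementary replacement described in this paragraph, filling pixels flagged as first-class (hole) defects from the hole-free backward-warped image and taking all other pixels from the ghost-free forward-warped image, can be sketched as follows (function and argument names are assumptions, and this is one consistent reading of the complementarity argument, not the patent's reference code):

```python
import numpy as np

def fuse_warps(forward_img, backward_img, hole_mask):
    """Per-pixel fusion of complementary warps: pixels inside the
    first-class defect (hole) region are taken from the backward-warped
    image, which has no holes; all other pixels are taken from the
    forward-warped image, which has no ghosts."""
    out = forward_img.copy()
    out[hole_mask] = backward_img[hole_mask]  # boolean-mask replacement
    return out
```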
It should be noted that the above example describes the process of fusing a first image and a second image that overlap in the horizontal direction; the same process can also be used for a first image and a second image that overlap in the vertical direction, and the fusion procedure is the same.
Fig. 3 is a block diagram of a parallax fusion device in one embodiment. As shown in Fig. 3, a parallax fusion device constructed to implement the parallax fusion method includes a motion relation obtaining module 310, a correction module 320, a deformation module 330, and a fusion module 340. Wherein:
the motion relation obtaining module 310 is configured to obtain a first direction flow field motion relation and a second direction flow field motion relation of an overlapping area of the first image and the second image.
The correction module 320 is configured to correct the first-direction flow field motion relation to obtain a corrected first-direction flow field motion relation, and to correct the second-direction flow field motion relation to obtain a corrected second-direction flow field motion relation.
The deformation module 330 is configured to perform forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image respectively by using the corrected first direction flow field motion relationship and the corrected second direction flow field motion relationship.
The fusion module 340 is configured to fuse the overlapping region of the first image and the overlapping region of the second image after the forward and backward transformation to obtain a final image of the overlapping region of the first image and the second image.
FIG. 4 is a block diagram of the internal structure of the correction module in one embodiment. As shown in Fig. 4, the correction module 320 includes a first correction unit 3202 and a second correction unit 3204. Wherein:
the first correction unit 3202 is configured to perform transition joint correction on the first direction flow field motion relation and a non-overlapping region of the first image to obtain a corrected first direction flow field motion relation; and
the second correcting unit 3204 is configured to perform transition joint correction on the second-direction flow field motion relation and a non-overlapping region of the second image, so as to obtain a corrected second-direction flow field motion relation.
In an embodiment, if the first image and the second image are overlapped in the horizontal direction, the first correcting unit 3202 is further configured to multiply the horizontal direction component of the first direction flow field motion relationship by a first coefficient factor including a relationship between a horizontal coordinate of a pixel point of the overlapping region and a horizontal pixel width of the overlapping region to obtain a corrected horizontal direction component of the first direction flow field motion relationship, and use the vertical direction component of the first direction flow field motion relationship as the corrected vertical direction component of the first direction flow field motion relationship; and
the second correcting unit 3204 is further configured to multiply the horizontal direction component of the second direction flow field motion relationship by a second coefficient factor including a relationship between a horizontal coordinate of a pixel in the overlapping region and a horizontal pixel width of the overlapping region to obtain a corrected horizontal direction component of the second direction flow field motion relationship, and use the vertical direction component of the second direction flow field motion relationship as the corrected vertical direction component of the second direction flow field motion relationship.
In an embodiment, if the first image and the second image are overlapped in the vertical direction, the first correcting unit 3202 is further configured to multiply the vertical direction component of the first direction flow field motion relationship by a third coefficient factor including a relationship between a vertical coordinate of a pixel point of the overlapping region and a vertical pixel height of the overlapping region to obtain a corrected vertical direction component of the first direction flow field motion relationship, and to use the horizontal direction component of the first direction flow field motion relationship as the corrected horizontal direction component of the first direction flow field motion relationship;
the second correcting unit 3204 is further configured to multiply the vertical direction component of the second direction flow field motion relationship by a fourth coefficient factor including a relationship between the vertical coordinate of the pixel in the overlapping region and the vertical pixel height of the overlapping region to obtain a corrected vertical direction component of the second direction flow field motion relationship, and use the horizontal direction component of the second direction flow field motion relationship as the corrected horizontal direction component of the second direction flow field motion relationship.
Fig. 5 is a block diagram of the internal structure of the morphing module in one embodiment. As shown in fig. 5, the deformation module 330 includes a first deformation unit 3302, a second deformation unit 3304, a third deformation unit 3306, and a fourth deformation unit 3308. Wherein:
the first deformation unit 3302 is configured to perform forward deformation transformation on the overlapping area of the first image by using the corrected first-direction flow field motion relationship, so as to obtain a forward deformation transformation image of the overlapping area of the first image;
the second deforming unit 3304 is configured to perform backward deformation transformation on the overlapping area of the first image by using the corrected second-direction flow field motion relationship, so as to obtain a backward deformation transformed image of the overlapping area of the first image;
the third deforming unit 3306 is configured to perform forward deforming transformation on the overlapping area of the second image by using the corrected second-direction flow field motion relationship, so as to obtain a forward deforming transformation image of the overlapping area of the second image;
the fourth deforming unit 3308 is configured to perform backward deformation transformation on the overlapping area of the second image by using the corrected first-direction flow field motion relationship, so as to obtain a backward deformation transformed image of the overlapping area of the second image.
FIG. 6 is a block diagram of the internal structure of the fusion module in one embodiment. As shown in fig. 6, the fusion module 340 includes a first fusion unit 3402 and a second fusion unit 3404. Wherein:
the first fusion unit 3402 is configured to fuse the forward transformed image and the backward transformed image in the overlapping area of the first image to obtain a first fused image;
the second fusion unit 3404 is configured to fuse the forward transformed image and the backward transformed image in the overlapping area of the second image to obtain a second fusion image;
when the horizontal coordinate value of the pixel point of the overlapping area is smaller than the horizontal coordinate value of the middle pixel point of the overlapping area, the final image of the overlapping area of the first image and the second image is the first fusion image;
and when the horizontal coordinate value of the pixel point in the overlapping area is equal to or greater than the horizontal coordinate value of the middle pixel point in the overlapping area, the final image of the overlapping area of the first image and the second image is the second fused image.
In one embodiment, if the pixel point of the overlap region belongs to a first type of defect region, the first fused image is a forward transformed image of the overlap region of the first image, and if the pixel point of the overlap region does not belong to the first type of defect region, the first fused image is a backward transformed image of the overlap region of the first image;
and if the pixel points of the overlapping area belong to the first type of defect area, the second fused image is a forward deformation transformation image of the overlapping area of the second image, and if the pixel points of the overlapping area do not belong to the first type of defect area, the second fused image is a backward deformation transformation image of the overlapping area of the second image.
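The module structure of the device (Fig. 3) can be sketched as one pipeline; the class and method names are assumptions, and each module is injected as a callable so the sketch stays independent of any particular flow, warp, or fusion implementation:

```python
class ParallaxFusionDevice:
    """Minimal sketch of the device of Fig. 3: the four modules are
    chained so each module's output feeds the next."""

    def __init__(self, get_flows, correct, warp, fuse):
        self.get_flows = get_flows  # motion relation obtaining module 310
        self.correct = correct      # correction module 320
        self.warp = warp            # deformation module 330
        self.fuse = fuse            # fusion module 340

    def run(self, first_overlap, second_overlap):
        # 310: first- and second-direction flow field motion relations
        f12, f21 = self.get_flows(first_overlap, second_overlap)
        # 320: transition joint correction of both relations
        rf12, rf21 = self.correct(f12, f21)
        # 330: forward and backward deformation transformation
        warped = self.warp(first_overlap, second_overlap, rf12, rf21)
        # 340: fusion into the final overlap image
        return self.fuse(*warped)
```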
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above-mentioned embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (12)
1. A parallax fusion method, comprising:
acquiring a first direction flow field motion relation and a second direction flow field motion relation of a first image and a second image overlapping area;
performing transition joint correction on the first direction flow field motion relation and a non-overlapping area of a first image to obtain a corrected first direction flow field motion relation, and performing transition joint correction on the second direction flow field motion relation and a non-overlapping area of a second image to obtain a corrected second direction flow field motion relation;
respectively carrying out forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image by adopting the corrected first direction flow field motion relation and the corrected second direction flow field motion relation;
and fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain a final image of the overlapping area of the first image and the second image.
2. The method according to claim 1, wherein if the first image and the second image are horizontally overlapped, performing transition joint correction on the first direction flow field motion relationship with the non-overlapped region of the first image to obtain the corrected first direction flow field motion relationship, includes:
multiplying the horizontal direction component of the first direction flow field motion relation by a first coefficient factor containing the horizontal coordinate of the pixel point of the overlapping area and the horizontal pixel width relation of the overlapping area to obtain a corrected horizontal direction component of the first direction flow field motion relation, and taking the vertical direction component of the first direction flow field motion relation as the corrected vertical direction component of the first direction flow field motion relation; and
the performing transition joint correction on the flow field motion relation in the second direction and the non-overlapping area of the second image to obtain a corrected flow field motion relation in the second direction includes:
and multiplying the horizontal direction component of the second direction flow field motion relation by a second coefficient factor containing the horizontal coordinate of the pixel point of the overlapping area and the horizontal pixel width relation of the overlapping area to obtain a corrected horizontal direction component of the second direction flow field motion relation, and taking the vertical direction component of the second direction flow field motion relation as the corrected vertical direction component of the second direction flow field motion relation.
3. The method according to claim 1, wherein if the first image and the second image are vertically overlapped, performing transition joint correction on the first direction flow field motion relationship with the non-overlapped region of the first image to obtain the corrected first direction flow field motion relationship, comprises:
multiplying the vertical direction component of the motion relation of the first direction flow field by a third coefficient factor containing the vertical coordinate of the pixel point of the overlapping area and the vertical pixel height relation of the overlapping area to obtain a corrected vertical direction component of the motion relation of the first direction flow field, and taking the horizontal direction component of the motion relation of the first direction flow field as the corrected horizontal direction component of the motion relation of the first direction flow field;
the performing transition joint correction on the flow field motion relation in the second direction and the non-overlapping area of the second image to obtain a corrected flow field motion relation in the second direction includes:
and multiplying the vertical direction component of the second direction flow field motion relation by a fourth coefficient factor containing the vertical coordinate of the pixel point of the overlapping area and the vertical pixel height relation of the overlapping area to obtain a corrected vertical direction component of the second direction flow field motion relation, and taking the horizontal direction component of the second direction flow field motion relation as the corrected horizontal direction component of the second direction flow field motion relation.
4. The method according to claim 1, wherein the performing forward and backward transformation on the overlapped region of the first image and the overlapped region of the second image by using the modified first direction flow field motion relationship and the modified second direction flow field motion relationship respectively comprises:
performing forward deformation transformation on the overlapping area of the first image by adopting the corrected first direction flow field motion relation to obtain a forward deformation transformation image of the overlapping area of the first image;
carrying out backward deformation transformation on the overlapping area of the first image by adopting the corrected second-direction flow field motion relation to obtain a backward deformation transformation image of the overlapping area of the first image;
performing forward deformation transformation on the overlapping area of the second image by adopting the corrected second-direction flow field motion relationship to obtain a forward deformation transformation image of the overlapping area of the second image;
and performing backward deformation transformation on the overlapping area of the second image by adopting the corrected first direction flow field motion relation to obtain a backward deformation transformation image of the overlapping area of the second image.
5. The method of claim 4, wherein fusing the overlapping region of the first image and the overlapping region of the second image after the forward and backward transformation to obtain a final image of the overlapping region of the first image and the second image comprises:
fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the first image to obtain a first fused image;
fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the second image to obtain a second fused image;
when the horizontal coordinate value of the pixel point of the overlapping area is smaller than the horizontal coordinate value of the middle pixel point of the overlapping area, the final image of the overlapping area of the first image and the second image is the first fusion image;
and when the horizontal coordinate value of the pixel point in the overlapping area is equal to or greater than the horizontal coordinate value of the middle pixel point in the overlapping area, the final image of the overlapping area of the first image and the second image is the second fused image.
6. The method according to claim 5, wherein fusing the forward warped transform image and the backward warped transform image of the overlapping region of the first image to obtain a first fused image comprises:
if the pixel point of the overlapping area belongs to a first type of defect area, the first fused image is a forward deformation transformation image of the overlapping area of the first image;
if the pixel point of the overlapping area does not belong to the first type of defect area, the first fused image is a backward deformation transformation image of the overlapping area of the first image;
the fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the second image to obtain a second fused image comprises:
if the pixel point of the overlapping area belongs to the first type of defect area, the second fused image is a forward deformation transformation image of the overlapping area of the second image;
and if the pixel points of the overlapping area do not belong to the first type of defect area, the second fused image is a backward deformation transformation image of the overlapping area of the second image.
7. A parallax fusion apparatus, comprising:
the motion relation acquisition module is used for acquiring a first direction flow field motion relation and a second direction flow field motion relation of an overlapping area of the first image and the second image;
the correction module is used for performing transition joint correction on the first direction flow field motion relation and a non-overlapping area of a first image through the first correction unit to obtain a corrected first direction flow field motion relation, and performing transition joint correction on the second direction flow field motion relation and a non-overlapping area of a second image through the second correction unit to obtain a corrected second direction flow field motion relation;
the deformation module is used for respectively carrying out forward and backward deformation transformation on the overlapping area of the first image and the overlapping area of the second image by adopting the corrected first direction flow field motion relation and the corrected second direction flow field motion relation;
and the fusion module is used for fusing the overlapping area of the first image and the overlapping area of the second image after the forward and backward deformation transformation to obtain a final image of the overlapping area of the first image and the second image.
8. The apparatus according to claim 7, wherein if the first image and the second image are overlapped in a horizontal direction, the first correction unit is further configured to multiply a horizontal direction component of the motion relationship of the first direction flow field by a first coefficient factor including a relationship between a horizontal coordinate of a pixel of the overlap region and a horizontal pixel width of the overlap region to obtain a corrected horizontal direction component of the motion relationship of the first direction flow field, and to use a vertical direction component of the motion relationship of the first direction flow field as a corrected vertical direction component of the motion relationship of the first direction flow field; and
the second correction unit is further configured to multiply the horizontal direction component of the second direction flow field motion relationship by a second coefficient factor including a relationship between a horizontal coordinate of a pixel in the overlap region and a horizontal pixel width of the overlap region to obtain a corrected horizontal direction component of the second direction flow field motion relationship, and use the vertical direction component of the second direction flow field motion relationship as the corrected vertical direction component of the second direction flow field motion relationship.
9. The apparatus according to claim 7, wherein if the first image and the second image are overlapped in a vertical direction, the first correction unit is further configured to multiply a vertical direction component of the motion relationship of the first direction flow field by a third coefficient factor including a vertical coordinate of a pixel of the overlap region and a vertical pixel height relationship of the overlap region to obtain a corrected vertical direction component of the motion relationship of the first direction flow field, and to use a horizontal direction component of the motion relationship of the first direction flow field as a corrected horizontal direction component of the motion relationship of the first direction flow field;
the second correction unit is further configured to multiply the vertical direction component of the second direction flow field motion relationship by a fourth coefficient factor including a vertical coordinate of a pixel in the overlap region and a vertical pixel height relationship of the overlap region to obtain a corrected vertical direction component of the second direction flow field motion relationship, and use the horizontal direction component of the second direction flow field motion relationship as the corrected horizontal direction component of the second direction flow field motion relationship.
10. The apparatus of claim 7, wherein the deformation module comprises:
the first deformation unit is used for performing forward deformation transformation on the overlapping area of the first image by adopting the corrected first direction flow field motion relationship to obtain a forward deformation transformation image of the overlapping area of the first image;
the second deformation unit is used for performing backward deformation transformation on the overlapping area of the first image by adopting the corrected second direction flow field motion relationship to obtain a backward deformation transformation image of the overlapping area of the first image;
the third deformation unit is used for performing forward deformation transformation on the overlapping area of the second image by adopting the corrected second direction flow field motion relationship to obtain a forward deformation transformation image of the overlapping area of the second image;
and the fourth deformation unit is used for performing backward deformation transformation on the overlapping area of the second image by adopting the corrected first direction flow field motion relationship to obtain a backward deformation transformation image of the overlapping area of the second image.
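The forward and backward deformation transformations in claim 10 can be sketched as follows. Nearest-neighbour resampling is used for brevity (a real implementation would interpolate); the function names are illustrative:

```python
import numpy as np

def backward_warp(image, flow):
    """Backward warping: each output pixel (x, y) samples the source
    image at (x + u, y + v), so every output pixel gets a value."""
    H, W = image.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return image[src_y, src_x]

def forward_warp(image, flow):
    """Forward warping: each source pixel (x, y) is splatted to
    (x + u, y + v); unreached output pixels stay zero, which is why
    forward warping can leave holes (the defect regions of claim 12)."""
    H, W = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:H, 0:W]
    dst_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    dst_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    out[dst_y, dst_x] = image
    return out
```

Claim 10 applies these with crossed flow fields: the first image is warped forward with the first-direction flow and backward with the second-direction flow, and the second image the other way around.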
11. The apparatus of claim 10, wherein the fusion module comprises:
the first fusion unit is used for fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the first image to obtain a first fusion image;
the second fusion unit is used for fusing the forward deformation transformation image and the backward deformation transformation image of the overlapping area of the second image to obtain a second fusion image;
when the horizontal coordinate value of the pixel point of the overlapping area is smaller than the horizontal coordinate value of the middle pixel point of the overlapping area, the final image of the overlapping area of the first image and the second image is the first fusion image;
and when the horizontal coordinate value of the pixel point in the overlapping area is equal to or greater than the horizontal coordinate value of the middle pixel point in the overlapping area, the final image of the overlapping area of the first image and the second image is the second fused image.
12. The apparatus of claim 11, wherein, if the pixel points of the overlapping area belong to a first type of defect area, the first fused image is the forward deformation transformation image of the overlapping area of the first image, and if the pixel points of the overlapping area do not belong to the first type of defect area, the first fused image is the backward deformation transformation image of the overlapping area of the first image; and
if the pixel points of the overlapping area belong to the first type of defect area, the second fused image is the forward deformation transformation image of the overlapping area of the second image, and if the pixel points of the overlapping area do not belong to the first type of defect area, the second fused image is the backward deformation transformation image of the overlapping area of the second image.
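The per-pixel selection logic of claims 11 and 12 can be sketched together as follows. The claims do not define how the first type of defect region is detected, so a precomputed boolean mask is assumed; single-channel overlap images are used to keep the sketch short:

```python
import numpy as np

def fuse_overlap(fwd1, bwd1, fwd2, bwd2, defect_mask):
    """Sketch of claims 11-12.

    fwd1/bwd1: forward/backward deformation transformation images of the
    first image's overlapping area; fwd2/bwd2 likewise for the second.
    defect_mask: boolean (H, W) array, True where a pixel is assumed to
    belong to the first type of defect area.
    """
    # Claim 12: defect pixels take the forward-transformed value,
    # all other pixels the backward-transformed value.
    fused1 = np.where(defect_mask, fwd1, bwd1)
    fused2 = np.where(defect_mask, fwd2, bwd2)

    # Claim 11: columns left of the middle of the overlapping area use
    # the first fused image, the rest use the second fused image.
    H, W = defect_mask.shape
    use_first = np.arange(W)[None, :] < W // 2
    return np.where(use_first, fused1, fused2)
```

Splitting at the middle column biases each half of the overlap toward the image it came from, which limits how far any pixel is displaced by the warp.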
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610522270.8A CN106162143B (en) | 2016-07-04 | 2016-07-04 | parallax fusion method and device |
PCT/CN2017/086950 WO2018006669A1 (en) | 2016-07-04 | 2017-06-02 | Parallax fusion method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610522270.8A CN106162143B (en) | 2016-07-04 | 2016-07-04 | parallax fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106162143A CN106162143A (en) | 2016-11-23 |
CN106162143B true CN106162143B (en) | 2018-11-09 |
Family
ID=58061810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610522270.8A Active CN106162143B (en) | 2016-07-04 | 2016-07-04 | parallax fusion method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106162143B (en) |
WO (1) | WO2018006669A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106162143B (en) * | 2016-07-04 | 2018-11-09 | 腾讯科技(深圳)有限公司 | parallax fusion method and device |
CN106815802A (en) * | 2016-12-23 | 2017-06-09 | 深圳超多维科技有限公司 | A kind of image split-joint method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104954664A (en) * | 2014-03-24 | 2015-09-30 | 东芝阿尔派·汽车技术有限公司 | Image processing apparatus and image processing method |
CN105488760A (en) * | 2015-12-08 | 2016-04-13 | 电子科技大学 | Virtual image stitching method based on flow field |
CN105635808A (en) * | 2015-12-31 | 2016-06-01 | 电子科技大学 | Video splicing method based on Bayesian theory |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5963664A (en) * | 1995-06-22 | 1999-10-05 | Sarnoff Corporation | Method and system for image combination using a parallax-based technique |
CN105205796A (en) * | 2014-06-30 | 2015-12-30 | 华为技术有限公司 | Wide-area image acquisition method and apparatus |
US20160191795A1 (en) * | 2014-12-30 | 2016-06-30 | Alpine Electronics, Inc. | Method and system for presenting panoramic surround view in vehicle |
CN105141920B (en) * | 2015-09-01 | 2018-06-19 | 电子科技大学 | A kind of 360 degree of panoramic video splicing systems |
CN106162143B (en) * | 2016-07-04 | 2018-11-09 | 腾讯科技(深圳)有限公司 | parallax fusion method and device |
- 2016
- 2016-07-04: CN application CN201610522270.8A granted as patent CN106162143B (status: Active)
- 2017
- 2017-06-02: WO application PCT/CN2017/086950 filed as WO2018006669A1 (status: Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2018006669A1 (en) | 2018-01-11 |
CN106162143A (en) | 2016-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109509146B (en) | Image splicing method and device and storage medium | |
AU2017246716B2 (en) | Efficient determination of optical flow between images | |
JP4938093B2 (en) | System and method for region classification of 2D images for 2D-TO-3D conversion | |
CN101673395B (en) | Image mosaic method and image mosaic device | |
CN105374019A (en) | A multi-depth image fusion method and device | |
JP6882868B2 (en) | Image processing equipment, image processing method, system | |
US20210295467A1 (en) | Method for merging multiple images and post-processing of panorama | |
KR102450236B1 (en) | Electronic apparatus, method for controlling thereof and the computer readable recording medium | |
CN113643414A (en) | Three-dimensional image generation method and device, electronic equipment and storage medium | |
WO2021185036A1 (en) | Point cloud data generation and real-time display method and apparatus, device, and medium | |
CN106162143B (en) | parallax fusion method and device | |
CN109785225B (en) | Method and device for correcting image | |
JP2019029721A (en) | Image processing apparatus, image processing method, and program | |
CN111783497B (en) | Method, apparatus and computer readable storage medium for determining characteristics of objects in video | |
US20220070426A1 (en) | Restoration of the fov of images for stereoscopic rendering | |
CN112085842A (en) | Depth value determination method and device, electronic equipment and storage medium | |
WO2017209213A1 (en) | Image processing device, image processing method, and computer-readable recording medium | |
Kim et al. | Seamless registration of dual camera images using optimal mask-based image fusion | |
Khayotov et al. | Efficient Stitching Algorithm for Stereoscopic VR Images | |
JP5636966B2 (en) | Error detection apparatus and error detection program | |
CN114143442B (en) | Image blurring method, computer device, and computer-readable storage medium | |
CN111709880B (en) | Multi-path picture splicing method based on end-to-end neural network | |
WO2021176877A1 (en) | Image processing device, image processing method, and image processing program | |
JP6700539B2 (en) | Video processing device, video processing method, and video processing program | |
US20240333908A1 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||