CN109214983B - Image acquisition device and image splicing method thereof

Info

Publication number
CN109214983B
Authority
CN
China
Prior art keywords: image, auxiliary, generate, image sensor, sensor
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710526703.1A
Other languages
Chinese (zh)
Other versions
CN109214983A
Inventor
和佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Acer Inc
Priority to CN201710526703.1A
Publication of CN109214983A
Application granted
Publication of CN109214983B
Legal status: Active

Classifications

    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T2207/20221: Image fusion; image merging

Abstract

The invention provides an image acquisition device and an image splicing method thereof, wherein the method includes the following steps: detecting a shooting scene with a first image sensor and a second image sensor of the image acquisition device, respectively, to generate first imaging information and second imaging information; acquiring images of the shooting scene with the first image sensor according to the first imaging information and the second imaging information, respectively, to generate a first image and a first auxiliary image; acquiring images of the shooting scene with the second image sensor according to the second imaging information and the first imaging information, respectively, to generate a second image and a second auxiliary image; and fusing the first image with the first auxiliary image, and the second image with the second auxiliary image, to obtain fusion results for the first overlap region in the first image and the second overlap region in the second image, thereby generating a stitched image.

Description

Image acquisition device and image splicing method thereof
Technical Field
The invention relates to an image acquisition device and an image splicing method thereof.
Background
With the development of science and technology, various smart image acquisition devices, such as tablet computers, personal digital assistants, and smart phones, have become indispensable tools for modern people. The camera modules carried by high-end smart image acquisition devices are comparable to, and can even replace, traditional consumer cameras, and a few of them approach digital single-lens reflex cameras in pixel count and image quality, or provide more advanced functions and effects.
Taking a panoramic camera as an example, an image stitching technique is used to join images shot by multiple lenses at the same time, so as to capture the shooting scene with a large field of view and give the viewer an immersive experience. However, since different lenses view the same scene at different viewing angles, the scene information detected by each lens differs slightly, which complicates the subsequent image stitching processing. For example, when sunlight falls from a direction near the left lens, the images acquired by the left and right lenses will differ in exposure, and an obvious seam line or unnatural color band will appear in the subsequently stitched image.
Disclosure of Invention
In view of this, the present invention provides an image acquisition device and an image stitching method thereof, which can greatly improve the quality of a stitched image.
In an embodiment of the invention, the image stitching method is applied to an image capturing apparatus including a first image sensor and a second image sensor, and the method includes the following steps. The shooting scene is detected by a first image sensor and a second image sensor respectively to generate first shooting information corresponding to the first image sensor and second shooting information corresponding to the second image sensor. And acquiring an image of a shooting scene according to the first shooting information and the second shooting information respectively by using a first image sensor to generate a first image and a first auxiliary image. And acquiring an image of the shooting scene according to the second shooting information and the first shooting information respectively by using a second image sensor to generate a second image and a second auxiliary image, wherein the first image and the first auxiliary image have a first overlapping area, the second image and the second auxiliary image have a second overlapping area, and the first overlapping area corresponds to the second overlapping area. The first image and the first auxiliary image are fused, and the second image and the second auxiliary image are fused to respectively obtain the fusion results corresponding to the first overlapping area and the second overlapping area. And then generating a spliced image according to the first image, the fusion result and the second image.
In an embodiment of the invention, the image capturing apparatus includes a first image sensor, a second image sensor and a processor, wherein the first image sensor and the second image sensor are coupled to each other, and the processor is coupled to the first image sensor and the second image sensor. The first image sensor and the second image sensor are used for detecting a shooting scene and acquiring an image of the shooting scene. The processor is configured to detect a shooting scene by using a first image sensor and a second image sensor respectively, to generate first shooting information corresponding to the first image sensor and second shooting information corresponding to the second image sensor, to acquire an image of the shooting scene by using the first image sensor according to the first shooting information and the second shooting information respectively, to generate a first image and a first auxiliary image, to acquire an image of the shooting scene by using the second image sensor according to the second shooting information and the first shooting information respectively, to generate a second image and a second auxiliary image, to fuse the first image and the first auxiliary image, and to fuse the second image and the second auxiliary image to acquire fusion results corresponding to a first overlapping region and a second overlapping region respectively, to generate a stitched image, wherein the first image and the first auxiliary image have a first overlapping region, the second image and the second auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region.
In an embodiment of the present invention, the image stitching method is applied to an image capturing apparatus including a single image sensor, and the method includes the following steps. The method includes detecting a photographic scene at a first angle of view with an image sensor to generate first camera information corresponding to the first angle of view, and acquiring an image of the photographic scene at the first angle of view with the image sensor to generate a first image. The method comprises the steps of detecting a shooting scene at a second visual angle by using an image sensor to generate second shooting information corresponding to the second visual angle, and acquiring images of the shooting scene according to the second shooting information and the first shooting information by using the image sensor to generate a second image and an auxiliary image, wherein the first image has a first overlapping area, the second image and the auxiliary image have a second overlapping area, and the first overlapping area corresponds to the second overlapping area. The second image and the auxiliary image are fused to generate a fusion result, and a spliced image is generated according to the first image, the fusion result and the second image.
In an embodiment of the invention, the image capturing apparatus includes a single image sensor and a processor, wherein the processor is coupled to the image sensor. The image sensor is used for detecting a shooting scene and acquiring an image of the shooting scene. The processor is configured to detect a shooting scene at a first viewing angle by using the image sensor to generate first shooting information corresponding to the first viewing angle, acquire an image of the shooting scene at the first viewing angle by using the image sensor to generate a first image, detect the shooting scene at a second viewing angle by using the image sensor to generate second shooting information corresponding to the second viewing angle, acquire an image of the shooting scene according to the second shooting information and the first shooting information by using the image sensor to generate a second image and an auxiliary image, fuse the second image and the auxiliary image to generate a fusion result, and generate a stitched image according to the first image, the fusion result and the second image, wherein the first image has a first overlapping area, the second image and the auxiliary image have a second overlapping area, and the first overlapping area corresponds to the second overlapping area.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a block diagram of an image capturing device according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating an image stitching method of an image capturing apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a second image according to an embodiment of the invention.
Fig. 4 is a functional flow chart of an image stitching method of an image capturing apparatus according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of an image acquisition apparatus according to another embodiment of the present invention.
FIG. 6 is a schematic diagram illustrating an overlap region in accordance with one embodiment of the present invention.
Fig. 7A is a schematic diagram of an image capturing apparatus according to another embodiment of the present invention.
Fig. 7B is a flowchart illustrating an image stitching method of an image capturing apparatus according to another embodiment of the invention.
Description of the reference numerals
100. 700: image acquisition device
10A: first image sensor
10B: second image sensor
10C: third image sensor
710: image sensor with a plurality of pixels
20. 720: processor with a memory having a plurality of memory cells
S202A to S208
Img2: second image
LO: second overlapping border line
LS: second seam line
LB: second image boundary line
P, P': pixel
dS, dO: distance between two adjacent plates
PI1: first camera information
And (3) PI2: second camera information
Img1: first image
Img2: second image
Img12: first auxiliary image
Img21: second auxiliary image
IBP1, IBP2: fusion results
SP: image stitching processing
Img: stitching images
L OL : left overlap region
L OR : right overlap region
OA: region in third image
r C : distance between two adjacent plates
S702 to S712: step (ii) of
Detailed Description
Some embodiments of the invention will now be described in detail with reference to the drawings, wherein like reference numerals refer to like or similar elements throughout the several views. These embodiments are only a part of the invention and do not disclose all of its possible implementations; rather, they are merely examples of the methods and apparatus within the scope of the claims.
Fig. 1 is a block diagram of an image capturing device according to an embodiment of the present invention, drawn for convenience of illustration only and not intended to limit the present invention. Fig. 1 first introduces all the components and the configuration of the image capturing device; their detailed functions will be disclosed together with fig. 2.
Referring to fig. 1, the image capturing apparatus 100 includes a first image sensor 10A, a second image sensor 10B, and a processor 20. In the present embodiment, the image capturing apparatus 100 is, for example, a digital camera, a single-lens reflex camera, a digital video camera, or another device with image capturing functions, such as a smart phone, a tablet computer, a personal digital assistant, or a head-mounted display, and the invention is not limited thereto.
The first image sensor 10A and the second image sensor 10B each include a lens, an actuator, and a photosensitive element, wherein the lens includes one or more lens elements. The actuator may be, for example, a stepping motor, a voice coil motor (VCM), a piezoelectric actuator, or another actuator that can mechanically move the lens, and the invention is not limited in this respect. The photosensitive elements sense the intensity of the light entering the respective lenses and thereby generate images. Each photosensitive element may be, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) element, or another element, but the invention is not limited thereto. It should be noted that the first image sensor 10A and the second image sensor 10B are coupled to each other for mutually transmitting the imaging information each of them detects, which will be described in detail later.
The processor 20 may be, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), or the like, or any combination thereof. The processor 20 is coupled to the first image sensor 10A and the second image sensor 10B, and controls the overall operation of the image capturing apparatus 100.
It should be clear to those skilled in the art that the image capturing device further includes a data storage device for storing images and data, which may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, or other similar devices, or a combination thereof.
The following describes detailed steps of the image stitching method performed by the image capturing apparatus 100, by way of example, in conjunction with the elements of the image capturing apparatus 100 of fig. 1.
Fig. 2 is a flowchart illustrating an image stitching method of an image capturing apparatus according to an embodiment of the present invention.
Referring to fig. 1 and fig. 2, before the image capturing apparatus 100 captures an image of the shooting scene, the processor 20 detects the shooting scene with the first image sensor 10A to generate first imaging information corresponding to the first image sensor 10A (step S202A), and detects the shooting scene with the second image sensor 10B to generate second imaging information corresponding to the second image sensor 10B (step S202B). The first imaging information may be information about the shooting scene analyzed by the first image sensor 10A and obtained through the 3A algorithm (auto exposure, auto focus, and auto white balance). Similarly, the second imaging information may be information about the shooting scene analyzed by the second image sensor 10B and obtained through the 3A algorithm. In the present embodiment, the first imaging information and the second imaging information may be, for example, an exposure degree, a color temperature, or the like.
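As a rough sketch of what such imaging information might look like in practice (the field names and values here are illustrative assumptions of ours, not taken from the patent), the 3A statistics each sensor derives before capture can be modeled as a small record that the two sensors then exchange:

```python
from dataclasses import dataclass

@dataclass
class ImagingInfo:
    """Scene statistics from a 3A pass; fields are illustrative only."""
    exposure_ev: float    # exposure chosen by auto-exposure
    color_temp_k: float   # color temperature chosen by auto-white-balance

# Each sensor meters the scene independently...
info_1 = ImagingInfo(exposure_ev=12.0, color_temp_k=5200.0)  # e.g. sunlit left view
info_2 = ImagingInfo(exposure_ev=10.5, color_temp_k=6500.0)  # e.g. shaded right view

# ...then the records are exchanged, so each sensor can capture one frame
# with its own settings and one with its peer's settings.
```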
Next, the first image sensor 10A and the second image sensor 10B transmit the first imaging information and the second imaging information to each other, and the processor 20 acquires images of the shooting scene with the first image sensor 10A according to the first imaging information and the second imaging information to generate a first image and a first auxiliary image (step S204A), and acquires images of the shooting scene with the second image sensor 10B according to the second imaging information and the first imaging information to generate a second image and a second auxiliary image (step S204B). Since the first image sensor 10A and the second image sensor 10B capture the same shooting scene at different angles of view, the first image and the second image will have an overlapping region with the same shooting content. By extension, the first auxiliary image and the second auxiliary image are simply images of the same scene acquired by the first image sensor 10A and the second image sensor 10B using different imaging information, so they will have the same overlapping regions as the first image and the second image. For convenience of explanation, the overlapping region in the first image and the first auxiliary image captured by the first image sensor 10A is hereinafter referred to as the "first overlap region", and the overlapping region in the second image and the second auxiliary image captured by the second image sensor 10B is hereinafter referred to as the "second overlap region".
In the image stitching technique, the overlapping area of two images will determine the quality of image stitching. To ensure natural continuity of image stitching, the processor 20 fuses the first image and the first auxiliary image (step S206A), and fuses the second image and the second auxiliary image (step S206B), to obtain the fusion results corresponding to the first overlap region and the second overlap region, respectively, and further generates a stitched image from the first image, the fusion result, and the second image (step S208). In other words, in the subsequent image stitching, the portions corresponding to the original first overlapping region and the original second overlapping region are replaced by the fusion result. Because the fusion result is based on the camera information detected by the two image sensors at the same time, obvious seam lines or unnatural color bands can be avoided from appearing in the spliced image.
In detail, from the perspective of the first image sensor 10A, the processor 20 fuses the first overlap region of the first image with the first overlap region of the first auxiliary image, and from the perspective of the second image sensor 10B, the processor 20 fuses the second overlap region of the second image with the second overlap region of the second auxiliary image. Here, the first overlap region includes a first overlap boundary line and a first seam line, the second overlap region includes a second overlap boundary line and a second seam line, and the first seam line in the first image and the second seam line in the second image will form the seam where the two images are joined. Accordingly, for the first image, the processor 20 replaces the region between the first overlap boundary line and the first seam line with the fusion result, hereinafter referred to as the "first fused overlap region"; for the second image, the processor 20 replaces the region between the second overlap boundary line and the second seam line with the fusion result, hereinafter referred to as the "second fused overlap region". The processor 20 then generates the stitched image from the first image, the first fused overlap region, the second fused overlap region, and the second image. In the present embodiment, the areas of the first fused overlap region and the second fused overlap region are assumed to be equal; that is, half of the stitched overlap region is based on the first image and the other half on the second image. However, this is for illustrative purposes only and is not meant to limit the present invention.
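To make the composition concrete, the following minimal sketch (our own illustration, assuming straight vertical boundary and seam lines and geometrically pre-aligned images, which the patent does not require) assembles the stitched image from the four pieces named above:

```python
import numpy as np

def compose_stitched(first_img, first_fused, second_fused, second_img,
                     o1, s1, s2, o2):
    """Column-wise assembly of the stitched image (simplified sketch).

    o1, s1: column indices of the first overlap boundary line and the
            first seam line in the first image (o1 < s1).
    s2, o2: column indices of the second seam line and the second overlap
            boundary line in the second image (s2 < o2).
    first_fused / second_fused: the fused overlap strips that replace
    first_img[:, o1:s1] and second_img[:, s2:o2], respectively.
    """
    return np.hstack([
        first_img[:, :o1],   # first image outside its overlap region
        first_fused,         # first fused overlap region (half the seam area)
        second_fused,        # second fused overlap region (other half)
        second_img[:, o2:],  # second image outside its overlap region
    ])
```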
Specifically, the generation of the second fused overlap region is described below with reference to fig. 3, which illustrates a second image according to an embodiment of the present invention; the generation of the first fused overlap region can be analogized by the same method.
Referring to fig. 3, the second image Img2 acquired by the second image sensor 10B includes a second overlapping boundary line LO, a second seam line LS and a second image boundary line LB. The area between the second overlap boundary line LO and the second image boundary line LB is a second overlap area, and the area between the second seam line LS and the second image boundary line LB is a stitching area, wherein the stitching area is only used for image stitching but is not present in the final stitched image.
Here, the region between the second overlap boundary line LO and the second seam line LS in the second image Img2 will be replaced by the second fused overlap region. The processor 20 performs image fusion on this same region of the original second image Img2 and of the second auxiliary image (not shown) to generate the second fused overlap region. For example, assume that pixel P is one of the pixels in the second fused overlap region. The processor 20 calculates the distance dO of the pixel P from the second overlap boundary line LO and the distance dS of the pixel P from the second seam line LS to generate the second weight ratio. Then, the processor 20 calculates a weighted sum of the pixel value of the pixel corresponding to P in the second image and the pixel value of the pixel corresponding to P in the second auxiliary image according to the second weight ratio, to generate the pixel value of the pixel P. This can be expressed as:

p(x,y,O) = f_A(T(dO), T(dS)) × p(x,y,A) + f_B(T(dO), T(dS)) × p(x,y,B)

where p(x,y,O) is the pixel value of the pixel at coordinates (x, y) in the second fused overlap region, p(x,y,A) is the pixel value of the pixel at coordinates (x, y) in the second auxiliary image captured using the first imaging information of the first image sensor 10A, p(x,y,B) is the pixel value of the pixel at coordinates (x, y) in the second image captured using the second imaging information of the second image sensor 10B itself, T is a coordinate conversion function between the image sensor positions, and f_A and f_B are any functions satisfying f_A(x, y) + f_B(x, y) = 1. Further, when the pixel P is located on the second overlap boundary line LO (i.e., dO = 0), it is at the position in the second fused overlap region farthest from the first image, so its pixel value will be closest to the original pixel value captured with the second imaging information (that is, p(x,y,O) = p(x,y,B)). On the other hand, when the pixel P is located on the second seam line LS (i.e., dS = 0), its pixel value will draw on both the second image captured with the second imaging information and the second auxiliary image captured with the first imaging information (for example, an equal blend of the two). Overall, in the present embodiment, the processor 20 may generate the pixel value of the pixel P according to, for example, the following equation:

[equation rendered only as an image in the source]
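The exact weighting functions above survive in the source only as images, so as a minimal sketch we assume a linear weighting that satisfies the two boundary conditions just described (the result equals p(x,y,B) on the overlap boundary line, and an equal blend on the seam line):

```python
def fuse_overlap_pixel(p_aux, p_own, d_o, d_s):
    """Blend one pixel of the second fused overlap region (sketch only).

    p_aux: pixel value captured with the other sensor's imaging information
           (the second auxiliary image, p(x,y,A)).
    p_own: pixel value captured with the sensor's own imaging information
           (the second image, p(x,y,B)).
    d_o:   distance from the pixel to the second overlap boundary line LO.
    d_s:   distance from the pixel to the second seam line LS.
    """
    # Weight of the auxiliary capture: 0 on the boundary line (d_o == 0),
    # 1/2 on the seam line (d_s == 0); f_A + f_B = 1 holds by construction.
    w_aux = d_o / (2.0 * (d_o + d_s)) if (d_o + d_s) > 0 else 0.5
    return w_aux * p_aux + (1.0 - w_aux) * p_own
```

On the seam line this yields (p_aux + p_own) / 2, which matches the corresponding half-weight blend computed from the first image's side, so the two sides of the seam agree.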
for convenience and clarity, fig. 4 is a functional flow chart of an image stitching method of an image capturing apparatus according to an embodiment of the present invention, so as to integrate the above-mentioned overall processes.
Referring to fig. 4, first, the first image sensor 10A detects the shooting scene to generate the first imaging information PI1, and the second image sensor 10B detects the shooting scene to generate the second imaging information PI2. The first image sensor 10A transmits the first imaging information PI1 to the second image sensor 10B, and the second image sensor 10B transmits the second imaging information PI2 to the first image sensor 10A.

Next, the first image sensor 10A acquires an image of the shooting scene according to the first imaging information PI1 to generate a first image Img1, and acquires an image of the shooting scene according to the second imaging information PI2 to generate a first auxiliary image Img12. The processor 20 performs image fusion on the first image Img1 and the first auxiliary image Img12 to obtain the fusion result IBP1. On the other hand, the second image sensor 10B acquires an image of the shooting scene according to the second imaging information PI2 to generate a second image Img2, and acquires an image of the shooting scene according to the first imaging information PI1 to generate a second auxiliary image Img21. The processor 20 performs image fusion on the second image Img2 and the second auxiliary image Img21 to obtain the fusion result IBP2.
Next, the processor 20 performs the image stitching process SP on the fusion results together with the first image Img1 and the second image Img2 to generate the stitched image Img'. For details of the above process, refer to the related description of the foregoing embodiments, which is not repeated here.
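Putting the flow of fig. 4 together, a compact sketch of the whole two-sensor pipeline might read as follows (the sensor and processor methods are illustrative stand-ins of ours, not a real camera API):

```python
def stitch_two_sensors(sensor_a, sensor_b, processor):
    """End-to-end flow of fig. 4 (illustrative method names)."""
    pi1 = sensor_a.meter_scene()        # first imaging information PI1
    pi2 = sensor_b.meter_scene()        # second imaging information PI2
    # Each sensor captures twice: once with its own settings,
    # once with the settings received from its peer.
    img1 = sensor_a.capture(pi1)        # first image Img1
    img12 = sensor_a.capture(pi2)       # first auxiliary image Img12
    img2 = sensor_b.capture(pi2)        # second image Img2
    img21 = sensor_b.capture(pi1)       # second auxiliary image Img21
    ibp1 = processor.fuse_overlap(img1, img12)   # fusion result IBP1
    ibp2 = processor.fuse_overlap(img2, img21)   # fusion result IBP2
    # Image stitching process SP: compose Img1, IBP1, IBP2, Img2 into Img'.
    return processor.stitch(img1, ibp1, ibp2, img2)
```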
The above embodiments may be extended to an image capturing apparatus having three or more image sensors. When all the image sensors of the image capturing device are arranged collinearly, the stitched image may be assembled according to the flow of fig. 2 from the overlapping area of the images captured by each pair of adjacent image sensors. On the other hand, when the image sensors are not arranged in a line (for example, in an image capturing device that captures 360-degree surroundings), the common overlapping area shared by three or more image sensors must be further considered during image stitching.
In detail, fig. 5 is a schematic diagram of an image capturing apparatus according to another embodiment of the present invention.
Referring to fig. 5, it is assumed that the image capturing apparatus 100' includes a first image sensor 10A, a second image sensor 10B, and a third image sensor 10C coupled to one another; that is, the image capturing apparatus 100' can be regarded as the image capturing apparatus 100 with an additional third image sensor 10C, where the third image sensor 10C is located between the first image sensor 10A and the second image sensor 10B but not on the line connecting them. The third image sensor 10C is likewise controlled by the processor 20. For convenience of explanation, the arrangement positions of the first image sensor 10A, the second image sensor 10B, and the third image sensor 10C are hereinafter referred to as "left", "right", and "middle", respectively.
In the present embodiment, the overlapping area of the images acquired by the first image sensor 10A and the second image sensor 10B may be stitched according to the flow of fig. 2. From the perspective of the third image sensor 10C, it likewise first detects the shooting scene to generate third imaging information and acquires a third image of the shooting scene accordingly. At the same time, the third image sensor 10C also acquires images of the shooting scene (hereinafter the "left auxiliary image" and the "right auxiliary image") based on the first imaging information and the second imaging information received from the first image sensor 10A and the second image sensor 10B. Here, the third image, the left auxiliary image, and the right auxiliary image all have an overlapping region with the first image (hereinafter the "left overlap region"), an overlapping region with the second image (hereinafter the "right overlap region"), and an overlapping region shared with both the first image and the second image (hereinafter the "common overlap region"). The fusion of these overlap regions is described with reference to fig. 6, which is a schematic diagram of the overlap regions according to an embodiment of the invention.
Referring to fig. 6, the area OA is a portion of the third image, in which the area covered by L_OL is the left overlap region shared with the first image, the area covered by L_OR is the right overlap region shared with the second image, and the intersection of the left overlap region and the right overlap region is the common overlap region of the first image, the second image, and the third image. Further, the stitching area between the overlap regions may, for example, span a range of 10 degrees in a 360-degree spherical space. The processor 20 fuses the left overlap region of the third image with the left auxiliary image, fuses the right overlap region of the third image with the right auxiliary image, and fuses the common overlap region of the third image with both the left auxiliary image and the right auxiliary image; the pixel value of a fused pixel P' can be expressed as:

[equation rendered only as an image in the source]

where p(x,y,O) is the pixel value of the fused pixel at coordinates (x, y), p(x,y,L) is the pixel value of the pixel at coordinates (x, y) in the left auxiliary image captured using the first imaging information, p(x,y,R) is the pixel value of the pixel at coordinates (x, y) in the right auxiliary image captured using the second imaging information, p(x,y,C) is the pixel value of the pixel at coordinates (x, y) in the third image captured using the third image sensor's own third imaging information, and T is a coordinate transformation function between the Cartesian coordinate system and the 360-degree spherical space (360 spherical space).

In the present embodiment, the processor 20 may generate the pixel value of the pixel P', for example, as:

[equation rendered only as an image in the source]

where OR is the right overlap region, OL is the left overlap region, the area belonging to both OR and OL is the common overlap region, r is the distance between the pixel P' and the center point of the common overlap region, and r_R, r_L, and r_C are the distances from the pixel P' to the contours of the right overlap region, the left overlap region, and the common overlap region, respectively.
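Since both equations above survive in the source only as images, the following sketch is our assumption of one plausible contour-distance weighting for a pixel that falls inside the common overlap region; the patent's actual formula may differ:

```python
def fuse_common_pixel(p_left, p_right, p_center, r_l, r_r, r_c):
    """Fuse one common-overlap pixel from three sources (sketch only).

    p_left:   pixel from the left auxiliary image (first imaging info).
    p_right:  pixel from the right auxiliary image (second imaging info).
    p_center: pixel from the third image (third imaging info).
    r_l, r_r, r_c: distances from the pixel to the contours of the left,
                   right, and common overlap regions, respectively.
    """
    # Normalized distance-based weights: a source counts for more the
    # deeper the pixel sits inside the region associated with it.
    total = r_l + r_r + r_c
    if total == 0:
        return (p_left + p_right + p_center) / 3.0
    return (r_l * p_left + r_r * p_right + r_c * p_center) / total
```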
After all the overlap regions have undergone the image fusion processing, the processor 20 performs image stitching using the first image, the first fused overlap region, the second image, the second fused overlap region, the third image, and the fusion results of the right overlap region, the left overlap region, and the common overlap region in the third image, so as to generate the stitched image.
The above concept can also be implemented in an image capturing apparatus with a single image sensor. In detail, fig. 7A is a schematic diagram of an image capturing apparatus according to another embodiment of the present invention.
Referring to fig. 7A, the image capturing apparatus 700 includes an image sensor 710 and a processor 720. In the present embodiment, the functional structures of the image sensor 710 and the processor 720 are equivalent to the image sensors 10A/10B and the processor 20 of the image capturing apparatus 100 in fig. 1, and please refer to the relevant paragraphs of fig. 1 for detailed description, which is not repeated herein.
Fig. 7B is a flowchart illustrating an image stitching method of the image capturing apparatus according to an embodiment of the present invention, and the flowchart of fig. 7B is applicable to the image capturing apparatus 700 of fig. 7A.
Referring to fig. 7A and fig. 7B, the processor 720 of the image capturing apparatus 700 detects the shooting scene with the image sensor 710 at a first angle of view to generate first imaging information corresponding to the first angle of view (step S702), and acquires an image of the shooting scene with the image sensor 710 at the first angle of view to generate a first image (step S704). Next, the processor 720 detects the shooting scene with the image sensor 710 at a second angle of view to generate second imaging information corresponding to the second angle of view (step S706), and acquires images of the shooting scene with the image sensor 710 according to the second imaging information and the first imaging information to generate a second image and an auxiliary image, respectively (step S708). In other words, acquiring the second image at the second angle of view with the image sensor 710 follows the same concept as acquiring the second image with the second image sensor 10B in fig. 1; the only difference is that the image acquisition apparatus 700 of the present embodiment must be moved to the position of the second angle of view before acquiring the second image.
Since the image sensor 710 captures the same shooting scene at different angles of view, the first image and the second image will have overlapping regions with the same shooting content. By extension, the auxiliary image is simply an image of the same scene acquired by the image sensor 710 using different imaging information, so the auxiliary image will have the same overlapping region as the first image and the second image. Hereinafter, the overlapping region of the first image is referred to as the "first overlap region", and the overlapping region of the second image and the auxiliary image is referred to as the "second overlap region".
In the present embodiment, the processor 720 fuses the second image and the auxiliary image to generate a fusion result (step S710). The difference from the previous embodiments is that the first overlap region is discarded and the second overlap region is directly replaced by the fusion result. The weight ratio for fusing the second image and the auxiliary image may, for example, depend on the distances from each pixel to the two overlap boundary lines of the overlap region, although the invention is not limited thereto. Thereafter, the processor 720 generates a stitched image from the first image, the fusion result, and the second image (step S712).
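As with the two-sensor case, the text suggests weighting by the distances to the overlap region's two boundary lines without fixing a formula, so the sketch below assumes a simple linear ramp (our assumption, not the patent's exact method):

```python
def fuse_single_sensor_pixel(p_second, p_aux, d_first_side, d_far_side):
    """Blend one overlap pixel for the single-sensor variant (sketch only).

    p_second:     pixel captured with the second-view imaging information.
    p_aux:        same pixel captured with the first-view imaging information.
    d_first_side: distance to the overlap boundary line adjacent to the
                  first image.
    d_far_side:   distance to the opposite overlap boundary line.
    """
    # Near the first image (d_first_side -> 0) the auxiliary capture
    # dominates, keeping colors continuous across the junction; at the
    # far boundary the second image's own capture takes over.
    total = d_first_side + d_far_side
    w_aux = d_far_side / total if total > 0 else 0.5
    return w_aux * p_aux + (1.0 - w_aux) * p_second
```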
In summary, in the image acquisition device and the image stitching method thereof provided by the present invention, each image sensor acquires an image of the shooting scene using the imaging information it detected itself and additionally acquires an image using the imaging information detected by the other sensor, and the images acquired with different imaging information are fused over the overlapping area of the images to be stitched. In this way, an image that matches the real shooting scene information can be generated, and obvious seam lines or unnatural color bands can be avoided in the stitched image, thereby improving its quality. In addition, the invention can also be implemented in image capturing apparatuses having a single image sensor or three or more image sensors, which enhances its applicability in practical applications.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention.

Claims (7)

1. An image stitching method of an image acquisition device is applicable to the image acquisition device comprising a first image sensor and a second image sensor, and comprises the following steps:
respectively detecting shooting scenes by using the first image sensor and the second image sensor to generate first shooting information corresponding to the first image sensor and second shooting information corresponding to the second image sensor;
acquiring images of the shooting scene according to the first shooting information and the second shooting information respectively by using the first image sensor to generate a first image and a first auxiliary image;
acquiring an image of the shooting scene according to the second shooting information and the first shooting information respectively by using the second image sensor to generate a second image and a second auxiliary image, wherein the first image and the first auxiliary image have a first overlapping area, the second image and the second auxiliary image have a second overlapping area, and the first overlapping area corresponds to the second overlapping area; and
fusing the first image with the first auxiliary image and fusing the second image with the second auxiliary image to generate a stitched image,
wherein fusing the first image with the first auxiliary image and fusing the second image with the second auxiliary image to generate the stitched image comprises:
fusing the first overlap region of the first image with the first overlap region in the first auxiliary image to produce a first fused overlap region, and fusing the second overlap region of the second image with the second overlap region in the second auxiliary image to produce a second fused overlap region; and
and generating the stitched image according to the first image, the first fused overlap region, the second fused overlap region, and the second image.
2. The method of claim 1, wherein the first overlap region comprises a first overlap boundary line and a first seam line, the first fused overlap region corresponds to a region between the first overlap boundary line and the first seam line, the second overlap region comprises a second overlap boundary line and a second seam line, and the second fused overlap region corresponds to a region between the second overlap boundary line and the second seam line.
3. The method of claim 2, wherein:
each first pixel in the first fused overlap region is generated in a manner that includes:
calculating the distances from the first pixel to the first overlap boundary line and the first seam line, respectively, to generate a first weight ratio; and
calculating a weighted sum of the pixel value corresponding to the first pixel in the first image and the pixel value corresponding to the first pixel in the first auxiliary image according to the first weight ratio, to generate the pixel value of the first pixel; and
each second pixel in the second fused overlap region is generated in a manner that includes:
calculating the distances from the second pixel to the second overlap boundary line and the second seam line, respectively, to generate a second weight ratio; and
calculating a weighted sum of the pixel value corresponding to the second pixel in the second image and the pixel value corresponding to the second pixel in the second auxiliary image according to the second weight ratio, to generate the pixel value of the second pixel.
4. The method of claim 1, wherein the image acquisition device further comprises a third image sensor, and the method further comprises:
detecting the shooting scene by using the third image sensor to generate third shooting information corresponding to the third image sensor;
acquiring an image of the shooting scene according to the third shooting information, the first shooting information and the second shooting information respectively by using the third image sensor to generate a third image, a left auxiliary image and a right auxiliary image, wherein the third image, the left auxiliary image and the right auxiliary image all have a left overlapping area related to the first image, a right overlapping area related to the second image and a common overlapping area related to the first image and the second image simultaneously; and
fusing the left overlap region of the third image and the left auxiliary image, fusing the right overlap region of the third image and the right auxiliary image, fusing the common overlap region of the third image, the left auxiliary image, and the right auxiliary image to generate a fusion result associated with the third image.
5. The method of claim 4, wherein generating the stitched image from the first image, the first fused overlap region, the second fused overlap region, and the second image further comprises:
generating the stitched image using the first image, the first fused overlapping region, the second image, the third image, and the fusion result associated with the third image.
6. An image acquisition apparatus, characterized by comprising:
a first image sensor to acquire an image;
the second image sensor is coupled with the first image sensor and used for acquiring an image; and
a processor coupled to the first image sensor and the second image sensor and configured to perform the following steps:
respectively detecting shooting scenes by using the first image sensor and the second image sensor to generate first shooting information corresponding to the first image sensor and second shooting information corresponding to the second image sensor;
acquiring images of the shooting scene according to the first shooting information and the second shooting information respectively by using the first image sensor to generate a first image and a first auxiliary image;
acquiring an image of the shooting scene according to the second shooting information and the first shooting information respectively by using the second image sensor to generate a second image and a second auxiliary image, wherein the first image and the first auxiliary image have a first overlapping area, the second image and the second auxiliary image have a second overlapping area, and the first overlapping area corresponds to the second overlapping area; and
fusing the first image with the first auxiliary image and fusing the second image with the second auxiliary image to generate a stitched image,
wherein fusing the first image with the first auxiliary image and fusing the second image with the second auxiliary image to generate the stitched image comprises:
fusing the first overlapping region of the first image with the first overlapping region in the first auxiliary image to produce a first fused overlapping region, and fusing the second overlapping region of the second image with the second overlapping region in the second auxiliary image to produce a second fused overlapping region; and
and generating the stitched image according to the first image, the first fused overlapping region, the second fused overlapping region, and the second image.
7. The image capturing device as claimed in claim 6, wherein the image capturing device further comprises a third image sensor coupled to the first image sensor, the second image sensor and the processor, wherein the processor is further configured to perform the following steps:
detecting the shooting scene by using the third image sensor to generate third shooting information corresponding to the third image sensor;
acquiring an image of the shooting scene according to the third shooting information, the first shooting information and the second shooting information respectively by using the third image sensor to generate a third image, a left auxiliary image and a right auxiliary image, wherein the third image, the left auxiliary image and the right auxiliary image all have a left overlapping area related to the first image, a right overlapping area related to the second image and a common overlapping area related to the first image and the second image simultaneously; and
fusing the left overlap region of the third image and the left auxiliary image, fusing the right overlap region of the third image and the right auxiliary image, fusing the common overlap region of the third image, the left auxiliary image, and the right auxiliary image to generate a fusion result associated with the third image.
CN201710526703.1A 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof Active CN109214983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526703.1A CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710526703.1A CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Publications (2)

Publication Number Publication Date
CN109214983A CN109214983A (en) 2019-01-15
CN109214983B true CN109214983B (en) 2022-12-13

Family

ID=64976197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526703.1A Active CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Country Status (1)

Country Link
CN (1) CN109214983B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311359A (en) * 2020-01-21 2020-06-19 杭州微洱网络科技有限公司 Jigsaw method for realizing human shape display effect based on e-commerce image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142138A (en) * 2011-03-23 2011-08-03 深圳市汉华安道科技有限责任公司 Image processing method and subsystem in vehicle assisted system
CN102859987A (en) * 2010-04-05 2013-01-02 高通股份有限公司 Combining data from multiple image sensors
CN103366351A (en) * 2012-03-29 2013-10-23 华晶科技股份有限公司 Method for generating panoramic image and image acquisition device thereof
WO2016165016A1 (en) * 2015-04-14 2016-10-20 Magor Communications Corporation View synthesis-panorama

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081081A1 (en) * 2005-10-07 2007-04-12 Cheng Brett A Automated multi-frame image capture for panorama stitching using motion sensor
JP4297111B2 (en) * 2005-12-14 2009-07-15 ソニー株式会社 Imaging apparatus, image processing method and program thereof
US9390530B2 (en) * 2011-05-27 2016-07-12 Nokia Technologies Oy Image stitching
US20130141526A1 (en) * 2011-12-02 2013-06-06 Stealth HD Corp. Apparatus and Method for Video Image Stitching

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102859987A (en) * 2010-04-05 2013-01-02 高通股份有限公司 Combining data from multiple image sensors
CN102142138A (en) * 2011-03-23 2011-08-03 深圳市汉华安道科技有限责任公司 Image processing method and subsystem in vehicle assisted system
CN103366351A (en) * 2012-03-29 2013-10-23 华晶科技股份有限公司 Method for generating panoramic image and image acquisition device thereof
WO2016165016A1 (en) * 2015-04-14 2016-10-20 Magor Communications Corporation View synthesis-panorama

Also Published As

Publication number Publication date
CN109214983A (en) 2019-01-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant