CN109214983A - Image acquisition device and image stitching method thereof - Google Patents

Image acquisition device and image stitching method thereof

Info

Publication number
CN109214983A
CN109214983A (application CN201710526703.1A)
Authority
CN
China
Prior art keywords
image
overlapping region
information
generate
imaging sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710526703.1A
Other languages
Chinese (zh)
Other versions
CN109214983B (en)
Inventor
和佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201710526703.1A priority Critical patent/CN109214983B/en
Publication of CN109214983A publication Critical patent/CN109214983A/en
Application granted granted Critical
Publication of CN109214983B publication Critical patent/CN109214983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an image acquisition device and an image stitching method for the same. The method includes: detecting a shooting scene with the first image sensor and the second image sensor of the image acquisition device respectively, to generate first capture information and second capture information; using the first image sensor to capture images of the scene according to the first capture information and according to the second capture information respectively, to generate a first image and a first auxiliary image; using the second image sensor to capture images of the scene according to the second capture information and according to the first capture information respectively, to generate a second image and a second auxiliary image; fusing the first image with the first auxiliary image and the second image with the second auxiliary image, to obtain fusion results for a first overlapping region in the first image and for a corresponding second overlapping region in the second image; and generating a stitched image accordingly.

Description

Image acquisition device and image stitching method thereof
Technical field
The present invention relates to an image acquisition device and an image stitching method for the same.
Background
With the development of technology, all manner of smart image-capturing devices, such as tablet computers, personal digital assistants and smartphones, have become indispensable tools of modern life. The lenses carried by high-end smart image-capturing devices rival, and can even replace, traditional consumer cameras, and a few high-end models offer pixel counts and image quality approaching those of single-lens cameras in order to provide more advanced functions and effects.
Taking a panoramic camera as an example, image stitching technology is used to join the images shot simultaneously through multiple lenses, so as to capture a wider field of view and give the viewer the feeling of being at the scene when looking at the photograph. However, precisely because different lenses view the same scene from different angles, the scene information they detect differs slightly, which complicates the subsequent image stitching. For example, when sunlight strikes from a direction close to the left lens, the exposure levels of the images captured by the left lens and the right lens will differ, and an obvious seam or an unnatural colour band will appear in the stitched image.
Summary of the invention
In view of this, the present invention provides an image acquisition device and an image stitching method for the same, which can substantially improve the quality of stitched images.
In an embodiment of the invention, the image stitching method is adapted to an image acquisition device that includes a first image sensor and a second image sensor, and includes the following steps. The shooting scene is detected with the first image sensor and the second image sensor respectively, to generate first capture information corresponding to the first image sensor and second capture information corresponding to the second image sensor. Using the first image sensor, images of the scene are captured according to the first capture information and according to the second capture information respectively, to generate a first image and a first auxiliary image. Using the second image sensor, images of the scene are captured according to the second capture information and according to the first capture information respectively, to generate a second image and a second auxiliary image, where the first image and the first auxiliary image have a first overlapping region, the second image and the second auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region. The first image is fused with the first auxiliary image, and the second image with the second auxiliary image, to obtain fusion results corresponding to the first overlapping region and to the second overlapping region respectively. A stitched image is then generated from the first image, the fusion results and the second image.
In an embodiment of the invention, the image acquisition device includes a first image sensor, a second image sensor and a processor, where the first image sensor and the second image sensor are coupled to each other and the processor is coupled to both. The first image sensor and the second image sensor detect the shooting scene and capture images of it. The processor detects the scene with the first image sensor and the second image sensor respectively, to generate first capture information corresponding to the first image sensor and second capture information corresponding to the second image sensor; uses the first image sensor to capture images of the scene according to the first capture information and the second capture information respectively, generating a first image and a first auxiliary image; uses the second image sensor to capture images of the scene according to the second capture information and the first capture information respectively, generating a second image and a second auxiliary image; fuses the first image with the first auxiliary image and the second image with the second auxiliary image, obtaining fusion results corresponding to the first overlapping region and to the second overlapping region respectively; and generates a stitched image accordingly. The first image and the first auxiliary image have the first overlapping region, the second image and the second auxiliary image have the second overlapping region, and the first overlapping region corresponds to the second overlapping region.
In an embodiment of the invention, the image stitching method is adapted to an image acquisition device that includes a single image sensor, and includes the following steps. The image sensor detects the shooting scene from a first viewing angle, to generate first capture information corresponding to the first viewing angle, and captures an image of the scene from the first viewing angle, to generate a first image. The image sensor then detects the scene from a second viewing angle, to generate second capture information corresponding to the second viewing angle, and captures images of the scene according to the second capture information and the first capture information respectively, to generate a second image and an auxiliary image, where the first image has a first overlapping region, the second image and the auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region. The second image and the auxiliary image are fused to generate a fusion result, and a stitched image is generated from the first image, the fusion result and the second image.
In an embodiment of the invention, the image acquisition device includes a single image sensor and a processor coupled to it. The image sensor detects the shooting scene and captures images of it. The processor detects the scene with the image sensor from a first viewing angle, to generate first capture information corresponding to the first viewing angle; captures an image of the scene from the first viewing angle to generate a first image; detects the scene with the image sensor from a second viewing angle, to generate second capture information corresponding to the second viewing angle; captures images of the scene according to the second capture information and the first capture information respectively, generating a second image and an auxiliary image; fuses the second image with the auxiliary image to generate a fusion result; and generates a stitched image from the first image, the fusion result and the second image. The first image has a first overlapping region, the second image and the auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region.
To make the foregoing features and advantages of the present invention clearer and easier to understand, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of an image acquisition device according to an embodiment of the invention.
Fig. 2 is a flow chart of the image stitching method of the image acquisition device according to an embodiment of the invention.
Fig. 3 shows the second image according to an embodiment of the invention.
Fig. 4 is a functional flow chart of the image stitching method of the image acquisition device according to an embodiment of the invention.
Fig. 5 is a schematic diagram of an image acquisition device according to another embodiment of the invention.
Fig. 6 is a schematic diagram of overlapping regions according to an embodiment of the invention.
Fig. 7A is a schematic diagram of an image acquisition device according to another embodiment of the invention.
Fig. 7B is a flow chart of the image stitching method of the image acquisition device according to another embodiment of the invention.
Description of symbols
100, 700: image acquisition device
10A: first image sensor
10B: second image sensor
10C: third image sensor
710: image sensor
20, 720: processor
S202A~S208: step
Img2: the second image
LO: second overlap boundary line
LS: second stitching line
LB: second image boundary line
P, P': pixels
dS, dO: distances
PI1: first capture information
PI2: second capture information
Img1: first image
Img2: second image
Img12: first auxiliary image
Img21: second auxiliary image
IBP1, IBP2: fusion results
SP: image stitching processing
Img': stitched image
LOL: left overlapping region
LOR: right overlapping region
OA: region in the third image
r, ΓL, ΓR, ΓC: distances
S702~S712: step
Detailed description of embodiments
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. Reference symbols cited in the following description denote the same or similar elements when the same symbols appear in different drawings. These embodiments are only part of the invention and do not disclose all possible implementations; rather, they are examples of the method and device within the scope of the patent application.
Fig. 1 is a block diagram of an image acquisition device according to an embodiment of the invention; it is provided for convenience of illustration and is not intended to limit the invention. Fig. 1 first introduces the components of the image acquisition device and their configuration; their detailed functions are disclosed together with Fig. 2.
Referring to Fig. 1, the image acquisition device 100 includes a first image sensor 10A, a second image sensor 10B and a processor 20. In this embodiment, the image acquisition device 100 may be, for example, a digital camera, a single-lens camera, a digital camcorder, or another device with image-capturing functions such as a smartphone, tablet computer, personal digital assistant or head-mounted display; the invention is not limited in this regard.
The first image sensor 10A and the second image sensor 10B each include a lens assembly, an actuator and a photosensitive element, where the lens assembly includes lenses. The actuator may be, for example, a stepping motor, a voice coil motor (VCM), a piezoelectric actuator, or any other actuator capable of moving the lenses mechanically; the invention is not limited in this regard. The photosensitive element senses the intensity of light entering the lenses to generate an image, and may be, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) element, or another element; the invention is not limited in this regard. It is worth noting that the first image sensor 10A and the second image sensor 10B are coupled to each other so as to exchange the capture information each has detected, as detailed later.
The processor 20 may be, for example, a central processing unit (CPU) or another general-purpose or special-purpose programmable microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device, or a combination of these devices. The processor 20 is coupled to the first image sensor 10A and the second image sensor 10B to control the overall operation of the image acquisition device 100.
Those skilled in the art will appreciate that the image acquisition device further includes a data storage device for storing images and data, which may be, for example, a fixed or removable random access memory (RAM) of any type, a read-only memory (ROM), a flash memory, a hard disk, another similar device, or a combination of these devices.
Embodiments are enumerated below in conjunction with the elements of the image acquisition device 100 of Fig. 1, to illustrate the detailed steps by which the device carries out the image stitching method.
Fig. 2 is a flow chart of the image stitching method of the image acquisition device according to an embodiment of the invention.
Referring to Fig. 1 and Fig. 2, before the image acquisition device 100 captures an image of the shooting scene, the processor 20 detects the scene using the first image sensor 10A, to generate first capture information corresponding to the first image sensor 10A (step S202A), and detects the scene using the second image sensor 10B, to generate second capture information corresponding to the second image sensor 10B (step S202B). Here, the first capture information may be scene-related information obtained by analysing the shooting scene with a 3A algorithm (auto focus, auto exposure, auto white balance) for the first image sensor 10A; similarly, the second capture information may be obtained with the 3A algorithm for the second image sensor 10B. In this embodiment, the first and second capture information may include, for example, exposure level and colour temperature.
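The detection and exchange of 3A results in steps S202A/S202B can be sketched as follows; `CaptureInfo`, its two fields and `exchange` are illustrative names, not from the patent, and the sketch assumes the capture information reduces to an exposure value and a colour temperature as this embodiment suggests.

```python
from dataclasses import dataclass

@dataclass
class CaptureInfo:
    """Hypothetical container for one sensor's 3A detection result."""
    exposure: float      # exposure level from auto exposure
    color_temp: float    # white-balance colour temperature in kelvin

def exchange(info_a: CaptureInfo, info_b: CaptureInfo):
    """Each sensor receives the other's capture information, so that it
    can later take a second, auxiliary shot under its neighbour's settings."""
    return info_b, info_a
```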
Next, the first image sensor 10A and the second image sensor 10B send the first capture information and the second capture information to each other. The processor 20 then uses the first image sensor 10A to capture images of the scene according to the first capture information and according to the second capture information respectively, generating a first image and a first auxiliary image (step S204A), and uses the second image sensor 10B to capture images of the scene according to the second capture information and according to the first capture information respectively, generating a second image and a second auxiliary image (step S204B). Because the first image sensor 10A and the second image sensor 10B capture the same scene from different angles, the first image and the second image share an overlapping region with identical shooting content. By extension, since the first auxiliary image and the second auxiliary image are simply the images the two sensors capture with each other's capture information, they have the same overlapping regions as the first and second images. For convenience, the overlapping region in the first image and the first auxiliary image captured by the first image sensor 10A is hereinafter called the "first overlapping region", and the overlapping region in the second image and the second auxiliary image captured by the second image sensor 10B is called the "second overlapping region".
In image stitching, the overlapping region of the two images determines the quality of the result. To ensure a naturally continuous stitch, the processor 20 fuses the first image with the first auxiliary image (step S206A) and the second image with the second auxiliary image (step S206B), obtaining fusion results corresponding to the first overlapping region and to the second overlapping region respectively, and then generates a stitched image from the first image, the fusion results and the second image (step S208). In other words, in the subsequent stitching, the parts corresponding to the original first and second overlapping regions are replaced by the fused results. Because the fusion results are based on the capture information detected by both image sensors, obvious seams and unnatural colour bands in the stitched image are avoided.
Specifically, from the viewpoint of the first image sensor 10A, the processor 20 fuses the first overlapping region of the first image with the first overlapping region of the first auxiliary image; from the viewpoint of the second image sensor 10B, it fuses the second overlapping region of the second image with the second overlapping region of the second auxiliary image. Here, the first overlapping region includes a first overlap boundary line and a first stitching line, and the second overlapping region includes a second overlap boundary line and a second stitching line; the first stitching line in the first image and the second stitching line in the second image become the seam along which the two images are joined. For the first image, the processor 20 replaces the region between the first overlap boundary line and the first stitching line with the fusion result, hereinafter called the "first fused overlapping region"; for the second image, it replaces the region between the second overlap boundary line and the second stitching line with the fusion result, hereinafter called the "second fused overlapping region". The processor 20 generates the stitched image from the first image, the first fused overlapping region, the second fused overlapping region and the second image. In this example, the first fused overlapping region and the second fused overlapping region are assumed to be equal in area; that is, half of the stitched overlapping region is based on the first image and the other half on the second image. This is for illustration only and does not limit the invention.
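As a minimal sketch, the composition of the stitched output from the first image, the fused overlap and the second image can be shown on a single scan line. The fixed-width overlap is a simplification of the boundary-line geometry described above, and all names, including the caller-supplied `fuse` helper standing in for the fusion of steps S206A/S206B, are illustrative.

```python
def compose_stitched_row(first_row, second_row, overlap, fuse):
    """Compose one output scan line, assuming the last `overlap` samples
    of `first_row` and the first `overlap` samples of `second_row` show
    the same scene content; `fuse` blends the two overlap strips."""
    body_first = first_row[:-overlap]    # part unique to the first image
    body_second = second_row[overlap:]   # part unique to the second image
    fused = fuse(first_row[-overlap:], second_row[:overlap])
    return body_first + fused + body_second
```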
Specifically, Fig. 3, which shows the second image according to an embodiment of the invention, is used below to illustrate how the second fused overlapping region is produced; the first fused overlapping region can be produced analogously.
Referring to Fig. 3, the second image Img2 captured by the second image sensor 10B includes a second overlap boundary line LO, a second stitching line LS and a second image boundary line LB. The region between the second overlap boundary line LO and the second image boundary line LB is the second overlapping region, and the region between the second stitching line LS and the second image boundary line LB is the splicing region, which is used only for stitching and is not presented in the final stitched image.
Here, the region of the second image Img2 between the second overlap boundary line LO and the second stitching line LS is replaced by the second fused overlapping region. The processor 20 fuses the corresponding areas of the original second image Img2 and the second auxiliary image (not shown) to generate the second fused overlapping region. As an example, suppose pixel P is one of the pixels in the second fused overlapping region. The processor 20 computes the distance dO from pixel P to the second overlap boundary line LO and the distance dS from pixel P to the second stitching line LS, to derive a second weight ratio, and then uses this ratio to compute a weighted sum of the pixel value corresponding to P in the second auxiliary image and the pixel value corresponding to P in the second image, which can be expressed as:
p_{x,y,O} = f_A(T(dO), T(dS)) · p_{x,y,A} + f_B(T(dO), T(dS)) · p_{x,y,B}
where p_{x,y,O} is the pixel value at coordinates (x, y) in the second fused overlapping region, p_{x,y,A} is the pixel value at (x, y) in the second auxiliary image captured with the first capture information of the first image sensor 10A, p_{x,y,B} is the pixel value at (x, y) in the second image captured with the second image sensor 10B's own second capture information, T is a coordinate conversion function between the positions of the image acquisition device, and f_A and f_B are any functions satisfying f_A(x, y) + f_B(x, y) = 1. When pixel P lies on the second overlap boundary line LO (that is, dO = 0), it is at the position of the second fused overlapping region farthest from the first image, so its value is the original pixel value of the second image captured with the second capture information (for example, p_{x,y,O} = p_{x,y,B}). Conversely, when pixel P lies on the second stitching line LS (that is, dS = 0), its value takes into account both the second image and the second auxiliary image, captured with the second and first capture information respectively.
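The weighted sum above can be sketched for a single pixel. The linear weight below is an assumed choice of f_A and f_B (the patent leaves them open, requiring only f_A + f_B = 1), picked so that the two boundary conditions in the text hold: on the overlap boundary (dO = 0) the sensor's own pixel survives unchanged, and on the stitching line (dS = 0) both images contribute equally. The coordinate conversion T is omitted for simplicity.

```python
def blend_pixel(p_aux, p_own, d_overlap, d_seam):
    """Distance-weighted fusion of one overlap pixel:
    p_O = f_A * p_A + f_B * p_B with f_A + f_B = 1.

    f_A = d_overlap / (2 * (d_overlap + d_seam)) is an assumed weight:
    it is 0 on the overlap boundary and 1/2 on the stitching line."""
    total = d_overlap + d_seam
    if total == 0:
        return (p_aux + p_own) / 2.0
    f_a = d_overlap / (2.0 * total)   # weight of the auxiliary image
    return f_a * p_aux + (1.0 - f_a) * p_own
```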
For convenience of illustration, Fig. 4 is a functional flow chart of the image stitching method of the image acquisition device according to an embodiment of the invention, integrating the overall flow described above.
Referring to Fig. 4, the first image sensor 10A first detects the shooting scene to generate first capture information PI1, and the second image sensor 10B detects the scene to generate second capture information PI2. The first image sensor 10A transmits the first capture information PI1 to the second image sensor 10B, and the second image sensor 10B transmits the second capture information PI2 to the first image sensor 10A.
Next, the first image sensor 10A captures an image of the scene according to the first capture information PI1 to generate a first image Img1, and captures an image of the scene according to the second capture information PI2 to generate a first auxiliary image Img12. The processor 20 performs image fusion processing IBP on the first image Img1 and the first auxiliary image Img12. Likewise, the second image sensor 10B captures an image of the scene according to the second capture information PI2 to generate a second image Img2, and captures an image of the scene according to the first capture information PI1 to generate a second auxiliary image Img21. The processor 20 performs image fusion processing IBP on the second image Img2 and the second auxiliary image Img21.
The processor 20 then performs image stitching processing SP on the first image Img1 and the second image Img2 together with the fusion results, to generate a stitched image Img'. For details of the above process, refer to the description of the previous embodiment, which is not repeated here.
The above embodiments can be extended to image acquisition devices with three or more image sensors. When all the image sensors of the device are arranged collinearly, a stitched image can be produced by following the flow of Fig. 2 for the overlapping region of the images captured by every two adjacent sensors. On the other hand, when the image sensors are not collinear (as in devices on the market that capture 360-degree panoramas), the stitching must further consider the common overlapping region captured by three or more image sensors.
Specifically, Fig. 5 is a schematic diagram of an image acquisition device according to another embodiment of the invention.
Referring to Fig. 5, assume that the image acquisition device 100' includes a first image sensor 10A, a second image sensor 10B and a third image sensor 10C, all coupled to one another; that is, the device 100' can be regarded as the image acquisition device 100 with an additional third image sensor 10C, located between the first image sensor 10A and the second image sensor 10B but not on the line connecting them. The third image sensor 10C is likewise controlled by the processor 20. For convenience, the positions of the first image sensor 10A, the second image sensor 10B and the third image sensor 10C are described below as "left", "right" and "centre" respectively.
In this embodiment, the overlapping region of the images captured by the first image sensor 10A and the second image sensor 10B can be stitched following the flow of Fig. 2. From its own viewpoint, the third image sensor 10C likewise first detects the shooting scene to generate third capture information, and also captures images of the scene according to the first capture information received from the first image sensor 10A and the second capture information received from the second image sensor 10B (hereinafter the "left auxiliary image" and the "right auxiliary image"). Here, the third image, the left auxiliary image and the right auxiliary image all share an overlapping region with the first image (hereinafter the "left overlapping region") and an overlapping region with the second image (hereinafter the "right overlapping region"), and they share an overlapping region with the first image and the second image simultaneously (hereinafter the "common overlapping region"). The fusion of these overlapping regions is illustrated below with the schematic diagram of overlapping regions of Fig. 6.
Fig. 6 is please referred to, region OA is a part of third image, and wherein LOL is and first with left covered region The left overlapping region of image overlapping, LOR are the right overlapping region Chong Die with the second image, left overlapping with right covered region The overlapping region of region and right overlapping region is the common overlapping region domain of the first image, the second image and third image.This Outside, for 360 degree of spherical space, the splicing regions between overlapping region may, for example, be 10 degree of range.Processor 20 Left overlapping region will be merged with left redundant image with third image, merged with third image with right redundant image right overlapping Region, and common overlapping region domain, fused pixel are merged with left redundant image, right redundant image with third image simultaneously The pixel value of P ' can be indicated with following equations sequences:
Here p_{x,y,O} is the pixel value of the output pixel at coordinates (x, y); p_{x,y,L} is the pixel value at (x, y) in the left auxiliary image captured using the first imaging information; p_{x,y,R} is the pixel value at (x, y) in the right auxiliary image captured using the second imaging information; and p_{x,y,C} is the pixel value at (x, y) in the third image captured using the sensor's own third imaging information. T is the coordinate conversion function between the Cartesian coordinate system and the 360-degree spherical space (360° spherical space).
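The patent does not publish T's exact definition. For an equirectangular image, a conversion of this kind might be sketched as follows — the mapping below (pixel coordinates to longitude/latitude on the 360-degree sphere) is an illustrative assumption, not the patent's verbatim function:

```python
import math

def to_spherical(x, y, width, height):
    """One plausible form of T: map pixel coordinates (x, y) of an
    equirectangular image of size width x height to (longitude,
    latitude) on the 360-degree sphere, in radians."""
    lon = (x / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (y / height) * math.pi   # latitude in [-pi/2, pi/2]
    return lon, lat

def to_cartesian(lon, lat, width, height):
    """Inverse mapping from the sphere back to pixel coordinates."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y
```

The two functions are exact inverses, so a pixel round-trips to itself.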
In the present embodiment, the processor 20 may, for example, generate the pixel value of the pixel P′ with the following equation sequence:
Here OR is the right overlapping region, OL is the left overlapping region, and the region belonging to both OR and OL is the common overlapping region; r is the distance between the pixel P′ and the center point of the corresponding overlapping region, and Γ_R, Γ_L, and Γ_C are the distances between the pixel P′ and the contour of the right overlapping region, the contour of the left overlapping region, and the contour of the common overlapping region, respectively.
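The equation sequence itself is published only as an image in the original document. Using the quantities defined above, one distance-weighted blend consistent with that description might take the following piecewise form — a sketch under the assumption that a pixel closer to an overlap contour takes more of the corresponding auxiliary image, not the patent's verbatim formula:

```latex
p'_{x,y} =
\begin{cases}
\dfrac{\Gamma_L\, p_{x,y,C} + r\, p_{x,y,L}}{\Gamma_L + r}, &
  T(x,y) \in OL \setminus OR, \\[8pt]
\dfrac{\Gamma_R\, p_{x,y,C} + r\, p_{x,y,R}}{\Gamma_R + r}, &
  T(x,y) \in OR \setminus OL, \\[8pt]
\dfrac{\Gamma_C\, p_{x,y,C} + r\,\bigl(p_{x,y,L} + p_{x,y,R}\bigr)/2}{\Gamma_C + r}, &
  T(x,y) \in OL \cap OR.
\end{cases}
```

In each branch the weights sum to one, so the blend stays within the range of the input pixel values.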
After the image fusion of all the overlapping regions is completed, the processor 20 performs image stitching using the first image, the first fused overlapping region, the second image, the second fused overlapping region, the third image, and the fusion results of the right overlapping region, the left overlapping region, and the common overlapping region in the third image, so as to generate the stitched image.
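As a concrete illustration of the fusion step, the sketch below blends a strip of the sensor's own image with an auxiliary image using a per-pixel weight map, in the manner described for the left and right overlapping regions; the linear ramp standing in for the r/Γ distance weights is an assumption for illustration, not the patent's exact weighting:

```python
import numpy as np

def fuse_overlap(own_img, aux_img, weight_map):
    """Blend the sensor's own pixels with auxiliary-image pixels.

    weight_map[y, x] in [0, 1]: 0.0 keeps the sensor's own pixel,
    1.0 takes the auxiliary pixel captured with the neighbouring
    sensor's imaging information.
    """
    return (1.0 - weight_map) * own_img + weight_map * aux_img

def ramp_weight(height, width):
    """A linear ramp across the overlap strip, standing in for the
    distance-based weights r / (r + Gamma) described in the text."""
    return np.tile(np.linspace(0.0, 1.0, width), (height, 1))
```

For a 1×3 strip of constant value 10 (own image) blended with constant value 20 (auxiliary image), the ramp yields 10 at one contour, 20 at the other, and the midpoint 15 in between.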
The above concept can also be realized with an image acquisition device having a single image sensor. Specifically, Fig. 7A is a schematic diagram of an image acquisition device according to another embodiment of the invention.
Referring to Fig. 7A, the image acquisition device 700 includes an image sensor 710 and a processor 720. In the present embodiment, the functions and structures of the image sensor 710 and the processor 720 are equivalent to those of the image sensors 10A/10B and the processor 20 of the image acquisition device 100 in Fig. 1; for details, please refer to the paragraphs related to Fig. 1, which are not repeated here.
Fig. 7B is a flowchart of an image stitching method of an image acquisition device according to an embodiment of the invention, and the flow of Fig. 7B is applicable to the image acquisition device 700 of Fig. 7A.
Referring to Fig. 7A and Fig. 7B, the processor 720 of the image acquisition device 700 detects the photographed scene at a first viewing angle using the image sensor 710 to generate first imaging information corresponding to the first viewing angle (step S702), and obtains an image of the photographed scene at the first viewing angle using the image sensor 710 to generate a first image (step S704). Next, the processor 720 detects the photographed scene at a second viewing angle using the image sensor 710 to generate second imaging information corresponding to the second viewing angle (step S706), and obtains images of the photographed scene using the image sensor 710 according to the second imaging information and the first imaging information, respectively, to generate a second image and an auxiliary image (step S708). In other words, the second image of the photographed scene obtained by the image sensor 710 at the second viewing angle follows the same concept as the second image obtained by the second image sensor 10B in Fig. 1; the only difference is that the image acquisition device 700 of the present embodiment must be moved to the position of the second viewing angle before obtaining the second image.
Since the image sensor 710 itself obtains images of the same photographed scene at different viewing angles, the first image and the second image have an overlapping region with identical shot content. By extension, since the auxiliary image differs from the second image only in being obtained with different imaging information, the auxiliary image likewise has the same overlapping region with the first image and the second image. The overlapping region in the first image is referred to below as the "first overlapping region", and the overlapping region in the second image and the auxiliary image is referred to as the "second overlapping region".
In the present embodiment, the processor 720 merges the second image and the auxiliary image to generate a fusion result (step S710). The difference from the previous embodiment is that the first overlapping region is discarded and directly replaced by the fusion result. The weight proportion for merging the second image and the auxiliary image may, for example, be the ratio of each pixel's distances to the two overlap boundary lines of the overlapping region, although the invention is not limited thereto. Afterwards, the processor 720 generates a stitched image according to the first image, the fusion result, and the second image (step S712).
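Steps S702 to S712 can be sketched as follows; the boundary-distance weight proportion is reduced to a linear ramp across the overlap width, and the column-wise layout of the overlap is an assumption for illustration:

```python
import numpy as np

def blend_by_boundary_distance(strip_a, strip_b):
    """Weight each pixel by the ratio of its distances to the two
    overlap boundary lines (here reduced to a horizontal ramp)."""
    h, w = strip_a.shape[:2]
    alpha = np.linspace(0.0, 1.0, w).reshape(1, w)
    if strip_a.ndim == 3:                       # colour images
        alpha = alpha[..., np.newaxis]
    return (1.0 - alpha) * strip_a + alpha * strip_b

def stitch_single_sensor(first_img, second_img, aux_img, overlap):
    """S710-S712: discard the first overlapping region and replace it
    with the fusion of the auxiliary image (captured with the first
    imaging information) and the second image, then concatenate."""
    fused = blend_by_boundary_distance(aux_img[:, :overlap],
                                       second_img[:, :overlap])
    return np.hstack([first_img[:, :-overlap], fused,
                      second_img[:, overlap:]])
```

With constant 2×4 test images (first = 10, auxiliary = 20, second = 30) and an overlap of 2 columns, the stitched row reads 10, 10, 20, 30, 30, 30: the fused strip transitions from the auxiliary image to the second image.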
In conclusion image acquiring device and its image split-joint method proposed by the invention, are passed when using two images Sensor itself distinguishes the image of detected camera shooting acquisition of information photographed scene, and utilizes camera shooting information detected by other side In addition obtain photographed scene image after, it will for the image for being intended to be spliced overlapping region with different camera shooting information Accessed image is merged.In this way, meet the image of true photographed scene information in addition to can produce, also can avoid There is apparent jointing line or unnatural colour band, in stitching image to promote the quality of stitching image.In addition, of the invention Also it can have the image acquiring device of single a imaging sensor and three images above sensors to implement, to enhance The applicability of the present invention in practical applications.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Any person skilled in the art may make some changes and refinements without departing from the spirit and scope of the invention; therefore, the scope of protection of the invention shall be defined by the appended claims.

Claims (10)

1. An image stitching method of an image acquisition device, characterized in that it is adapted to an image acquisition device including a first image sensor and a second image sensor, the method comprising the following steps:
detecting a photographed scene using the first image sensor and the second image sensor respectively, to generate first imaging information corresponding to the first image sensor and second imaging information corresponding to the second image sensor;
obtaining, by the first image sensor, images of the photographed scene according to the first imaging information and the second imaging information respectively, to generate a first image and a first auxiliary image;
obtaining, by the second image sensor, images of the photographed scene according to the second imaging information and the first imaging information respectively, to generate a second image and a second auxiliary image, wherein the first image and the first auxiliary image have a first overlapping region, the second image and the second auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region; and
merging the first image and the first auxiliary image, and merging the second image and the second auxiliary image, so as to generate a stitched image accordingly.
2. The method according to claim 1, characterized in that the step of merging the first image and the first auxiliary image, and merging the second image and the second auxiliary image, so as to generate the stitched image accordingly, comprises:
merging the first overlapping region of the first image and the first overlapping region of the first auxiliary image to generate a first fused overlapping region, and merging the second overlapping region of the second image and the second overlapping region of the second auxiliary image to generate a second fused overlapping region; and
generating the stitched image according to the first image, the first fused overlapping region, the second fused overlapping region, and the second image.
3. The method according to claim 2, characterized in that the first overlapping region includes a first overlap boundary line and a first stitching line, the first fused overlapping region corresponds to the region between the first overlap boundary line and the first stitching line, the second overlapping region includes a second overlap boundary line and a second stitching line, and the second fused overlapping region corresponds to the region between the second overlap boundary line and the second stitching line.
4. The method according to claim 3, characterized in that:
the generating of each first pixel in the first fused overlapping region includes:
calculating the distances of the first pixel relative to the first overlap boundary line and the first stitching line respectively, to generate a first weight proportion; and
calculating, according to the first weight proportion, the weighted sum of the pixel value corresponding to the first pixel in the first image and the pixel value corresponding to the first pixel in the first auxiliary image, so as to generate the pixel value of the first pixel accordingly; and
the generating of each second pixel in the second fused overlapping region includes:
calculating the distances of the second pixel relative to the second overlap boundary line and the second stitching line respectively, to generate a second weight proportion; and
calculating, according to the second weight proportion, the weighted sum of the pixel value corresponding to the second pixel in the second image and the pixel value corresponding to the second pixel in the second auxiliary image, so as to generate the pixel value of the second pixel accordingly.
5. The method according to claim 1, characterized in that the image acquisition device further includes a third image sensor, and the method further comprises:
detecting the photographed scene using the third image sensor, to generate third imaging information corresponding to the third image sensor;
obtaining, by the third image sensor, images of the photographed scene according to the third imaging information, the first imaging information, and the second imaging information respectively, to generate a third image, a left auxiliary image, and a right auxiliary image, wherein the third image, the left auxiliary image, and the right auxiliary image all have a left overlapping region associated with the first image, a right overlapping region associated with the second image, and a common overlapping region associated with both the first image and the second image; and
merging the third image and the left auxiliary image in the left overlapping region, merging the third image and the right auxiliary image in the right overlapping region, and merging the third image, the left auxiliary image, and the right auxiliary image in the common overlapping region, to generate a fusion result associated with the third image.
6. The method according to claim 5, characterized in that the method of generating the stitched image according to the first image, the first fused overlapping region, the second fused overlapping region, and the second image further comprises:
generating the stitched image using the first image, the first fused overlapping region, the second fused overlapping region, the second image, the third image, and the fusion result associated with the third image.
7. An image acquisition device, characterized by comprising:
a first image sensor, configured to obtain images;
a second image sensor, coupled to the first image sensor and configured to obtain images; and
a processor, coupled to the first image sensor and the second image sensor, and configured to execute the following steps:
detecting a photographed scene using the first image sensor and the second image sensor respectively, to generate first imaging information corresponding to the first image sensor and second imaging information corresponding to the second image sensor;
obtaining, by the first image sensor, images of the photographed scene according to the first imaging information and the second imaging information respectively, to generate a first image and a first auxiliary image;
obtaining, by the second image sensor, images of the photographed scene according to the second imaging information and the first imaging information respectively, to generate a second image and a second auxiliary image, wherein the first image and the first auxiliary image have a first overlapping region, the second image and the second auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region; and
merging the first image and the first auxiliary image, and merging the second image and the second auxiliary image, so as to generate a stitched image accordingly.
8. The image acquisition device according to claim 7, characterized in that the image acquisition device further includes a third image sensor, the third image sensor is coupled to the first image sensor, the second image sensor, and the processor, and the processor is further configured to execute the following steps:
detecting the photographed scene using the third image sensor, to generate third imaging information corresponding to the third image sensor;
obtaining, by the third image sensor, images of the photographed scene according to the third imaging information, the first imaging information, and the second imaging information respectively, to generate a third image, a left auxiliary image, and a right auxiliary image, wherein the third image, the left auxiliary image, and the right auxiliary image all have a left overlapping region associated with the first image, a right overlapping region associated with the second image, and a common overlapping region associated with both the first image and the second image; and
merging the third image and the left auxiliary image in the left overlapping region, merging the third image and the right auxiliary image in the right overlapping region, and merging the third image, the left auxiliary image, and the right auxiliary image in the common overlapping region, to generate a fusion result associated with the third image.
9. An image stitching method of an image acquisition device, characterized in that it is adapted to an image acquisition device including a single image sensor, the method comprising the following steps:
detecting a photographed scene at a first viewing angle using the image sensor, to generate first imaging information corresponding to the first viewing angle, and obtaining an image of the photographed scene at the first viewing angle using the image sensor, to generate a first image;
detecting the photographed scene at a second viewing angle using the image sensor, to generate second imaging information corresponding to the second viewing angle, and obtaining images of the photographed scene using the image sensor according to the second imaging information and the first imaging information respectively, to generate a second image and an auxiliary image, wherein the first image has a first overlapping region, the second image and the auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region;
merging the second image and the auxiliary image, to generate a fusion result; and
generating a stitched image according to the first image, the fusion result, and the second image.
10. An image acquisition device, characterized by comprising:
a single image sensor, configured to obtain images; and
a processor, coupled to the image sensor and configured to execute the following steps:
detecting a photographed scene at a first viewing angle using the image sensor, to generate first imaging information corresponding to the first viewing angle, and obtaining an image of the photographed scene at the first viewing angle using the image sensor, to generate a first image;
detecting the photographed scene at a second viewing angle using the image sensor, to generate second imaging information corresponding to the second viewing angle, and obtaining images of the photographed scene using the image sensor according to the second imaging information and the first imaging information respectively, to generate a second image and an auxiliary image, wherein the first image has a first overlapping region, the second image and the auxiliary image have a second overlapping region, and the first overlapping region corresponds to the second overlapping region;
merging the second image and the auxiliary image, to generate a fusion result; and
generating a stitched image according to the first image, the fusion result, and the second image.
CN201710526703.1A 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof Active CN109214983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526703.1A CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710526703.1A CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Publications (2)

Publication Number Publication Date
CN109214983A true CN109214983A (en) 2019-01-15
CN109214983B CN109214983B (en) 2022-12-13

Family

ID=64976197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526703.1A Active CN109214983B (en) 2017-06-30 2017-06-30 Image acquisition device and image splicing method thereof

Country Status (1)

Country Link
CN (1) CN109214983B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311359A (en) * 2020-01-21 2020-06-19 杭州微洱网络科技有限公司 Jigsaw method for realizing human shape display effect based on e-commerce image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081081A1 (en) * 2005-10-07 2007-04-12 Cheng Brett A Automated multi-frame image capture for panorama stitching using motion sensor
US20070132863A1 (en) * 2005-12-14 2007-06-14 Sony Corporation Image taking apparatus, image processing method, and image processing program
CN102142138A (en) * 2011-03-23 2011-08-03 深圳市汉华安道科技有限责任公司 Image processing method and subsystem in vehicle assisted system
CN102859987A (en) * 2010-04-05 2013-01-02 高通股份有限公司 Combining data from multiple image sensors
US20130141523A1 (en) * 2011-12-02 2013-06-06 Stealth HD Corp. Apparatus and Method for Panoramic Video Hosting
CN103366351A (en) * 2012-03-29 2013-10-23 华晶科技股份有限公司 Method for generating panoramic image and image acquisition device thereof
US20140111607A1 (en) * 2011-05-27 2014-04-24 Nokia Corporation Image Stitching
WO2016165016A1 (en) * 2015-04-14 2016-10-20 Magor Communications Corporation View synthesis-panorama

Also Published As

Publication number Publication date
CN109214983B (en) 2022-12-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant