CN105611181A - Multi-frame photographed image synthesizer and method

Info

Publication number: CN105611181A
Application number: CN201610192808.3A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 戴向东 (Dai Xiangdong)
Applicant and current assignee: Nubia Technology Co Ltd
Legal status: Pending

Classifications

    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/6845: Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time, by combination of a plurality of images sequentially taken


Abstract

The invention discloses a multi-frame photographed image synthesizer and method. The synthesizer comprises an image acquisition module, a matched pixel point search module, an alignment model calculation module, an image alignment module and an image synthesis module. The image acquisition module is used for acquiring multiple captured frames; the matched pixel point search module is used for searching for mutually matched pixel points in the multiple frames; the alignment model calculation module is used for calculating, according to the matched pixel points in the multiple frames, an alignment model for aligning the multiple frames; the image alignment module is used for aligning the multiple frames with the alignment model; and the image synthesis module is used for synthesizing the aligned multiple frames. With the multi-frame photographed image synthesizer, the multiple frames can be aligned without relying on equipment such as a fixed bracket, and the pixel error after the multiple frames are synthesized can be reduced.

Description

Multi-frame photographed image synthesizer and method
Technical field
The present invention relates to the technical field of image processing, and in particular to a multi-frame photographed image synthesizer and method.
Background technology
With the popularization of mobile photographing devices, many different shooting techniques have appeared in the prior art. Among them, multi-frame shooting and synthesis is a shooting technique that is more complex than single-frame shooting; it is applied in panoramic shooting, HDR synthesis and long-exposure electronic aperture, and the difficulty of these techniques lies in how to synthesize the images.
Generally speaking, a handheld mobile device is easily shaken, and a fixing device is needed to fix the mobile device during shooting, so as to prevent shake while the multiple frames are being captured, which would otherwise cause pixel misalignment when the images are synthesized. This requires the user to carry a fixing device, such as a heavy tripod, when shooting, which affects the convenience and experience of shooting.
Therefore, how to reduce the pixel deviation that arises when multiple frames captured by a mobile device are synthesized, without relying on a fixing device, becomes a technical problem to be solved.
Summary of the invention
The main purpose of the present invention is to propose a multi-frame photographed image synthesizer and method, aiming to solve the pixel deviation that occurs when multiple captured frames are synthesized.
To achieve the above object, the present invention provides a multi-frame photographed image synthesizer, comprising: an image acquisition module, configured to acquire multiple captured frames; a matched pixel point search module, configured to search for mutually matched pixel points in the multiple frames; an alignment model calculation module, configured to calculate, according to the mutually matched pixel points in the multiple frames, an alignment model for aligning the multiple frames; an image alignment module, configured to align the multiple frames using the alignment model; and an image synthesis module, configured to synthesize the aligned multiple frames.
Optionally, in the foregoing device, the alignment model calculation module calculates the scaling and rotation parameters and displacement parameters reflected by the mutually matched pixel points, and establishes the alignment model according to the obtained scaling and rotation parameters and displacement parameters.
Optionally, in the foregoing device, the alignment model calculation module calculates the scaling and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matched pixel points, and establishes the alignment model according to the obtained scaling and rotation parameters, displacement parameters and deformation parameters in the horizontal and vertical directions.
Optionally, in the foregoing device, the matched pixel point search module downscales the multiple frames, searches for mutually matched pixel points in the downscaled frames, and determines the mutually matched pixel points in the frames before downscaling according to the mutually matched pixel points in the downscaled frames.
Optionally, in the foregoing device, the image synthesis module selects a corresponding synthesis strategy to synthesize the multiple frames according to the current image application scene.
To achieve the above object, the present invention also provides a multi-frame photographed image synthesis method, comprising: acquiring multiple captured frames; searching for mutually matched pixel points in the multiple frames; calculating, according to the mutually matched pixel points in the multiple frames, an alignment model for aligning the multiple frames; aligning the multiple frames using the alignment model; and synthesizing the aligned multiple frames.
Optionally, in the foregoing method, calculating the alignment model for aligning the multiple frames specifically comprises: calculating the scaling and rotation parameters and displacement parameters reflected by the mutually matched pixel points, and establishing the alignment model according to the obtained scaling and rotation parameters and displacement parameters.
Optionally, in the foregoing method, calculating the alignment model for aligning the multiple frames specifically comprises: calculating the scaling and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matched pixel points, and establishing the alignment model according to the obtained scaling and rotation parameters, displacement parameters and deformation parameters in the horizontal and vertical directions.
Optionally, in the foregoing method, searching for the mutually matched pixel points in the multiple frames specifically comprises: downscaling the multiple frames and searching for the mutually matched pixel points in the downscaled frames; and determining the mutually matched pixel points in the frames before downscaling according to the mutually matched pixel points in the downscaled frames.
Optionally, in the foregoing method, synthesizing the aligned multiple frames specifically comprises: selecting, according to the current image application scene, a corresponding synthesis strategy to synthesize the multiple frames.
According to the above technical schemes, the multi-frame photographed image synthesizer and method of the present invention have at least the following advantages:
According to the technical scheme of the present invention, for the multiple frames obtained by shooting, the mutually matched pixel points between the frames are first found; based on the mutually matched pixel points, a model for aligning the frames can be calculated, and the alignment model is then used to align the images. According to the technical scheme of the present invention, the multiple frames can be aligned without relying on equipment such as a fixed bracket, which reduces the pixel deviation after the frames are synthesized.
Brief description of the drawings
Fig. 1 is a schematic diagram of an optional hardware structure of a mobile terminal for realizing the embodiments of the present invention;
Fig. 2 is an electrical structure block diagram of a mobile terminal with a shooting function for realizing the embodiments of the present invention;
Fig. 3 is a block diagram of a multi-frame photographed image synthesizer according to an embodiment of the present invention;
Fig. 3A is a schematic diagram of a multi-frame photographed image synthesizer according to an embodiment of the present invention;
Fig. 3B is a workflow diagram of a multi-frame photographed image synthesizer according to an embodiment of the present invention;
Fig. 3C is a schematic diagram of a multi-frame photographed image synthesizer according to an embodiment of the present invention;
Fig. 4 is a flow chart of a multi-frame photographed image synthesis method according to an embodiment of the present invention;
Fig. 5 is a flow chart of a multi-frame photographed image synthesis method according to an embodiment of the present invention.
The realization, functional characteristics and advantages of the object of the present invention will be further described below with reference to the accompanying drawings and in connection with the embodiments.
Detailed description of the invention
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit the present invention.
A mobile terminal for realizing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
The mobile terminal can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, except for elements used in particular for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of an optional hardware structure of a mobile terminal that can realize the embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in more detail below.
The wireless communication unit 110 generally includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 112, a wireless Internet module 113 and a short-range communication module 114.
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal. The module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include WLAN (wireless local area network, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access) and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™ and the like.
The A/V input unit 120 is used for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode, and the processed image frames may be displayed on a display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or sent via the wireless communication unit 110, and two or more cameras 121 may be provided according to the structure of the mobile terminal. The microphone 122 may receive sound (audio data) via a microphone in an operating mode such as a telephone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the case of the telephone call mode, the processed audio (voice) data may be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and sending audio signals.
The user input unit 130 may generate key input data according to commands input by the user, so as to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance and the like caused by being touched), a jog wheel, a joystick and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM) and the like. In addition, the device having the identification module (referred to below as an "identification device") may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, electric power, etc.) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which electric power is provided from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or electric power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals in a visual, audible and/or tactile manner (e.g., audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a telephone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of a layer to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and the like. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. According to a particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display means); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect the touch input pressure as well as the touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a mode such as a call signal receiving mode, a call mode, a recording mode, a speech recognition mode or a broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., call signal reception sound, message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The memory 160 may store software programs of the processing and control operations executed by the controller 180 and the like, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various modes that are output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc and the like. Moreover, the mobile terminal 100 may cooperate with a network storage device that executes the storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and the like. The controller 180 can perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate electric power required to operate the various elements and components.
The various embodiments described herein can be implemented with a computer-readable medium using, for example, computer software, hardware or any combination thereof. For hardware implementation, the embodiments described herein can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases such embodiments can be implemented in the controller 180. For software implementation, embodiments such as procedures or functions can be implemented with separate software modules that allow at least one function or operation to be performed. The software code can be implemented by a software application (or program) written in any suitable programming language, and the software code can be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals is taken as an example. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 can be constructed to operate with wired and wireless communication systems that send data via frames or packets, as well as with satellite-based communication systems.
The electrical structure of a mobile terminal with a shooting function will now be described with reference to Fig. 2.
The photographing lens 2211 is composed of a plurality of optical lenses used for forming an image of the subject, and is a single-focus lens or a zoom lens. The photographing lens 2211 can be moved in the optical axis direction under the control of a lens driver 2221, and the lens driver 2221 controls the focal position of the photographing lens 2211 according to a control signal from a lens driving control circuit 2222; in the case of a zoom lens, the focal length can also be controlled. The lens driving control circuit 2222 carries out the driving control of the lens driver 2221 according to control commands from a microcomputer 2217.
An imaging element 2212 is disposed on the optical axis of the photographing lens 2211, near the position where the image of the subject is formed by the photographing lens 2211. The imaging element 2212 is used for imaging the subject and obtaining captured image data. Photodiodes constituting the individual pixels are arranged two-dimensionally in a matrix on the imaging element 2212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this photoelectric conversion current is charge-accumulated by a capacitor connected to each photodiode. RGB color filters in a Bayer arrangement are disposed on the front surface of each pixel.
The imaging element 2212 is connected to an imaging circuit 2213, which performs charge accumulation control and image signal readout control in the imaging element 2212, reduces the reset noise of the read-out image signal (analog image signal), performs waveform shaping, and then raises the gain and the like so as to obtain an appropriate signal level. The imaging circuit 2213 is connected to an A/D converter 2214, which performs analog-to-digital conversion of the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 2227.
The bus 2227 is a transfer path for transferring the various data read out or generated inside the camera. In addition to the above-mentioned A/D converter 2214, the bus 2227 is connected to an image processor 2215, a JPEG processor 2216, the microcomputer 2217, an SDRAM (synchronous dynamic random access memory) 2218, a memory interface (hereinafter referred to as memory I/F) 2219 and an LCD (liquid crystal display) driver 2220.
The image processor 2215 performs various kinds of image processing on the image data output from the imaging element 2212, such as OB subtraction processing, white balance adjustment, color matrix computation, gamma conversion, color difference signal processing, noise removal processing, simultaneous (demosaic) processing and edge processing. The JPEG processor 2216 compresses the image data read from the SDRAM 2218 according to the JPEG compression format when the image data is recorded on a recording medium 2225. In addition, the JPEG processor 2216 decompresses JPEG image data in order to reproduce and display images. During decompression, the file recorded on the recording medium 2225 is read, decompression processing is performed in the JPEG processor 2216, and the decompressed image data is temporarily stored in the SDRAM 2218 and displayed on an LCD 2226. In this embodiment the JPEG format is adopted as the image compression/decompression format, but the compression/decompression format is not limited to this; other compression/decompression formats such as MPEG, TIFF and H.264 can of course be adopted.
The microcomputer 2217 functions as the control unit of the whole camera and controls the various processing sequences of the camera in a unified manner. The microcomputer 2217 is connected to an operating unit 2223 and a flash memory 2224.
The operating unit 2223 includes, but is not limited to, physical buttons or virtual keys; these physical or virtual keys may be operational controls such as a power button, a photographing key, an edit key, a dynamic image button, a reproduce button, a menu button, a cross key, an OK button, a delete button, an enlarge button and various other input buttons and keys, and the operating unit detects the operational states of these controls.
The detection results are output to the microcomputer 2217. In addition, a touch panel is provided on the front surface of the LCD 2226 serving as the display; it detects the user's touch position and outputs the touch position to the microcomputer 2217. The microcomputer 2217 executes the various processing sequences corresponding to the user's operation according to the detection results of the operating positions from the operating unit 2223.
The flash memory 2224 stores the programs used for executing the various processing sequences of the microcomputer 2217, and the microcomputer 2217 carries out the overall control of the camera according to these programs. In addition, the flash memory 2224 stores the various adjustment values of the camera; the microcomputer 2217 reads out the adjustment values and controls the camera according to them. The SDRAM 2218 is an electrically rewritable volatile memory used for temporarily storing image data and the like. The SDRAM 2218 temporarily stores the image data output from the A/D converter 2214 and the image data processed in the image processor 2215, the JPEG processor 2216 and so on.
The memory interface 2219 is connected to the recording medium 2225 and controls the writing of image data and of the file headers attached to the image data to the recording medium 2225, as well as the reading of them from the recording medium 2225. The recording medium 2225 is, for example, a recording medium such as a memory card that can be freely attached to and detached from the camera body, but it is not limited to this and may also be a hard disk or the like built into the camera body.
The LCD driver 2220 is connected to the LCD 2226. Image data processed by the image processor 2215 is stored in the SDRAM 2218, and when display is needed, the image data stored in the SDRAM 2218 is read and displayed on the LCD 2226; alternatively, image data compressed by the JPEG processor 2216 is stored in the SDRAM 2218, and when display is needed, the JPEG processor 2216 reads the compressed image data from the SDRAM 2218, decompresses it, and the decompressed image data is displayed through the LCD 2226.
The LCD 2226 is arranged on the back of the camera body and performs image display. The LCD 2226 is an LCD, but it is not limited to this; various other display panels such as organic EL panels can also be adopted.
Based on the above hardware structure of the mobile terminal and the electrical structure block diagram, the embodiments of the method of the present invention are proposed.
As shown in Fig. 3, a first embodiment of the present invention proposes a multi-frame photographed image synthesizer, comprising:
An image acquisition module 310, configured to acquire the multiple captured frames. In this embodiment, the capture of the multiple frames is easily affected by factors such as random noise, exposure time, focusing, metering and illumination, so that differences in pixel brightness or color appear between the frames, and the images therefore need to be preprocessed. This includes: locking the shooting parameters during capture, so that the exposure time, focusing and metering of the sampled images are kept consistent; and, after the images are obtained, applying simple image filtering and contrast enhancement to them. These preprocessing measures can effectively reduce the influence of noise and blurred detail on the later image alignment.
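As a minimal illustration of this preprocessing step (not part of the patent itself), the following Python/OpenCV sketch assumes the captured frames are already available as BGR NumPy arrays and that exposure, focus and metering were locked by the capture pipeline; the function name, filter kernel and contrast settings are illustrative choices only.

```python
import cv2

def preprocess_frames(frames):
    """Apply simple filtering and contrast enhancement to each captured frame."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    processed = []
    for frame in frames:
        # simple image filtering to suppress random noise
        denoised = cv2.GaussianBlur(frame, (3, 3), 0)
        # contrast enhancement on the luminance channel only
        ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
        processed.append(cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR))
    return processed
```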
A matched pixel point search module 320, configured to search for the mutually matched pixel points in the multiple frames.
There are many image alignment algorithms, mainly divided into methods based on local features and methods based on global features. The typical local-feature method is to extract and match key feature points of the images, then use these key feature points to calculate the mapping matrix of the image spatial alignment model, and finally use the mapping matrix to carry out image alignment. The registration effect of this class of methods can generally meet the requirements of many scenes, such as illumination changes (synthesis of images with different exposures), large-scale image shifts (panoramic image stitching), and various complex scenes such as low-light images (with increased noise). However, the extraction and matching of image feature points are generally time-consuming, as in the SIFT and SURF feature point matching algorithms. The other class is search alignment methods based on global mutual matching, which can reduce the matching errors caused by random feature points, but for illumination changes and large-scale image movement they are slow and their effect is unstable.
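For reference only, the local-feature approach described above could be sketched as follows in Python/OpenCV; ORB is used here as a freely available stand-in for the SIFT/SURF detectors mentioned in the text, and all parameter values are assumptions rather than values from the patent.

```python
import cv2

def match_keypoints(img_a, img_b, max_matches=200):
    """Extract and match key feature points between two frames (local-feature approach)."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    pts_a = [kp_a[m.queryIdx].pt for m in matches]   # matched points in frame A
    pts_b = [kp_b[m.trainIdx].pt for m in matches]   # corresponding points in frame B
    return pts_a, pts_b
```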
The optical flow field is also a point-based matching algorithm. It analyzes the instantaneous velocity of the pixel motion of spatially moving objects observed on the imaging plane; it is a method that uses the change of pixels in the time domain of an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby calculating the motion information of objects between adjacent frames. The purpose of studying the optical flow field is to approximately obtain, from a sequence of pictures, the motion field that cannot be obtained directly. The motion field is in fact the motion of objects in the three-dimensional real world, while the optical flow field is the projection of the motion field onto the two-dimensional image plane (of the human eye or of the camera).
For a sequence of pictures, finding the movement velocity and direction of motion of each pixel in every image gives the optical flow field. As shown in Fig. 3A, the position of point A in frame T is (x1, y1); the same point A is then found again in frame T+1, and if its position is (x2, y2), the motion vector of point A can be determined as:
V = (x2, y2) - (x1, y1)
The key of the problem is how to find the position of point A in frame T+1. A Lucas-Kanade optical flow method is introduced here; its basic process is as follows:
The method assumes that the color of an object does not change greatly or significantly between the two consecutive frames. Based on this idea, the image constraint equation can be obtained; different optical flow algorithms solve the optical flow problem under different additional assumptions. Using partial derivatives with respect to the spatial and temporal coordinates, the image constraint equation can be written as
$$I(x, y, t) = I(x + dx, y + dy, t + dt)$$
where $I(x, y, t)$ is the pixel value of the image at position $(x, y)$ at time $t$.
Assuming the movement is small enough, applying Taylor's formula to the image constraint equation gives:
$$I(x + dx, y + dy, t + dt) = I(x, y, t) + \frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt + H.O.T.$$
where H.O.T. denotes the higher-order terms, which can be ignored when the movement is small enough. From this equation we obtain:
$$\frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt = 0$$
Let $V_x = dx/dt$ and $V_y = dy/dt$ denote the x-axis and y-axis components of the optical flow vector of $I(x, y, t)$, and let $I_x = \partial I/\partial x$, $I_y = \partial I/\partial y$ and $I_t = \partial I/\partial t$ denote the derivatives of the image at the point $(x, y, t)$ in the x, y and t directions. Then:
$$I_x V_x + I_y V_y = -I_t, \qquad \text{i.e.} \qquad \nabla I^T \cdot \vec{V} = -I_t$$
This equation contains two unknowns, so at least two independent equations are needed to solve it. The Lucas-Kanade optical flow method assumes that the motion of the pixel points in a spatial neighborhood is consistent: neighboring points of the scene project onto neighboring points of the image, and neighboring points have consistent velocities. This is the distinctive assumption of the Lucas-Kanade optical flow method, because the basic optical flow constraint provides only one equation, while the velocities in the x and y directions are two unknowns. By assuming that the points in the neighborhood of a feature point undergo similar motion, n equations can be combined to solve for the velocities in the x and y directions (n is the total number of points in the neighborhood of the feature point, including the feature point itself). The following system of equations is obtained:
$$\begin{bmatrix} I_{x1} & I_{y1} \\ I_{x2} & I_{y2} \\ \vdots & \vdots \\ I_{xn} & I_{yn} \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} -I_{t1} \\ -I_{t2} \\ \vdots \\ -I_{tn} \end{bmatrix}$$
To solve this overdetermined problem, the least squares method is adopted:
$$A\vec{V} = -b, \qquad \vec{V} = (A^T A)^{-1} A^T (-b)$$
which gives the optical flow vector of the neighborhood:
$$\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum I_{xi}^2 & \sum I_{xi} I_{yi} \\ \sum I_{xi} I_{yi} & \sum I_{yi}^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum I_{xi} I_{ti} \\ -\sum I_{yi} I_{ti} \end{bmatrix}$$
The small-motion assumption mentioned above breaks down when the target moves fast; fortunately, a multi-scale approach can solve this problem. After each frame is downscaled to different degrees, a Gaussian pyramid is established, with the smallest-scale picture at the top layer and the original image at the bottom layer. Then, starting from the top layer, the position of a pixel in the next frame is estimated, and the estimate is used as the initial position of the pixel in the layer below; the search proceeds downward along the pyramid, repeating the estimation, until the bottom of the pyramid is reached. In this way the search can quickly locate the motion direction and position of a pixel.
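The pyramidal (coarse-to-fine) Lucas-Kanade search described above is available in common libraries; the sketch below uses OpenCV's implementation, with the corner detector, window size and pyramid depth chosen as illustrative assumptions rather than values prescribed by the patent.

```python
import cv2

def sparse_flow_matches(frame_t, frame_t1):
    """Find mutually matched pixel points between frame T and frame T+1
    using pyramidal Lucas-Kanade optical flow."""
    gray_t = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    gray_t1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    # corners of frame T serve as the sparse points to track
    pts_t = cv2.goodFeaturesToTrack(gray_t, maxCorners=500,
                                    qualityLevel=0.01, minDistance=8)
    # maxLevel controls the depth of the Gaussian pyramid used for the search
    pts_t1, status, _err = cv2.calcOpticalFlowPyrLK(gray_t, gray_t1, pts_t, None,
                                                    winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1
    return pts_t.reshape(-1, 2)[ok], pts_t1.reshape(-1, 2)[ok]
```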
An alignment model calculation module 330, configured to calculate, according to the mutually matched pixel points in the multiple frames, an alignment model for aligning the multiple frames. In the technical scheme of this embodiment, because the motion vectors between the mutually matched pixel points reflect the motion between the multiple frames, the alignment model calculated from the mutually matched pixel points can eliminate the motion relationship between the frames, so that the multiple frames can be synthesized together with high quality. It is very important to select the correct image alignment model in the image alignment process; the models available in this embodiment are the affine transform model and the perspective transform model.
An image alignment module 340, configured to align the multiple frames using the alignment model. In this embodiment, the alignment model can be used to align the pixel points of the multiple frames.
An image synthesis module 350, configured to synthesize the aligned multiple frames. Fig. 3B is the principle flow chart of the technical scheme of this embodiment: after frame T and frame T+1 are preprocessed, sparse match points are computed with optical flow (sparse matching is feature matching, used to calculate the matched pixel points), the matched pixel points are used to calculate the alignment matrix, and image alignment is carried out. According to the technical scheme of this embodiment, for the multiple frames obtained by shooting, the mutually matched pixel points between the frames are first found; based on the mutually matched pixel points, a model for aligning the frames can be calculated, and the alignment model is then used to align the images. According to the technical scheme of the present invention, the multiple frames can be aligned without relying on equipment such as a fixed bracket, which reduces the pixel deviation after the frames are synthesized.
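Putting the modules of this embodiment together, a hedged end-to-end sketch (reusing the helper functions sketched above, with plain averaging standing in for whatever synthesis strategy the application scene actually requires) might look as follows; it is not the patent's reference implementation.

```python
import cv2
import numpy as np

def synthesize(frames):
    """Align every frame to the first one and average the aligned frames."""
    frames = preprocess_frames(frames)                    # preprocessing (see above)
    reference = frames[0]
    h, w = reference.shape[:2]
    aligned = [reference.astype(np.float32)]
    for frame in frames[1:]:
        pts_ref, pts_cur = sparse_flow_matches(reference, frame)   # matched pixel points
        matrix, _ = cv2.estimateAffine2D(pts_cur, pts_ref,
                                         method=cv2.RANSAC)        # alignment model
        aligned.append(cv2.warpAffine(frame, matrix, (w, h)).astype(np.float32))  # alignment
    return np.clip(np.mean(aligned, axis=0), 0, 255).astype(np.uint8)   # synthesis
```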
A second embodiment of the present invention proposes a multi-frame photographed image synthesizer, comprising:
An image acquisition module 310, which acquires the multiple captured frames.
A matched pixel point search module 320, which downscales the multiple frames, searches for mutually matched pixel points in the downscaled frames, and, according to the mutually matched pixel points in the downscaled frames, determines the mutually matched pixel points in the frames before downscaling. In this embodiment, as shown in Fig. 3C, in order to speed up image alignment, and on the premise of meeting the image precision requirement, the images are first downscaled and the coordinates of the sparse match points are found in the small-scale space; the sparse match point coordinates are then scale-transformed according to the scale transformation relationship, thereby obtaining the sparse match point coordinate mapping for the large-size images.
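A small sketch of this coarse-to-fine trick, under the assumption that the `sparse_flow_matches` helper from the earlier sketch is available and that a downscale factor of 0.25 is acceptable for the precision requirement (both are illustrative assumptions):

```python
import cv2

def matches_via_downscale(frame_t, frame_t1, scale=0.25):
    """Search for matched points on downscaled frames, then map the coordinates
    back to the original-size frames according to the scale transformation."""
    small_t = cv2.resize(frame_t, None, fx=scale, fy=scale)
    small_t1 = cv2.resize(frame_t1, None, fx=scale, fy=scale)
    pts_small_t, pts_small_t1 = sparse_flow_matches(small_t, small_t1)
    # small-scale coordinates -> large-size image coordinates
    return pts_small_t / scale, pts_small_t1 / scale
```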
An alignment model calculation module 330, which calculates the scaling and rotation parameters and displacement parameters reflected by the mutually matched pixel points, and establishes the alignment model according to the obtained scaling and rotation parameters and displacement parameters, specifically:
According to the preset formula
$$x' = a_{00}x + a_{01}y + a_{02}, \qquad y' = a_{10}x + a_{11}y + a_{12}$$
the coordinate values of a pixel point of the first frame of the multiple frames are substituted for x and y in the formula, and the coordinate values of the mutually matched pixel point of the second frame are substituted for x' and y', so as to calculate the scaling and rotation parameters a00, a01, a10, a11 and the displacement parameters a02, a12; the formula then serves as the alignment model for aligning the multiple frames. In this embodiment, the above formula is actually the affine transform model: any parallelogram in a plane can be mapped to another parallelogram by an affine transformation; the mapping operation of the image is carried out within the same spatial plane, and different transformation parameters deform it into different types of parallelograms. In this embodiment, when the affine transformation matrix model is used, the scaling and rotation of the image are controlled based on the scaling and rotation parameters, and the displacement of the image is controlled based on the displacement parameters.
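A minimal least-squares sketch of how the six affine parameters could be estimated from the matched pixel points is given below; it is illustrative only, and in practice a robust estimator (e.g., RANSAC, as provided by cv2.estimateAffine2D) would usually be preferred.

```python
import numpy as np

def fit_affine(pts_src, pts_dst):
    """Least-squares fit of a00, a01, a02, a10, a11, a12 from matched pairs
    (x, y) -> (x', y') with x' = a00*x + a01*y + a02 and y' = a10*x + a11*y + a12."""
    src = np.asarray(pts_src, dtype=np.float64)
    dst = np.asarray(pts_dst, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])                # rows of the form [x, y, 1]
    row0, _, _, _ = np.linalg.lstsq(A, dst[:, 0], rcond=None)   # a00, a01, a02
    row1, _, _, _ = np.linalg.lstsq(A, dst[:, 1], rcond=None)   # a10, a11, a12
    return np.vstack([row0, row1])                              # 2x3 alignment matrix
```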
In another embodiment, the alignment model calculation module 330 calculates the scaling and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matched pixel points, and establishes the alignment model according to the obtained scaling and rotation parameters, displacement parameters and deformation parameters in the horizontal and vertical directions, specifically:
According to the preset formula
$$x' = \frac{a_{00}x + a_{01}y + a_{02}}{a_{20}x + a_{21}y + 1}, \qquad y' = \frac{a_{10}x + a_{11}y + a_{12}}{a_{20}x + a_{21}y + 1}$$
the coordinate values of a pixel point of the first frame of the multiple frames are substituted for x and y in the formula, and the coordinate values of the mutually matched pixel point of the second frame are substituted for x' and y', so as to calculate the scaling and rotation parameters a00, a01, a10, a11, the displacement parameters a02, a12, and the deformation parameters a20, a21 in the horizontal and vertical directions; the formula then serves as the alignment model for aligning the images. In this embodiment, the above formula is actually the perspective (transmission) transform model, which is more flexible than the affine transform model: a perspective transformation can transform a rectangle into a trapezoid, and it describes the projection of one spatial plane onto another spatial plane; the affine transformation can be regarded as a special case of the perspective transformation. Considering a handheld mobile device such as a mobile phone, when multiple frames are captured continuously the jitter motion of the phone is generally not within the same plane, and in this case the perspective transform model can be selected. In this embodiment, when the perspective transformation matrix model is used, the scaling and rotation of the image are controlled based on the scaling and rotation parameters, the displacement of the image is controlled based on the displacement parameters, and the deformation of the image in the horizontal and vertical directions is controlled based on the horizontal and vertical deformation parameters.
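For the perspective (transmission) model, a robust estimate can likewise be sketched with OpenCV's homography fitter; the RANSAC reprojection threshold below is an assumed value, not one taken from the patent.

```python
import cv2
import numpy as np

def fit_perspective(pts_src, pts_dst):
    """Estimate the 3x3 perspective model; its last row carries the
    horizontal/vertical deformation parameters a20 and a21."""
    H, _inliers = cv2.findHomography(np.float32(pts_src), np.float32(pts_dst),
                                     cv2.RANSAC, 3.0)
    return H   # apply with cv2.warpPerspective(img, H, (w, h))
```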
An image alignment module 340, which aligns the multiple frames using the alignment model.
An image synthesis module 350, which selects a corresponding synthesis strategy to synthesize the multiple frames according to the current image application scene. In this embodiment, after the pixel points at every position of the multiple frames have been aligned, pixel misalignment will not occur in image synthesis. The synthesis strategies are generally different under different image processing application scenarios; for example, multiple exposure requires semi-transparent fusion, and multi-frame noise reduction requires weighted averaging.
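The two synthesis strategies named above could be sketched as follows; the uniform transparency and unit weights are illustrative defaults, not values prescribed by the patent.

```python
import numpy as np

def blend_multi_exposure(aligned_frames, alphas=None):
    """Semi-transparent fusion for a multiple-exposure effect."""
    alphas = alphas or [1.0 / len(aligned_frames)] * len(aligned_frames)
    out = np.zeros_like(aligned_frames[0], dtype=np.float32)
    for a, frame in zip(alphas, aligned_frames):
        out += a * frame.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

def denoise_weighted_average(aligned_frames, weights=None):
    """Weighted averaging for multi-frame noise reduction."""
    weights = weights or [1.0] * len(aligned_frames)
    stack = np.stack([f.astype(np.float32) for f in aligned_frames])
    w = np.asarray(weights, dtype=np.float32).reshape((-1,) + (1,) * (stack.ndim - 1))
    return np.clip((stack * w).sum(axis=0) / w.sum(), 0, 255).astype(np.uint8)
```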
As shown in Fig. 4, a third embodiment of the present invention proposes a multi-frame photographed image synthesis method, comprising:
Step 410: acquire the multiple captured frames. In this embodiment, the capture of the multiple frames is easily affected by factors such as random noise, exposure time, focusing, metering and illumination, so that differences in pixel brightness or color appear between the frames, and the images therefore need to be preprocessed. This includes: locking the shooting parameters during capture, so that the exposure time, focusing and metering of the sampled images are kept consistent; and, after the images are obtained, applying simple image filtering and contrast enhancement to them. These preprocessing measures can effectively reduce the influence of noise and blurred detail on the later image alignment.
Step 420: search for the mutually matched pixel points in the multiple frames.
There are many image alignment algorithms, mainly divided into methods based on local features and methods based on global features. The typical local-feature method is to extract and match key feature points of the images, then use these key feature points to calculate the mapping matrix of the image spatial alignment model, and finally use the mapping matrix to carry out image alignment. The registration effect of this class of methods can generally meet the requirements of many scenes, such as illumination changes (synthesis of images with different exposures), large-scale image shifts (panoramic image stitching), and various complex scenes such as low-light images (with increased noise). However, the extraction and matching of image feature points are generally time-consuming, as in the SIFT and SURF feature point matching algorithms. The other class is search alignment methods based on global mutual matching, which can reduce the matching errors caused by random feature points, but for illumination changes and large-scale image movement they are slow and their effect is unstable.
The optical flow field is also a point-based matching algorithm. It analyzes the instantaneous velocity of the pixel motion of spatially moving objects observed on the imaging plane; it is a method that uses the change of pixels in the time domain of an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby calculating the motion information of objects between adjacent frames. The purpose of studying the optical flow field is to approximately obtain, from a sequence of pictures, the motion field that cannot be obtained directly. The motion field is in fact the motion of objects in the three-dimensional real world, while the optical flow field is the projection of the motion field onto the two-dimensional image plane (of the human eye or of the camera).
For a sequence of pictures, finding the movement velocity and direction of motion of each pixel in every image gives the optical flow field. As shown in Fig. 3A, the position of point A in frame T is (x1, y1); the same point A is then found again in frame T+1, and if its position is (x2, y2), the motion vector of point A can be determined as:
V = (x2, y2) - (x1, y1)
The key of the problem is how to find the position of point A in frame T+1. A Lucas-Kanade optical flow method is introduced here; its basic process is as follows:
The method assumes that the color of an object does not change greatly or significantly between the two consecutive frames. Based on this idea, the image constraint equation can be obtained; different optical flow algorithms solve the optical flow problem under different additional assumptions. Using partial derivatives with respect to the spatial and temporal coordinates, the image constraint equation can be written as
$$I(x, y, t) = I(x + dx, y + dy, t + dt)$$
where $I(x, y, t)$ is the pixel value of the image at position $(x, y)$ at time $t$.
Assuming the movement is small enough, applying Taylor's formula to the image constraint equation gives:
$$I(x + dx, y + dy, t + dt) = I(x, y, t) + \frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt + H.O.T.$$
where H.O.T. denotes the higher-order terms, which can be ignored when the movement is small enough. From this equation we obtain:
$$\frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt = 0$$
Let $V_x = dx/dt$ and $V_y = dy/dt$ denote the x-axis and y-axis components of the optical flow vector of $I(x, y, t)$, and let $I_x = \partial I/\partial x$, $I_y = \partial I/\partial y$ and $I_t = \partial I/\partial t$ denote the derivatives of the image at the point $(x, y, t)$ in the x, y and t directions. Then:
$$I_x V_x + I_y V_y = -I_t, \qquad \text{i.e.} \qquad \nabla I^T \cdot \vec{V} = -I_t$$
This equation contains two unknowns, so at least two independent equations are needed to solve it. The Lucas-Kanade optical flow method assumes that the motion of the pixel points in a spatial neighborhood is consistent: neighboring points of the scene project onto neighboring points of the image, and neighboring points have consistent velocities. This is the distinctive assumption of the Lucas-Kanade optical flow method, because the basic optical flow constraint provides only one equation, while the velocities in the x and y directions are two unknowns. By assuming that the points in the neighborhood of a feature point undergo similar motion, n equations can be combined to solve for the velocities in the x and y directions (n is the total number of points in the neighborhood of the feature point, including the feature point itself). The following system of equations is obtained:
$$\begin{bmatrix} I_{x1} & I_{y1} \\ I_{x2} & I_{y2} \\ \vdots & \vdots \\ I_{xn} & I_{yn} \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} -I_{t1} \\ -I_{t2} \\ \vdots \\ -I_{tn} \end{bmatrix}$$
To solve this overdetermined problem, the least squares method is adopted:
$$A\vec{V} = -b, \qquad \vec{V} = (A^T A)^{-1} A^T (-b)$$
which gives the optical flow vector of the neighborhood:
$$\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum I_{xi}^2 & \sum I_{xi} I_{yi} \\ \sum I_{xi} I_{yi} & \sum I_{yi}^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum I_{xi} I_{ti} \\ -\sum I_{yi} I_{ti} \end{bmatrix}$$
The small-motion assumption mentioned above breaks down when the target moves fast; fortunately, a multi-scale approach can solve this problem. After each frame is downscaled to different degrees, a Gaussian pyramid is established, with the smallest-scale picture at the top layer and the original image at the bottom layer. Then, starting from the top layer, the position of a pixel in the next frame is estimated, and the estimate is used as the initial position of the pixel in the layer below; the search proceeds downward along the pyramid, repeating the estimation, until the bottom of the pyramid is reached. In this way the search can quickly locate the motion direction and position of a pixel.
Step 430: calculate, according to the mutually matched pixel points in the multiple frames, the alignment model for aligning the multiple frames. In the technical scheme of this embodiment, because the motion vectors between the mutually matched pixel points reflect the motion between the multiple frames, the alignment model calculated from the mutually matched pixel points can eliminate the motion relationship between the frames, so that the multiple frames can be synthesized together with high quality. It is very important to select the correct image alignment model in the image alignment process; the models available in this embodiment are the affine transform model and the perspective transform model.
Step 440: align the multiple frames using the alignment model. In this embodiment, the alignment model can be used to align the pixel points of the multiple frames.
Step 450: synthesize the aligned multiple frames. Fig. 3B is the principle flow chart of the technical scheme of this embodiment: after frame T and frame T+1 are preprocessed, sparse match points are computed with optical flow (sparse matching is feature matching, used to calculate the matched pixel points), the matched pixel points are used to calculate the alignment matrix, and image alignment is carried out. According to the technical scheme of this embodiment, for the multiple frames obtained by shooting, the mutually matched pixel points between the frames are first found; based on the mutually matched pixel points, a model for aligning the frames can be calculated, and the alignment model is then used to align the images. According to the technical scheme of the present invention, the multiple frames can be aligned without relying on equipment such as a fixed bracket, which reduces the pixel deviation after the frames are synthesized.
As shown in Fig. 5, a fourth embodiment of the present invention proposes a multi-frame photographed image synthesis method, comprising:
Step 510: acquire the multiple captured frames.
Step 520: downscale the multiple frames, search for mutually matched pixel points in the downscaled frames, and, according to the mutually matched pixel points in the downscaled frames, determine the mutually matched pixel points in the frames before downscaling. In this embodiment, as shown in Fig. 3C, in order to speed up image alignment, and on the premise of meeting the image precision requirement, the images are first downscaled and the coordinates of the sparse match points are found in the small-scale space; the sparse match point coordinates are then scale-transformed according to the scale transformation relationship, thereby obtaining the sparse match point coordinate mapping for the large-size images.
Step 530: calculate the scaling and rotation parameters and displacement parameters reflected by the mutually matched pixel points, and establish the alignment model according to the obtained scaling and rotation parameters and displacement parameters, specifically:
According to the preset formula
$$x' = a_{00}x + a_{01}y + a_{02}, \qquad y' = a_{10}x + a_{11}y + a_{12}$$
the coordinate values of a pixel point of the first frame of the multiple frames are substituted for x and y in the formula, and the coordinate values of the mutually matched pixel point of the second frame are substituted for x' and y', so as to calculate the scaling and rotation parameters a00, a01, a10, a11 and the displacement parameters a02, a12; the formula then serves as the alignment model for aligning the multiple frames. In this embodiment, the above formula is actually the affine transform model: any parallelogram in a plane can be mapped to another parallelogram by an affine transformation; the mapping operation of the image is carried out within the same spatial plane, and different transformation parameters deform it into different types of parallelograms. In this embodiment, when the affine transformation matrix model is used, the scaling and rotation of the image are controlled based on the scaling and rotation parameters, and the displacement of the image is controlled based on the displacement parameters.
In another embodiment, the scaling and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matched pixel points are calculated, and the alignment model is established according to the obtained scaling and rotation parameters, displacement parameters and deformation parameters in the horizontal and vertical directions, specifically:
According to the preset formula
$$x' = \frac{a_{00}x + a_{01}y + a_{02}}{a_{20}x + a_{21}y + 1}, \qquad y' = \frac{a_{10}x + a_{11}y + a_{12}}{a_{20}x + a_{21}y + 1}$$
the coordinate values of a pixel point of the first frame of the multiple frames are substituted for x and y in the formula, and the coordinate values of the mutually matched pixel point of the second frame are substituted for x' and y', so as to calculate the scaling and rotation parameters a00, a01, a10, a11, the displacement parameters a02, a12, and the deformation parameters a20, a21 in the horizontal and vertical directions; the formula then serves as the alignment model for aligning the images. In this embodiment, the above formula is actually the perspective (transmission) transform model, which is more flexible than the affine transform model: a perspective transformation can transform a rectangle into a trapezoid, and it describes the projection of one spatial plane onto another spatial plane; the affine transformation can be regarded as a special case of the perspective transformation. Considering a handheld mobile device such as a mobile phone, when multiple frames are captured continuously the jitter motion of the phone is generally not within the same plane, and in this case the perspective transform model can be selected. In this embodiment, when the perspective transformation matrix model is used, the scaling and rotation of the image are controlled based on the scaling and rotation parameters, the displacement of the image is controlled based on the displacement parameters, and the deformation of the image in the horizontal and vertical directions is controlled based on the horizontal and vertical deformation parameters.
Step 540: use the alignment model to align the multi-frame images.
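A sketch of this step, warping every later frame onto the first frame taken as the reference; treating the first frame as the reference and the default border handling are assumptions made only for illustration:

```python
import cv2

def align_frames(frames, models):
    """Warp frames[1:] onto frames[0] using one 3x3 model per frame.
    Each model is assumed to map that frame's pixel coordinates into the
    coordinate system of the reference frame frames[0]."""
    h, w = frames[0].shape[:2]
    aligned = [frames[0]]
    for frame, H in zip(frames[1:], models):
        aligned.append(cv2.warpPerspective(frame, H, (w, h)))
    return aligned
```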
Step 550: according to the current image application scenario, select the corresponding synthesis strategy to synthesize the multi-frame images. In this embodiment, once the pixel points at every position of the multi-frame images are aligned, pixel misalignment no longer appears after image synthesis. The synthesis strategy generally differs between application scenarios of image processing: for example, multiple exposure requires semi-transparent fusion, while multi-frame noise reduction requires weighted averaging, and so on.
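Two minimal sketches of the synthesis strategies mentioned above, assuming the aligned frames are equally sized 8-bit numpy arrays (a simplification for illustration; the patent does not fix the weights):

```python
import numpy as np

def multi_exposure_blend(aligned_frames):
    """Semi-transparent fusion: every frame contributes with equal opacity."""
    alpha = 1.0 / len(aligned_frames)
    out = np.zeros_like(aligned_frames[0], dtype=np.float32)
    for frame in aligned_frames:
        out += alpha * frame.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

def multi_frame_denoise(aligned_frames, weights=None):
    """Multi-frame noise reduction: weighted average across the frames."""
    stack = np.stack([f.astype(np.float32) for f in aligned_frames])
    out = np.average(stack, axis=0, weights=weights)
    return np.clip(out, 0, 255).astype(np.uint8)
```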
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that comprises that element.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A multi-frame photographed image synthesis device, characterized in that it comprises:
an image acquisition module, configured to acquire a plurality of captured frames of images;
a matched pixel point searching module, configured to search for mutually matching pixel points in the multi-frame images;
an alignment model calculation module, configured to calculate, according to the mutually matching pixel points in the multi-frame images, an alignment model for aligning the multi-frame images;
an image alignment module, configured to align the multi-frame images by using the alignment model; and
an image synthesis module, configured to synthesize the aligned multi-frame images.
2. The device according to claim 1, characterized in that
the alignment model calculation module calculates the scale and rotation parameters and the displacement parameters reflected by the mutually matching pixel points, and establishes the alignment model according to the obtained scale and rotation parameters and displacement parameters.
3. The device according to claim 1, characterized in that
the alignment model calculation module calculates the scale and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matching pixel points, and establishes the alignment model according to the obtained scale and rotation parameters, displacement parameters, and horizontal and vertical deformation parameters.
4. The device according to claim 1, characterized in that
the matched pixel point searching module downscales the multi-frame images, searches for mutually matching pixel points in the downscaled multi-frame images, and determines, according to the mutually matching pixel points in the downscaled multi-frame images, the mutually matching pixel points in the multi-frame images before downscaling.
5. The device according to any one of claims 1 to 4, characterized in that
the image synthesis module selects, according to the current image application scenario, a corresponding synthesis strategy to synthesize the multi-frame images.
6. A multi-frame photographed image synthesis method, characterized in that it comprises:
acquiring a plurality of captured frames of images;
searching for mutually matching pixel points in the multi-frame images;
calculating, according to the mutually matching pixel points in the multi-frame images, an alignment model for aligning the multi-frame images;
aligning the multi-frame images by using the alignment model; and
synthesizing the aligned multi-frame images.
7. The method according to claim 6, characterized in that calculating the alignment model for aligning the multi-frame images specifically comprises:
calculating the scale and rotation parameters and the displacement parameters reflected by the mutually matching pixel points, and establishing the alignment model according to the obtained scale and rotation parameters and displacement parameters.
8. The method according to claim 6, characterized in that calculating the alignment model for aligning the multi-frame images specifically comprises:
calculating the scale and rotation parameters, the displacement parameters, and the deformation parameters in the horizontal and vertical directions reflected by the mutually matching pixel points, and establishing the alignment model according to the obtained scale and rotation parameters, displacement parameters, and horizontal and vertical deformation parameters.
9. The method according to claim 6, characterized in that searching for mutually matching pixel points in the multi-frame images specifically comprises:
downscaling the multi-frame images and searching for mutually matching pixel points in the downscaled multi-frame images; and determining, according to the mutually matching pixel points in the downscaled multi-frame images, the mutually matching pixel points in the multi-frame images before downscaling.
10. The method according to any one of claims 6 to 9, characterized in that synthesizing the aligned multi-frame images specifically comprises:
selecting, according to the current image application scenario, a corresponding synthesis strategy to synthesize the multi-frame images.
CN201610192808.3A 2016-03-30 2016-03-30 Multi-frame photographed image synthesizer and method Pending CN105611181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610192808.3A CN105611181A (en) 2016-03-30 2016-03-30 Multi-frame photographed image synthesizer and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610192808.3A CN105611181A (en) 2016-03-30 2016-03-30 Multi-frame photographed image synthesizer and method

Publications (1)

Publication Number Publication Date
CN105611181A true CN105611181A (en) 2016-05-25

Family

ID=55990694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610192808.3A Pending CN105611181A (en) 2016-03-30 2016-03-30 Multi-frame photographed image synthesizer and method

Country Status (1)

Country Link
CN (1) CN105611181A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463817A (en) * 2013-09-12 2015-03-25 华为终端有限公司 Image processing method and device
CN104065854A (en) * 2014-06-18 2014-09-24 联想(北京)有限公司 Image processing method and electronic device
CN105430266A (en) * 2015-11-30 2016-03-23 努比亚技术有限公司 Image processing method based on multi-scale transform and terminal
CN105427263A (en) * 2015-12-21 2016-03-23 努比亚技术有限公司 Method and terminal for realizing image registering
CN105427333A (en) * 2015-12-22 2016-03-23 厦门美图之家科技有限公司 Real-time registration method of video sequence image, system and shooting terminal

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898159B (en) * 2016-05-31 2019-10-29 努比亚技术有限公司 A kind of image processing method and terminal
CN105915796A (en) * 2016-05-31 2016-08-31 努比亚技术有限公司 Electronic aperture shooting method and terminal
CN105898159A (en) * 2016-05-31 2016-08-24 努比亚技术有限公司 Image processing method and terminal
WO2017206656A1 (en) * 2016-05-31 2017-12-07 努比亚技术有限公司 Image processing method, terminal, and computer storage medium
CN106254772A (en) * 2016-07-29 2016-12-21 广东欧珀移动通信有限公司 Multiple image synthetic method and device
CN106254772B (en) * 2016-07-29 2017-11-07 广东欧珀移动通信有限公司 Multiple image synthetic method and device
US10728465B2 (en) 2016-07-29 2020-07-28 Guangdong Oppo Mobile Telecommuications Corp., Ltd. Method and device for compositing a plurality of images
CN106097284B (en) * 2016-07-29 2019-08-30 努比亚技术有限公司 A kind of processing method and mobile terminal of night scene image
CN106097284A (en) * 2016-07-29 2016-11-09 努比亚技术有限公司 The processing method of a kind of night scene image and mobile terminal
CN107483839A (en) * 2016-07-29 2017-12-15 广东欧珀移动通信有限公司 Multiple image synthetic method and device
WO2018018927A1 (en) * 2016-07-29 2018-02-01 广东欧珀移动通信有限公司 Method and device for synthesizing multiple frames of images
WO2018019128A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Method for processing night scene image and mobile terminal
US10686997B2 (en) 2016-07-29 2020-06-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for compositing a plurality of images
CN107230192A (en) * 2017-05-31 2017-10-03 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
WO2018219013A1 (en) * 2017-05-31 2018-12-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, computer readable storage medium and electronic device
CN107230192B (en) * 2017-05-31 2020-07-21 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and mobile terminal
US10497097B2 (en) 2017-05-31 2019-12-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, computer readable storage medium and electronic device
CN107451952B (en) * 2017-08-04 2020-11-03 追光人动画设计(北京)有限公司 Splicing and fusing method, equipment and system for panoramic video
CN107451952A (en) * 2017-08-04 2017-12-08 追光人动画设计(北京)有限公司 A kind of splicing and amalgamation method of panoramic video, equipment and system
CN107465882A (en) * 2017-09-22 2017-12-12 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN109246332A (en) * 2018-08-31 2019-01-18 北京达佳互联信息技术有限公司 Video flowing noise-reduction method and device, electronic equipment and storage medium
WO2020042826A1 (en) * 2018-08-31 2020-03-05 北京达佳互联信息技术有限公司 Video stream denoising method and apparatus, electronic device and storage medium
CN108898567B (en) * 2018-09-20 2021-05-28 北京旷视科技有限公司 Image noise reduction method, device and system
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109767401B (en) * 2019-01-15 2021-02-12 深圳看到科技有限公司 Picture optimization method, device, terminal and corresponding storage medium
CN109767401A (en) * 2019-01-15 2019-05-17 深圳看到科技有限公司 Picture optimization method, device, terminal and corresponding storage medium
CN109819163A (en) * 2019-01-23 2019-05-28 努比亚技术有限公司 A kind of image processing control, terminal and computer readable storage medium
CN110213500A (en) * 2019-06-17 2019-09-06 易诚高科(大连)科技有限公司 A kind of wide dynamic drawing generating method for the shooting of more camera lenses
CN111145192A (en) * 2019-12-30 2020-05-12 维沃移动通信有限公司 Image processing method and electronic device
CN111182230A (en) * 2019-12-31 2020-05-19 维沃移动通信有限公司 Image processing method and device
CN111182230B (en) * 2019-12-31 2021-08-06 维沃移动通信有限公司 Image processing method and device
CN111327788A (en) * 2020-02-28 2020-06-23 北京迈格威科技有限公司 Synchronization method, temperature measurement method and device of camera set and electronic system
WO2022226701A1 (en) * 2021-04-25 2022-11-03 Oppo广东移动通信有限公司 Image processing method, processing apparatus, electronic device, and storage medium

Similar Documents

Publication Title
CN105611181A (en) Multi-frame photographed image synthesizer and method
CN105430263A (en) Long-exposure panoramic image photographing device and method
CN105430295B (en) Image processing apparatus and method
CN105959543B (en) It is a kind of to remove reflective filming apparatus and method
CN105898159B (en) A kind of image processing method and terminal
CN105704369B (en) A kind of information processing method and device, electronic equipment
CN109788189A (en) The five dimension video stabilization device and methods that camera and gyroscope are fused together
CN106612397A (en) Image processing method and terminal
CN103873764A (en) Information processing apparatus, information processing method, and program
US20090169122A1 (en) Method and apparatus for focusing on objects at different distances for one image
CN105472246B (en) Camera arrangement and method
CN105578045A (en) Terminal and shooting method of terminal
CN105578056A (en) Photographing terminal and method
CN104995904A (en) Image pickup device
CN113810604B (en) Document shooting method, electronic device and storage medium
CN105407295B (en) Mobile terminal filming apparatus and method
CN104853091A (en) Picture taking method and mobile terminal
CN105744170A (en) Picture photographing device and method
CN105427369A (en) Mobile terminal and method for generating three-dimensional image of mobile terminal
CN104796625A (en) Picture synthesizing method and device
CN105407275B (en) Photo synthesizer and method
CN103795937A (en) Information processing apparatus, display apparatus, control method for an information processing apparatus, and program
CN105915796A (en) Electronic aperture shooting method and terminal
CN115359105B (en) Depth-of-field extended image generation method, device and storage medium
CN111835973A (en) Shooting method, shooting device, storage medium and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160525