CN105488756A - Picture synthesizing method and device - Google Patents

Picture synthesizing method and device

Info

Publication number
CN105488756A
CN105488756A (application CN201510845403.0A; granted as CN105488756B)
Authority
CN
China
Prior art keywords
registration
picture
pictures
frame
reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510845403.0A
Other languages
Chinese (zh)
Other versions
CN105488756B (en)
Inventor
李嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510845403.0A priority Critical patent/CN105488756B/en
Publication of CN105488756A publication Critical patent/CN105488756A/en
Priority to PCT/CN2016/102847 priority patent/WO2017088618A1/en
Application granted granted Critical
Publication of CN105488756B publication Critical patent/CN105488756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a picture synthesizing method and device. The device comprises a picture obtaining module, a picture registration module, a subject extraction module, and a picture synthesizing module. The picture obtaining module obtains a plurality of pictures; the picture registration module performs feature registration on the plurality of pictures and obtains their common region; the subject extraction module extracts the subject region of each picture from the common region of the plurality of pictures; and the picture synthesizing module synthesizes the subject regions of all the pictures. With the method and device, the subject regions in multiple photos can be extracted automatically and merged into one photo: the user only needs to shoot, in the same scene, multiple photos in which the subject appears differently, and the terminal system automatically completes the synthesis of the multiple subjects, eliminating a great deal of manual operation.

Description

Picture synthesizing method and device
Technical field
The present invention relates to the technical field of picture processing, and in particular to a picture synthesizing method and device.
Background technology
As smart devices capable of taking pictures become more and more widespread, making photography more interesting and simpler has become a development direction for camera software. The clone camera is a class of software that has appeared in recent years: multiple photos are taken in the same scene, with the subject striking different poses at different positions, and the repeatedly photographed subject is then synthesized onto a single photo.
However, existing clone camera software requires the photographer to manually select the subject regions in the photos after shooting in order to guide the synthesis, which is cumbersome and time-consuming.
It is therefore necessary to provide a method that automatically compares multiple pictures to locate the subject regions and completes the synthesis automatically, sparing the photographer a large amount of manual processing.
Summary of the invention
The main purpose of the present invention is to propose a picture synthesizing method and device, aiming to synthesize the subject regions of multiple pictures automatically and to simplify user operation.
To achieve the above purpose, an embodiment of the present invention provides a picture synthesizing device, comprising:
a picture obtaining module, configured to obtain a plurality of pictures;
a picture registration module, configured to perform feature registration on the plurality of pictures and obtain a common region of the plurality of pictures;
a subject extraction module, configured to extract the subject region of each picture from the common region of the plurality of pictures; and
a picture synthesizing module, configured to synthesize the subject regions of all the pictures.
Optionally, the picture registration module comprises:
a reference frame selection unit, configured to select one picture from the plurality of pictures as the reference frame picture, the other pictures serving as frame pictures to be registered;
a registration parameter computing unit, configured to perform feature registration on each frame picture to be registered, with the reference frame picture as the benchmark, and to compute the registration parameters of each frame picture to be registered;
a picture transformation unit, configured to transform each frame picture to be registered according to its registration parameters so that it matches the reference frame picture, obtaining the registered pictures; and
a common region extraction unit, configured to extract the intersection of the regions of all the registered pictures, obtaining the common region of all the registered pictures.
Optionally, the registration parameter computing unit is further configured to select a transformation model and registration features and, according to the selected transformation model and registration features, perform feature registration on each frame picture to be registered with the reference frame picture as the benchmark, computing the registration parameters of each frame picture to be registered.
Optionally, the subject extraction module comprises:
a frame difference unit, configured to, based on the common region of the registered pictures, compare each registered picture with the reference frame picture to obtain the frame difference image of each registered picture, and to binarize each frame difference image to obtain a binary frame difference image;
a reference frame subject extraction unit, configured to extract the intersection of all the binary frame difference images, obtaining the subject region of the reference frame picture; and
a connected region extraction unit, configured to extract, from the binary frame difference image of each registered picture, the corresponding connected region, obtaining the subject region of each registered picture.
Optionally, the picture synthesizing module is further configured to synthesize the subject regions of all the registered pictures onto the subject region of the reference frame picture.
Optionally, the device further comprises:
a picture output module, configured to process the synthesized picture and/or send it externally.
An embodiment of the present invention also proposes a picture synthesizing method, comprising:
obtaining a plurality of pictures;
performing feature registration on the plurality of pictures to obtain a common region of the plurality of pictures;
extracting the subject region of each picture from the common region of the plurality of pictures; and
synthesizing the subject regions of all the pictures.
Optionally, the step of performing feature registration on the plurality of pictures and obtaining their common region comprises:
selecting one picture from the plurality of pictures as the reference frame picture, the other pictures serving as frame pictures to be registered;
performing feature registration on each frame picture to be registered, with the reference frame picture as the benchmark, and computing the registration parameters of each frame picture to be registered;
transforming each frame picture to be registered according to its registration parameters so that it matches the reference frame picture, obtaining the registered pictures; and
extracting the intersection of the regions of all the registered pictures to obtain the common region of all the registered pictures.
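The document leaves the registration algorithm open (the transformation model and registration features are chosen later). As an illustration only, the following sketch estimates registration parameters under an assumed pure-translation transformation model by brute-force search; this is a toy stand-in, not the patented method.

```python
import numpy as np

def register_translation(ref, frame, max_shift=5):
    """Estimate an integer (dy, dx) translation aligning `frame` to `ref`
    by exhaustive search over a small shift window (illustrative only)."""
    best, best_err = (0, 0), np.inf
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            # score only the interior, which np.roll's wrap-around cannot corrupt
            err = np.mean((shifted[m:-m, m:-m] - ref[m:-m, m:-m]) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# toy example: a bright square, and a frame shifted by (-2, -3)
ref = np.zeros((20, 20))
ref[5:9, 5:9] = 1.0
frame = np.roll(ref, (-2, -3), axis=(0, 1))
print(register_translation(ref, frame))  # → (2, 3)
```

In practice a feature-based registration (e.g. keypoint matching plus a homography fit) would replace this exhaustive search, which is why the claims keep the transformation model and registration features as selectable choices.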
Optionally, the step of performing feature registration on each frame picture to be registered, with the reference frame picture as the benchmark, and computing its registration parameters comprises:
selecting a transformation model and registration features; and
performing feature registration on each frame picture to be registered according to the selected transformation model and registration features, with the reference frame picture as the benchmark, and computing the registration parameters of each frame picture to be registered.
Optionally, the step of extracting the subject region of each picture from the common region of the plurality of pictures comprises:
based on the common region of the registered pictures, comparing each registered picture with the reference frame picture to obtain the frame difference image of each registered picture;
binarizing each frame difference image to obtain a binary frame difference image;
extracting the intersection of all the binary frame difference images to obtain the subject region of the reference frame picture; and
extracting, from the binary frame difference image of each registered picture, the corresponding connected region to obtain the subject region of each registered picture.
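The frame-difference, binarization, and connected-region steps above can be sketched as follows. This is a minimal illustration rather than the patented implementation: the threshold value, the 4-connectivity, and the "keep the largest component" policy are all assumptions, and plain NumPy stands in for whatever image library a terminal would use.

```python
import numpy as np
from collections import deque

def frame_diff_binary(ref, frame, thresh=0.2):
    """Absolute frame difference against the reference frame, then binarization."""
    return (np.abs(frame.astype(float) - ref.astype(float)) > thresh).astype(np.uint8)

def largest_connected_region(binary):
    """Return a boolean mask of the largest 4-connected foreground region (BFS)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) > best.sum():
                    mask = np.zeros((h, w), dtype=bool)
                    mask[tuple(zip(*comp))] = True
                    best = mask
    return best

# toy example: one large "subject" blob and one small noise blob
ref = np.zeros((10, 10))
frame = ref.copy()
frame[1:4, 1:4] = 1.0   # subject (9 px)
frame[7:9, 7:9] = 1.0   # noise (4 px)
binary = frame_diff_binary(ref, frame)
subject = largest_connected_region(binary)
print(int(binary.sum()), int(subject.sum()))  # → 13 9
```

Picking the largest connected region is one plausible way to isolate the subject from binarization noise; the claims only require extracting "the corresponding connected region" and do not fix the selection rule.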
Optionally, the step of synthesizing the subject regions of all the pictures comprises:
synthesizing the subject regions of all the registered pictures onto the subject region of the reference frame picture.
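A minimal sketch of this final compositing step, assuming the registered pictures and their subject-region masks are already available. The overwrite-on-overlap policy is an assumption for illustration; the document does not specify how overlapping subject regions are resolved.

```python
import numpy as np

def synthesize(ref, registered, masks):
    """Copy each registered picture's subject region (given by a boolean mask)
    into a copy of the reference frame picture; where masks overlap, later
    pictures overwrite earlier ones (an assumed, simple policy)."""
    out = ref.copy()
    for img, mask in zip(registered, masks):
        out[mask] = img[mask]
    return out

# toy example: two registered pictures, each contributing one subject pixel
ref = np.zeros((4, 4))
img1 = np.ones((4, 4))
m1 = np.zeros((4, 4), dtype=bool); m1[0, 0] = True
img2 = np.full((4, 4), 2.0)
m2 = np.zeros((4, 4), dtype=bool); m2[3, 3] = True
out = synthesize(ref, [img1, img2], [m1, m2])
print(out[0, 0], out[3, 3])  # → 1.0 2.0
```

A production implementation would likely blend the mask edges (e.g. feathering) rather than hard-copy pixels, but the masked copy captures the essence of merging several subject poses into one reference frame.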
With the picture synthesizing method and device proposed by the embodiments of the present invention, the subject regions in multiple pictures can be extracted automatically and synthesized into one photo. The user only needs to shoot, in the same scene, multiple photos in which the subject appears differently; the terminal system automatically completes the synthesis of the multiple subjects, saving a large amount of manual operation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware configuration of an optional mobile terminal for realizing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a functional block diagram of the first embodiment of the picture synthesizing device of the present invention;
Fig. 4 is a schematic structural diagram of the picture registration module in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the subject extraction module in an embodiment of the present invention;
Fig. 6 is a schematic diagram of a picture synthesis effect of an embodiment of the present invention;
Fig. 7 is a functional block diagram of the second embodiment of the picture synthesizing device of the present invention;
Fig. 8 is a schematic flowchart of a preferred embodiment of the picture synthesizing method of the present invention.
The realization of the purposes, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are intended only to explain the present invention and not to limit it.
The terminal device involved in the solutions of the embodiments of the present invention mainly refers to a mobile terminal.
The mobile terminal for realizing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may comprise mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable media player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will appreciate that, except for elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware configuration of an optional mobile terminal for realizing embodiments of the present invention.
The mobile terminal 100 may comprise a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-associated information, or a server that receives previously generated broadcast signals and/or broadcast-associated information and transmits them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast-associated information may exist in various forms, for example the electronic program guide (EPG) of digital multimedia broadcasting (DMB) or the electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 may receive broadcast signals using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO forward link media data broadcasting system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit various broadcast systems providing broadcast signals in addition to the above digital broadcasting systems. Broadcast signals and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining location information of the mobile terminal; a typical example is GPS (global positioning system). According to current technology, the GPS module calculates distance information from three or more satellites along with accurate time information and applies triangulation to the calculated information, thereby computing three-dimensional current location information by longitude, latitude, and altitude with high accuracy. Currently, the method for calculating location and time information uses three satellites and corrects the error of the calculated location and time information with a further satellite. In addition, the GPS module 115 can compute speed information by continuously calculating the current location in real time.
The A/V input unit 120 is for receiving audio or video signals and may include a camera 121, which processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras may be provided depending on the structure of the mobile terminal.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog stick, and the like. In particular, when the touch pad is superposed on the display unit 151 as a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating a user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as the "identifying device") may take the form of a smart card; accordingly, the identifying device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive inputs (e.g., data, information, power, etc.) from an external device and transfer the received inputs to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle may serve as a signal for recognizing whether the mobile terminal is accurately mounted on the cradle.
The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner, and may include the display unit 151 and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superposed on each other as layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The memory 160 may store software programs for the processing and control operations performed by the controller 180 and the like, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 usually controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 may be configured within the controller 180 or configured separately from it. The controller 180 may perform pattern recognition processing so as to recognize handwriting input or drawing input performed on the touch screen as characters or pictures.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented using a computer-readable medium such as computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that allow the execution of at least one function or operation. Software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 160 and executed by the controller 180.
Thus far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type mobile terminals. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
The communication systems in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with a public switched telephone network (PSTN) 290, and also to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including for example E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be appreciated that a system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), with each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such cases, the term "base station" may be used to refer broadly to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site"; alternatively, each sector of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive broadcast signals transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module shown in Fig. 1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking technology, other technologies capable of tracking the location of a mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100, which usually participate in calls, messaging, and other types of communication. Each reverse link signal received by a particular BS 270 is processed by that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on above-mentioned mobile terminal hardware configuration and communication system, each embodiment of the present invention is proposed.
Existing clone-camera software requires the photographer, after shooting is complete, to manually select the subject region in each photo and then perform photo composition, which makes the user operation tedious and time-consuming.
To this end, the present invention proposes a solution that automatically composites the subject regions of a plurality of pictures, thereby simplifying user operation.
Specifically, as shown in FIG. 3, a first embodiment of the present invention proposes a picture synthesizing device, comprising a picture acquisition module 201, a picture registration module 202, a subject extraction module 203 and a picture synthesis module 204, wherein:
the picture acquisition module 201 is configured to acquire a plurality of pictures;
the picture registration module 202 is configured to perform feature registration on the plurality of pictures to obtain a common region of the plurality of pictures;
the subject extraction module 203 is configured to extract the subject region of each picture from the common region of the plurality of pictures; and
the picture synthesis module 204 is configured to composite the subject regions of all the pictures.
Specifically, the picture synthesizing device of this embodiment may be provided on a mobile terminal such as the mobile phone described above, and automatic composition of the subject regions of a plurality of pictures is realized by the device, so as to simplify user operation.
First, a plurality of pictures is acquired by the picture acquisition module 201. These pictures may be taken in the same shooting scene or, of course, in different shooting scenes; the subject may be the same person in each picture, different people, or multiple people, and the pictures may even be pure scenery.
Taking pictures shot in the same scene as an example: the user inputs, through the picture acquisition module, three or more photos taken in the same shooting scene, denoted I_1, I_2, ..., I_n, where n >= 3.
Then, the picture registration module 202 aligns the plurality of input pictures to the same background, obtaining the common region of the plurality of pictures.
A specific implementation is as follows:
First, one picture is chosen from the plurality of pictures as the reference frame picture, and the other pictures serve as frames to be registered.
Then, taking the reference frame picture as the benchmark, feature registration is performed on each frame to be registered, and the registration parameters of each frame to be registered are calculated.
Afterwards, each frame to be registered is transformed according to its registration parameters so that it matches the reference frame picture, yielding the registered pictures.
Finally, the intersection of the regions of all registered pictures is extracted, obtaining the common region of all registered pictures.
In a specific application, as shown in FIG. 4, the picture registration module 202 may comprise a reference frame choosing unit 2021, a registration parameter computing unit 2022, a picture transforming unit 2023 and a common region extraction unit 2024, wherein:
the reference frame choosing unit 2021 is configured to choose one picture from the plurality of pictures as the reference frame picture, the other pictures serving as frames to be registered;
the registration parameter computing unit 2022 is configured to perform, with the reference frame picture as the benchmark, feature registration on each frame to be registered and calculate the registration parameters of each frame to be registered;
the picture transforming unit 2023 is configured to transform each frame to be registered according to its registration parameters so that it matches the reference frame picture, obtaining the registered pictures; and
the common region extraction unit 2024 is configured to extract the intersection of the regions of all registered pictures, obtaining the common region of all registered pictures.
Based on the above structure of the picture registration module 202, a specific implementation process is as follows:
Step 21: the reference frame choosing unit 2021 selects one of the n photos input by the picture acquisition module as the reference frame. The reference frame may be chosen randomly or deterministically; deterministic selection modes include, but are not limited to, the following:
1. Select the first frame (the first photo) as the reference frame.
2. Select the last frame as the reference frame.
3. Evaluate the sharpness of all n photos and select the sharpest one as the reference frame.
The sharpness evaluation algorithm may take second derivatives of the picture along the x and y axes and accumulate the absolute values of these derivatives; the larger the accumulated sum, the sharper the picture.
Thus, after selection among the n photos input by the picture acquisition module, a reference frame F_r and the remaining frames F_1, F_2, ..., F_{n-1} are obtained.
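As a rough illustration, the second-derivative sharpness score of selection mode 3 might be computed as follows. This is a minimal NumPy sketch; the function name and the simple `np.diff`-based derivative are our own choices, not prescribed by the embodiment:

```python
import numpy as np

def sharpness_score(img):
    """Sum of absolute second derivatives along the x and y axes.

    Larger values indicate a sharper picture (more high-frequency detail).
    """
    img = img.astype(np.float64)
    d2y = np.diff(img, n=2, axis=0)  # second derivative along the y axis
    d2x = np.diff(img, n=2, axis=1)  # second derivative along the x axis
    return np.abs(d2y).sum() + np.abs(d2x).sum()
```

The reference frame choosing unit would evaluate this score for each of the n photos and pick the one with the largest value.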
Step 22: the registration parameter computing unit takes the reference frame as the benchmark, registers each of the other frames to the reference frame, and calculates the registration parameters.
The registration algorithm may be implemented in multiple ways, for example:
First, a transformation model is selected as the hypothesis. For instance, a global transformation of the picture may be selected, such as a geometric transformation, a similarity transformation, an affine transformation or a projective transformation; alternatively, a local transformation may divide the picture into parts and compute independent registration parameters for each part.
Then, the registration feature is selected; candidate features include feature points, cross-correlation, mutual information, etc.
For feature points, the registration parameters are obtained as follows: feature points are extracted from the reference frame F_r, the corresponding feature points are extracted from (or searched for in) the frame to be registered F_i, and the registration parameters are solved using the feature point positions as data.
For cross-correlation, the registration parameters are obtained as follows: the pictures are transformed to the frequency domain via the Fourier transform, the correlation of the frame to be registered F_i at each spatial position is computed with the cross-correlation formula, and the position of maximum correlation is taken as the registration result.
For mutual information, the registration parameters are obtained as follows: mutual information is a measure of picture similarity, and an optimization algorithm (such as gradient descent) searches the registration parameter space for the extremum that maximizes the mutual information, i.e. the parameters that best register the frame to be registered F_i to the reference frame F_r.
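For the cross-correlation option, a minimal sketch under a pure-translation transformation model could look like the following. The restriction to circular translation, the function name, and the peak-folding convention are our own simplifications; the embodiment allows richer transformation models:

```python
import numpy as np

def translation_by_cross_correlation(ref, moving):
    """Estimate the (dy, dx) shift such that `moving` ~= `ref` rolled by (dy, dx).

    The correlation is computed in the frequency domain and the position of the
    correlation peak is taken as the registration result, as described above.
    """
    F = np.fft.fft2(ref.astype(np.float64))
    G = np.fft.fft2(moving.astype(np.float64))
    corr = np.fft.ifft2(np.conj(F) * G).real  # circular cross-correlation
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(ref.shape)
    wrap = peak > size // 2
    peak[wrap] -= size[wrap]  # fold wrap-around positions to signed shifts
    return tuple(int(p) for p in peak)
```

The picture transforming unit would then undo the estimated shift to align F_i with F_r; a projective model would instead solve for a homography.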
Step 23: after the registration parameters are calculated, the picture transforming unit transforms each frame to be registered F_i, i = 1, ..., n-1, so that it matches the reference picture, obtaining the registered pictures W_1, W_2, ..., W_{n-1}.
Step 24: the common region extraction unit computes the common region of all the transformed pictures from the registered pictures W_1, W_2, ..., W_{n-1}.
The common region is the intersection of the regions of these registered pictures. Once the common region is obtained, subsequent extraction and synthesis operate only on the common-region part of all photos.
Then, the subject region of each picture is extracted from the common region of the plurality of pictures.
The specific implementation process is as follows:
First, based on the common region of the registered pictures, each registered picture is compared with the reference frame picture by differencing, obtaining a frame difference image.
Then, binarization is applied to the frame difference image, obtaining a frame difference binary image.
Afterwards, the intersection of all the frame difference binary images is extracted, obtaining the subject region of the reference frame picture.
Finally, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture are extracted, obtaining the subject region of each registered picture.
In a specific application, as shown in FIG. 5, the subject extraction module 203 comprises a frame difference unit 2031, a reference frame subject extraction unit 2032 and a connected region extraction unit 2033, wherein:
the frame difference unit 2031 is configured to compare, based on the common region of the registered pictures, each registered picture with the reference frame picture by differencing to obtain a frame difference image, and to binarize the frame difference image to obtain a frame difference binary image;
the reference frame subject extraction unit 2032 is configured to extract the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture; and
the connected region extraction unit 2033 is configured to extract, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture to obtain the subject region of each registered picture.
Based on the above structure of the subject extraction module 203, a specific implementation process is as follows:
Step 31: the frame difference unit compares each registered picture with the reference frame by differencing. For example, the color difference between the registered picture W_i and the reference frame F_r is computed to obtain the frame difference image; the formula can be expressed as:
DIFF_i(x, y) = abs(W_i(x, y) - F_r(x, y));
where DIFF_i(x, y) denotes the pixel value at coordinate (x, y) of the frame difference image. The magnitude of a pixel value in the frame difference image represents the magnitude of the color difference between the registered picture and the reference frame at that position.
Then, the frame difference image is binarized to obtain the frame difference binary image; the formula is:
T_i(x, y) = 1 if DIFF_i(x, y) > θ, else 0;
where θ is a preset threshold and T_i(x, y) is the pixel value of the frame difference binary image at coordinate (x, y): a value of 1 indicates that the registered picture differs from the reference frame there, and 0 indicates no difference.
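The frame-difference and binarization formulas above can be sketched as follows for grayscale pictures (a minimal NumPy sketch; the function name is ours):

```python
import numpy as np

def frame_diff_binary(W_i, F_r, theta):
    """Return the frame difference image DIFF_i and its binary map T_i.

    DIFF_i(x, y) = |W_i(x, y) - F_r(x, y)|;
    T_i(x, y) = 1 where the difference exceeds the preset threshold theta.
    """
    diff = np.abs(W_i.astype(np.int32) - F_r.astype(np.int32))  # avoid uint8 wrap-around
    t = (diff > theta).astype(np.uint8)
    return diff, t
```

For color pictures the same comparison would be applied per channel (or to a channel-wise distance), a detail the embodiment leaves open.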
Step 32: the reference frame subject extraction unit obtains the subject region of the reference frame F_r from the frame difference binary images T_1, T_2, ..., T_{n-1}, as follows:
Mask_r(x, y) = 1 if T_i(x, y) = 1 for all i, else 0;
Because the subject position differs from photo to photo, when the reference frame is differenced against every other frame, the subject part of the reference frame is certain to differ from each of the other frames. The intersection of all frame difference binary images is therefore taken as the reference frame subject.
Step 33: the connected region extraction unit is responsible for extracting the subject regions of every frame other than the reference frame, obtaining the connected regions.
First, the subject region of the reference frame is removed from the frame difference binary images T_1, T_2, ..., T_{n-1}:
T'_i = T_i ∩ ¬Mask_r;
The purpose is that the processed frame difference binary images T'_1, T'_2, ..., T'_{n-1} retain only the subject parts of the registered pictures.
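The intersection that yields Mask_r and the removal that yields T'_i might be sketched with boolean NumPy arrays as follows (the function name is ours):

```python
import numpy as np

def split_subject_masks(binary_maps):
    """From the frame difference binary maps T_1..T_{n-1}, compute Mask_r
    (reference-frame subject: pixels that differ in every map) and
    T'_i = T_i ∩ not(Mask_r) (each map with the reference subject removed)."""
    maps = [t.astype(bool) for t in binary_maps]
    mask_r = maps[0].copy()
    for t in maps[1:]:
        mask_r &= t                       # intersection over all frame differences
    pruned = [t & ~mask_r for t in maps]  # remove the reference subject
    return mask_r, pruned
```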
Then, the binary images T'_1, T'_2, ..., T'_{n-1} are labeled: adjacent pixels with value 1 are marked as one region, and each region receives an independent number to distinguish it from the others.
Algorithms for connected-region labeling include the two-pass traversal method, the seeded region growing method, and so on. Labeling produces a label map L_i, in which the value of each pixel indicates which connected region of T'_i that pixel belongs to; for example, if pixel (x, y) lies in the j-th region of T'_i, then L_i(x, y) = j.
Next, a template image is generated for each connected region, as follows:
Mask_ij(x, y) = 1 if L_i(x, y) = j, else 0.
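Connected-region labeling and template generation can be sketched with a simple breadth-first search; a two-pass or seeded-growing implementation, as mentioned above, would serve equally. The implementation and its names are ours:

```python
from collections import deque

import numpy as np

def label_connected_regions(binary):
    """4-connectivity labeling: returns the label map L_i (0 = background,
    regions numbered 1..count) and the list of per-region templates Mask_ij."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:  # flood-fill the whole region with the current number
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    templates = [(labels == j).astype(np.uint8) for j in range(1, count + 1)]
    return labels, templates
```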
Finally, the picture synthesis module 204 composites the subject regions of all registered pictures onto the subject region of the reference frame picture.
Specifically, having obtained the subject of every frame, the picture synthesis module 204 composites the subject regions of the registered pictures W_1, W_2, ..., W_{n-1} into the reference picture.
The synthesis order may be W_1, W_2, ..., W_{n-1} or W_{n-1}, W_{n-2}, ..., W_1. Different synthesis orders produce different occlusion relations between subjects, because at the same position the subject composited later covers the subject composited earlier.
The composite picture is I_fusion, initialized to F_r. Suppose the current subject template to be composited is Mask_ij; then after this synthesis step:
I_fusion(x, y) = Mask_ij(x, y) · W_i(x, y) + (1 - Mask_ij(x, y)) · I_fusion(x, y);
After the subjects of all frames have been composited in turn, the final composite picture I_fusion is obtained.
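The per-template blending step above can be sketched for grayscale pictures as follows (the iteration order fixes the occlusion relation, as just noted; names are ours):

```python
import numpy as np

def compose_subjects(F_r, warped, per_frame_templates):
    """Paste each subject template Mask_ij of each registered picture W_i onto a
    copy of the reference frame; later templates cover earlier ones."""
    fused = F_r.astype(np.float64).copy()  # I_fusion initialized to F_r
    for W_i, templates in zip(warped, per_frame_templates):
        W_i = W_i.astype(np.float64)
        for mask in templates:
            m = mask.astype(np.float64)
            # I_fusion = Mask_ij * W_i + (1 - Mask_ij) * I_fusion
            fused = m * W_i + (1.0 - m) * fused
    return fused
```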
Subsequently, according to the user's needs, the composite picture may be compressed, saved, displayed, or sent to a network.
An example based on the scheme of this embodiment:
As shown in FIG. 6a, FIG. 6b and FIG. 6c, three pictures are taken in the same scene; after picture composition by the above scheme, the composite result shown in FIG. 6d is obtained.
With the above scheme, this embodiment automatically finds the subject regions in a plurality of pictures and composites them into one photo, so that the user only needs to take multiple photos with different subjects in the same scene and the device automatically completes the multi-subject composition, saving a large amount of manual operation.
As shown in FIG. 7, a second embodiment of the present invention proposes a picture synthesizing device which, based on the embodiment shown in FIG. 3 above, further comprises:
a picture output module 205, configured to process and/or externally send the composite picture.
According to the user's needs, the composite picture may be compressed, saved, displayed, or sent to a network.
With the above scheme, this embodiment automatically finds the subject regions in a plurality of pictures and composites them into one photo, so that the user only needs to take multiple photos with different subjects in the same scene and the device automatically completes the multi-subject composition, saving a large amount of manual operation.
Correspondingly, a picture synthesizing method embodiment of the present invention is proposed.
As shown in FIG. 8, a preferred embodiment of the present invention proposes a picture synthesizing method, comprising:
Step S101: acquiring a plurality of pictures.
The picture synthesizing method of this embodiment may be executed by a picture synthesizing device, which may be provided on a mobile terminal such as the mobile phone described above; automatic composition of the subject regions of a plurality of pictures is realized by the device, so as to simplify user operation.
First, a plurality of pictures is acquired. These pictures may be taken in the same shooting scene or, of course, in different shooting scenes; the subject may be the same person in each picture, different people, or multiple people, and the pictures may even be pure scenery.
Taking pictures shot in the same scene as an example: the user inputs, through the picture acquisition module of the picture synthesizing device, three or more photos taken in the same shooting scene, denoted I_1, I_2, ..., I_n, where n >= 3.
Step S102: performing feature registration on the plurality of pictures to obtain a common region of the plurality of pictures.
Then, the picture registration module aligns the plurality of input pictures to the same background, obtaining the common region of the plurality of pictures.
A specific implementation is as follows:
One picture is chosen from the plurality of pictures as the reference frame picture, and the other pictures serve as frames to be registered.
Taking the reference frame picture as the benchmark, feature registration is performed on each frame to be registered, and the registration parameters of each frame to be registered are calculated.
Each frame to be registered is transformed according to its registration parameters so that it matches the reference frame picture, yielding the registered pictures.
The intersection of the regions of all registered pictures is extracted, obtaining the common region of all registered pictures.
An example implementation process in a specific application is as follows:
Step 21: the reference frame choosing unit selects one of the n photos input by the picture acquisition module as the reference frame. The reference frame may be chosen randomly or deterministically; deterministic selection modes include, but are not limited to, the following:
1. Select the first frame (the first photo) as the reference frame.
2. Select the last frame as the reference frame.
3. Evaluate the sharpness of all n photos and select the sharpest one as the reference frame.
The sharpness evaluation algorithm may take second derivatives of the picture along the x and y axes and accumulate the absolute values of these derivatives; the larger the accumulated sum, the sharper the picture.
Thus, after selection among the n photos input by the picture acquisition module, a reference frame F_r and the remaining frames F_1, F_2, ..., F_{n-1} are obtained.
Step 22: the registration parameter computing unit takes the reference frame as the benchmark, registers each of the other frames to the reference frame, and calculates the registration parameters.
The registration algorithm may be implemented in multiple ways, for example:
First, a transformation model is selected as the hypothesis. For instance, a global transformation of the picture may be selected, such as a geometric transformation, a similarity transformation, an affine transformation or a projective transformation; alternatively, a local transformation may divide the picture into parts and compute independent registration parameters for each part.
Then, the registration feature is selected; candidate features include feature points, cross-correlation, mutual information, etc.
For feature points, the registration parameters are obtained as follows: feature points are extracted from the reference frame F_r, the corresponding feature points are extracted from (or searched for in) the frame to be registered F_i, and the registration parameters are solved using the feature point positions as data.
For cross-correlation, the registration parameters are obtained as follows: the pictures are transformed to the frequency domain via the Fourier transform, the correlation of the frame to be registered F_i at each spatial position is computed with the cross-correlation formula, and the position of maximum correlation is taken as the registration result.
For mutual information, the registration parameters are obtained as follows: mutual information is a measure of picture similarity, and an optimization algorithm (such as gradient descent) searches the registration parameter space for the extremum that maximizes the mutual information, i.e. the parameters that best register the frame to be registered F_i to the reference frame F_r.
Step 23: after the registration parameters are calculated, the picture transforming unit transforms each frame to be registered F_i, i = 1, ..., n-1, so that it matches the reference picture, obtaining the registered pictures W_1, W_2, ..., W_{n-1}.
Step 24: the common region extraction unit computes the common region of all the transformed pictures from the registered pictures W_1, W_2, ..., W_{n-1}.
The common region is the intersection of the regions of these registered pictures. Once the common region is obtained, subsequent extraction and synthesis operate only on the common-region part of all photos.
Step S103: extracting the subject region of each picture from the common region of the plurality of pictures.
Then, the subject region of each picture is extracted from the common region of the plurality of pictures.
The specific implementation process is as follows:
First, based on the common region of the registered pictures, each registered picture is compared with the reference frame picture by differencing, obtaining a frame difference image.
Then, binarization is applied to the frame difference image, obtaining a frame difference binary image.
Afterwards, the intersection of all the frame difference binary images is extracted, obtaining the subject region of the reference frame picture.
Finally, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture are extracted, obtaining the subject region of each registered picture.
In a specific application, the subject extraction module may be responsible for extracting the subject part of each frame from the registered pictures; the structural block diagram of this module is shown in FIG. 5.
An example implementation process in a specific application is as follows:
Step 31: the frame difference unit compares each registered picture with the reference frame by differencing. For example, the color difference between the registered picture W_i and the reference frame F_r is computed to obtain the frame difference image; the formula can be expressed as:
DIFF_i(x, y) = abs(W_i(x, y) - F_r(x, y));
where DIFF_i(x, y) denotes the pixel value at coordinate (x, y) of the frame difference image. The magnitude of a pixel value in the frame difference image represents the magnitude of the color difference between the registered picture and the reference frame at that position.
Then, the frame difference image is binarized to obtain the frame difference binary image; the formula is:
T_i(x, y) = 1 if DIFF_i(x, y) > θ, else 0;
where θ is a preset threshold and T_i(x, y) is the pixel value of the frame difference binary image at coordinate (x, y): a value of 1 indicates that the registered picture differs from the reference frame there, and 0 indicates no difference.
Step 32: the reference frame subject extraction unit obtains the subject region of the reference frame F_r from the frame difference binary images T_1, T_2, ..., T_{n-1}, as follows:
Mask_r(x, y) = 1 if T_i(x, y) = 1 for all i, else 0;
Because the subject position differs from photo to photo, when the reference frame is differenced against every other frame, the subject part of the reference frame is certain to differ from each of the other frames. The intersection of all frame difference binary images is therefore taken as the reference frame subject.
Step 33: the connected region extraction unit is responsible for extracting the subject regions of every frame other than the reference frame, obtaining the connected regions.
First, the subject region of the reference frame is removed from the frame difference binary images T_1, T_2, ..., T_{n-1}:
T'_i = T_i ∩ ¬Mask_r;
The purpose is that the processed frame difference binary images T'_1, T'_2, ..., T'_{n-1} retain only the subject parts of the registered pictures.
Then, the binary images T'_1, T'_2, ..., T'_{n-1} are labeled: adjacent pixels with value 1 are marked as one region, and each region receives an independent number to distinguish it from the others.
Algorithms for connected-region labeling include the two-pass traversal method, the seeded region growing method, and so on. Labeling produces a label map L_i, in which the value of each pixel indicates which connected region of T'_i that pixel belongs to; for example, if pixel (x, y) lies in the j-th region of T'_i, then L_i(x, y) = j.
Next, a template image is generated for each connected region, as follows:
Mask_ij(x, y) = 1 if L_i(x, y) = j, else 0.
Step S104: compositing the subject regions of all the pictures.
Finally, the subject regions of all registered pictures are composited onto the subject region of the reference frame picture.
Specifically, having obtained the subject of every frame, the picture synthesis module composites the subject regions of the registered pictures W_1, W_2, ..., W_{n-1} into the reference picture.
The synthesis order may be W_1, W_2, ..., W_{n-1} or W_{n-1}, W_{n-2}, ..., W_1. Different synthesis orders produce different occlusion relations between subjects, because at the same position the subject composited later covers the subject composited earlier.
The composite picture is I_fusion, initialized to F_r. Suppose the current subject template to be composited is Mask_ij; then after this synthesis step:
I_fusion(x, y) = Mask_ij(x, y) · W_i(x, y) + (1 - Mask_ij(x, y)) · I_fusion(x, y);
After the subjects of all frames have been composited in turn, the final composite picture I_fusion is obtained.
Subsequently, according to the user's needs, the composite picture may be compressed, saved, displayed, or sent to a network.
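Putting steps S102 to S104 together on already-registered grayscale pictures (registration and the common-region crop are assumed to be done beforehand; all function names and the synthetic data are ours, offered only as a sketch of the flow described above):

```python
from collections import deque

import numpy as np

def synthesize(F_r, registered, theta=30):
    """End-to-end sketch: frame difference -> binarize -> remove the
    reference-frame subject -> per-region templates -> paste onto F_r."""
    T = [np.abs(W.astype(np.int32) - F_r.astype(np.int32)) > theta for W in registered]
    mask_r = np.logical_and.reduce(T)          # reference-frame subject Mask_r
    fused = F_r.astype(np.float64).copy()
    for W, t in zip(registered, T):
        t = t & ~mask_r                        # T'_i: drop the reference subject
        labels = _label(t)
        for j in range(1, labels.max() + 1):   # one template Mask_ij per region
            m = (labels == j).astype(np.float64)
            fused = m * W + (1.0 - m) * fused
    return fused

def _label(binary):
    """4-connectivity labeling via BFS flood fill (0 = background)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels
```

In this sketch the reference frame carries no subject of its own; with overlapping subjects in all frames, Mask_r would be non-empty and the reference subject would be kept in place automatically.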
With the above scheme, this embodiment automatically finds the subject regions in a plurality of pictures and composites them into one photo, so that the user only needs to take multiple photos with different subjects in the same scene and the device automatically completes the multi-subject composition, saving a large amount of manual operation.
It should be noted that, as used herein, the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises it.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and comprising instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structural or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A picture synthesizing device, characterized by comprising:
a picture acquisition module, configured to acquire a plurality of pictures;
a picture registration module, configured to perform feature registration on the plurality of pictures to obtain a common region of the plurality of pictures;
a subject extraction module, configured to extract the subject region of each picture from the common region of the plurality of pictures; and
a picture synthesis module, configured to composite the subject regions of all the pictures.
2. The device according to claim 1, characterized in that the picture registration module comprises:
a reference frame choosing unit, configured to choose one picture from the plurality of pictures as a reference frame picture, the other pictures serving as frames to be registered;
a registration parameter computing unit, configured to perform, with the reference frame picture as the benchmark, feature registration on each frame to be registered and calculate the registration parameters of each frame to be registered;
a picture transforming unit, configured to transform each frame to be registered according to its registration parameters so that it matches the reference frame picture, obtaining registered pictures; and
a common region extraction unit, configured to extract the intersection of the regions of all registered pictures to obtain the common region of all registered pictures.
3. The device according to claim 2, characterized in that
the registration parameter computing unit is further configured to choose a transformation model and a registration feature, and, according to the chosen transformation model and registration feature, perform feature registration on each frame to be registered with the reference frame picture as the benchmark and calculate the registration parameters of each frame to be registered.
4. The device according to claim 3, characterized in that the subject extraction module comprises:
a frame difference unit, configured to compare, based on the common region of the registered pictures, each registered picture with the reference frame picture by differencing to obtain the frame difference image of each registered picture, and to binarize the frame difference image to obtain a frame difference binary image;
a reference frame subject extraction unit, configured to extract the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture; and
a connected region extraction unit, configured to extract, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture to obtain the subject region of each registered picture.
5. The device according to claim 4, characterized in that
the picture synthesis module is further configured to composite the subject regions of all registered pictures onto the subject region of the reference frame picture.
6. The device according to any one of claims 1-5, characterized in that the device further comprises:
a picture output module, configured to process and/or externally send the composite picture.
7. A picture synthesizing method, characterized by comprising:
acquiring a plurality of pictures;
performing feature registration on the plurality of pictures to obtain a common region of the plurality of pictures;
extracting the subject region of each picture from the common region of the plurality of pictures; and
compositing the subject regions of all the pictures.
8. The method according to claim 7, wherein performing feature registration on the plurality of pictures to obtain the common region of the plurality of pictures comprises:
selecting one picture from the plurality of pictures as a reference frame picture, with the other pictures serving as frames to be registered;
taking the reference frame picture as the reference, performing feature registration on each frame to be registered and calculating a registration parameter for each frame to be registered;
transforming each frame to be registered according to its registration parameter so that it matches the reference frame picture, thereby obtaining the registered pictures;
taking the intersection of the regions of all the registered pictures to obtain the common region of all the registered pictures.
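The steps of claim 8 can be sketched with the simplest possible transformation model, a per-frame integer translation. This is a minimal NumPy sketch under stated assumptions: the functions `apply_translation` and `common_region`, and the use of a validity mask to represent each registered picture's covered area, are illustrative choices, not part of the claim.

```python
import numpy as np

def apply_translation(img, dx, dy):
    """Warp `img` by an integer translation (dx, dy); return the warped
    image and a validity mask marking pixels covered by the source."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    mask = np.zeros((h, w), dtype=bool)
    x0, x1 = max(0, dx), min(w, w + dx)   # destination column range
    y0, y1 = max(0, dy), min(h, h + dy)   # destination row range
    out[y0:y1, x0:x1] = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    mask[y0:y1, x0:x1] = True
    return out, mask

def common_region(frames, params):
    """Warp each frame to the reference using its (dx, dy) registration
    parameter, then intersect the validity masks (last step of claim 8)."""
    warped, masks = [], []
    for img, (dx, dy) in zip(frames, params):
        w_img, m = apply_translation(img, dx, dy)
        warped.append(w_img)
        masks.append(m)
    return warped, np.logical_and.reduce(masks)
```

The intersection of the masks is exactly the "common region of all the registered pictures": the area where every frame, after warping, still has valid pixel data.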
9. The method according to claim 8, wherein taking the reference frame picture as the reference, performing feature registration on each frame to be registered and calculating the registration parameter of each frame to be registered comprises:
selecting a transformation model and registration features;
according to the selected transformation model and registration features, taking the reference frame picture as the reference, performing feature registration on each frame to be registered and calculating the registration parameter of each frame to be registered.
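Claim 9 leaves the transformation model and registration features open. One common concrete choice, shown here purely as an illustrative assumption, is a translation model whose parameter is estimated from matched feature-point pairs; for a translation, the least-squares estimate is simply the mean displacement. (In practice an affine or homography model fitted to SIFT- or ORB-style feature matches would play the same role.)

```python
import numpy as np

def estimate_translation(ref_points, frame_points):
    """Estimate a translation registration parameter (dx, dy) as the
    least-squares (i.e. mean) displacement of matched feature points."""
    d = np.asarray(ref_points, dtype=float) - np.asarray(frame_points, dtype=float)
    dx, dy = d.mean(axis=0)
    return dx, dy
```
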
10. The method according to claim 9, wherein extracting the subject region of each picture from the common region of the plurality of pictures comprises:
based on the common region of the registered pictures, comparing each registered picture with the reference frame picture to obtain a frame difference image of each registered picture;
binarizing each frame difference image to obtain a frame difference binary image;
taking the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture;
extracting, from the frame difference binary image of each registered picture, the corresponding connected region to obtain the subject region of that registered picture.
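The frame-difference steps of claim 10 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function name `subject_regions`, the fixed difference threshold, and restricting the comparison to the common region via a boolean mask are all illustrative choices.

```python
import numpy as np

def subject_regions(reference, registered, common, thresh=30):
    """Frame-difference subject extraction (claim 10 sketch):
    difference each registered picture against the reference inside the
    common region, binarize with an illustrative threshold, then
    intersect the binary maps to get the reference frame's subject region."""
    binaries = []
    for img in registered:
        # signed arithmetic avoids uint8 wrap-around in the difference
        diff = np.abs(img.astype(np.int32) - reference.astype(np.int32))
        binaries.append((diff > thresh) & common)
    ref_subject = np.logical_and.reduce(binaries)
    return ref_subject, binaries
```

A pixel that differs from the reference in every frame belongs to the subject of the reference frame itself; each per-frame binary map then marks that frame's subject candidates. The final per-frame step of the claim, extracting the corresponding connected region, could be done with a connected-component labeler such as `scipy.ndimage.label` (not shown here).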
CN201510845403.0A 2015-11-26 2015-11-26 Picture synthesis method and device Active CN105488756B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510845403.0A CN105488756B (en) 2015-11-26 2015-11-26 Picture synthesis method and device
PCT/CN2016/102847 WO2017088618A1 (en) 2015-11-26 2016-10-21 Picture synthesis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510845403.0A CN105488756B (en) 2015-11-26 2015-11-26 Picture synthesis method and device

Publications (2)

Publication Number Publication Date
CN105488756A true CN105488756A (en) 2016-04-13
CN105488756B CN105488756B (en) 2019-03-29

Family

ID=55675721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510845403.0A Active CN105488756B (en) 2015-11-26 2015-11-26 Picture synthetic method and device

Country Status (2)

Country Link
CN (1) CN105488756B (en)
WO (1) WO2017088618A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017088618A1 (en) * 2015-11-26 2017-06-01 Nubia Technology Co., Ltd. Picture synthesis method and device
CN105915796A (en) * 2016-05-31 2016-08-31 Nubia Technology Co., Ltd. Electronic aperture shooting method and terminal
WO2017206656A1 (en) * 2016-05-31 2017-12-07 Nubia Technology Co., Ltd. Image processing method, terminal, and computer storage medium
CN106097284A (en) * 2016-07-29 2016-11-09 Nubia Technology Co., Ltd. Night scene image processing method and mobile terminal
WO2018019128A1 (en) * 2016-07-29 2018-02-01 Nubia Technology Co., Ltd. Method for processing night scene image and mobile terminal
CN106097284B (en) * 2016-07-29 2019-08-30 Nubia Technology Co., Ltd. Night scene image processing method and mobile terminal
CN109544519A (en) * 2018-11-08 2019-03-29 Shunde Polytechnic Picture synthesis method and picture synthesis device based on a detection device
US11830235B2 (en) 2019-01-09 2023-11-28 Samsung Electronics Co., Ltd Image optimization method and system based on artificial intelligence

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070569B (en) * 2019-04-29 2023-11-10 Tibet Zhaoxun Technology Engineering Co., Ltd. Registration method and device of terminal image, mobile terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2429204A2 (en) * 2010-09-13 2012-03-14 LG Electronics Mobile terminal and 3D image composing method thereof
CN104135609A (en) * 2014-06-27 2014-11-05 Xiaomi Inc. A method and a device for assisting in photographing, and a terminal
CN104243819A (en) * 2014-08-29 2014-12-24 Xiaomi Inc. Photo acquiring method and device
CN105100642A (en) * 2015-07-30 2015-11-25 Nubia Technology Co., Ltd. Image processing method and apparatus
CN105100775A (en) * 2015-07-29 2015-11-25 Nubia Technology Co., Ltd. Image processing method and apparatus, and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101954192B1 (en) * 2012-11-15 2019-03-05 LG Electronics Inc. Array camera, mobile terminal, and method for operating the same
KR20140122054A (en) * 2013-04-09 2014-10-17 Samsung Electronics Co., Ltd. Converting device for converting a 2-dimensional image into a 3-dimensional image and method for controlling the same
CN104796625A (en) * 2015-04-21 2015-07-22 Nubia Technology Co., Ltd. Picture synthesizing method and device
CN105488756B (en) * 2015-11-26 2019-03-29 Nubia Technology Co., Ltd. Picture synthesis method and device

Also Published As

Publication number Publication date
CN105488756B (en) 2019-03-29
WO2017088618A1 (en) 2017-06-01

Similar Documents

Publication Publication Date Title
CN105488756A (en) Picture synthesizing method and device
CN105227837A (en) A kind of image combining method and device
CN105245774A (en) Picture processing method and terminal
CN105303543A (en) Image enhancement method and mobile terminal
CN104954689A (en) Method and shooting device for acquiring photo through double cameras
CN104835165A (en) Image processing method and image processing device
CN105100775A (en) Image processing method and apparatus, and terminal
CN105141833A (en) Terminal photographing method and device
CN106780634A (en) Picture dominant tone extracting method and device
CN105338242A (en) Image synthesis method and device
CN105100642B (en) Image processing method and device
CN105045509A (en) Picture editing apparatus and method
CN105681582A (en) Control color adjusting method and terminal
CN105160628A (en) Method and device for acquiring RGB data
CN105095790A (en) Hidden object view method and device
CN104968033A (en) Terminal network processing method and apparatus
CN105306787A (en) Image processing method and device
CN104917965A (en) Shooting method and device
CN105100673A (en) Voice over long term evolution (VoLTE) based desktop sharing method and device
CN105162978A (en) Method and device for photographic processing
CN105187709A (en) Remote photography implementing method and terminal
CN106506965A (en) A kind of image pickup method and terminal
CN105554393A (en) Mobile terminal, photographing device and method for photographing pictures
CN105242483A (en) Focusing realization method and device and shooting realization method and device
CN106021292B (en) A kind of device and method for searching picture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant