CN105430266A - Image processing method based on multi-scale transform and terminal - Google Patents

Image processing method based on multi-scale transform and terminal

Info

Publication number
CN105430266A
CN105430266A (application CN201510864122.XA)
Authority
CN
China
Prior art keywords
image
multiple images
registration process
individual features
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510864122.XA
Other languages
Chinese (zh)
Inventor
戴向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510864122.XA priority Critical patent/CN105430266A/en
Publication of CN105430266A publication Critical patent/CN105430266A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method based on multi-scale transform and a terminal. The method comprises: for a same target object, acquiring a plurality of images corresponding to different focusing distances; performing pixel registration on feature points in the plurality of images to obtain a plurality of pixel-aligned images; and fusing the plurality of aligned images to obtain a sharp full depth-of-field (all-in-focus) image. By adopting the method, a full depth-of-field image can be obtained conveniently and quickly.

Description

Image processing method based on multi-scale transform, and terminal
Technical field
The present invention relates to image processing technology, and in particular to an image processing method based on multi-scale transform and a terminal.
Background technology
With the increasingly intelligent development of terminals, intelligent terminals are used by more and more users. When using an intelligent terminal, a user can install multiple applications on the terminal device and, for example through a photographing application, perform image acquisition and image processing to obtain a finally imaged picture. Taking a mobile phone as an example of the terminal, the finally imaged picture one wishes to obtain is a full depth-of-field (all-in-focus) image, in which the depth-of-field range extends from a very close range to infinity and every part of the image is in sharp focus. However, since the focus adjustment range of the mobile phone is limited, the depth-of-field range of the captured image is not large enough; as a result, a full depth-of-field image cannot be obtained easily and the sharpness of the obtained image is insufficient.
Summary of the invention
The embodiments of the present invention are expected to provide an image processing method based on multi-scale transform and a terminal, which solve at least the problems existing in the prior art, so that a full depth-of-field image can be obtained conveniently and quickly.
An image processing method based on multi-scale transform, the method comprising:
for a same target object, acquiring multiple images corresponding to different focusing distances;
performing pixel registration on feature points in the multiple images to obtain multiple pixel-registered images; and
performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
In an embodiment of the present invention, the acquiring, for a same target object, multiple images corresponding to different focusing distances comprises:
selecting multiple different focusing distances within a preset focus adjustment range; and
at a same distance from the target object, acquiring one image of the same target object for each of the multiple different focusing distances.
In an embodiment of the present invention, the performing pixel registration on feature points in the multiple images to obtain multiple pixel-registered images comprises:
obtaining the multiple images and extracting the corresponding feature points in each image; and
performing pixel registration on the corresponding feature points in the multiple images by an image registration based method to obtain the multiple registered images.
In an embodiment of the present invention, the performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image comprises:
obtaining the multiple registered images and extracting the corresponding feature points of each of the registered images; and
transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method, performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image, and transforming the fused image back from the frequency domain to the time domain according to a multi-scale inverse transform method to obtain the final full depth-of-field image.
In an embodiment of the present invention, the transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method and performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image comprises:
decomposing, by means of a multi-scale transform tool, the corresponding feature points in the multiple registered images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and performing pixel fusion of the corresponding feature points on the high-frequency part and the low-frequency part with different fusion methods respectively to obtain the fused image.
A terminal, the terminal comprising:
an acquisition unit configured to acquire, for a same target object, multiple images corresponding to different focusing distances;
a first processing unit configured to perform pixel registration on feature points in the multiple images to obtain multiple pixel-registered images; and
a second processing unit configured to perform image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
In an embodiment of the present invention, the acquisition unit further comprises:
a selecting subunit configured to select multiple different focusing distances within a preset focus adjustment range; and
an acquiring subunit configured to acquire, at a same distance from the target object, one image of the same target object for each of the multiple different focusing distances.
In an embodiment of the present invention, the first processing unit further comprises:
a first obtaining subunit configured to obtain the multiple images and extract the corresponding feature points in each image; and
a pixel alignment subunit configured to perform pixel registration on the corresponding feature points in the multiple images by an image registration based method to obtain the multiple registered images.
In an embodiment of the present invention, the second processing unit further comprises:
a second obtaining subunit configured to obtain the multiple registered images and extract the corresponding feature points of each of the registered images; and
a transform subunit configured to transform the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method, perform pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image, and transform the fused image back from the frequency domain to the time domain according to a multi-scale inverse transform method to obtain the final full depth-of-field image.
In an embodiment of the present invention, the transform subunit is further configured to:
decompose, by means of a multi-scale transform tool, the corresponding feature points in the multiple registered images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and perform pixel fusion of the corresponding feature points on the high-frequency part and the low-frequency part with different fusion methods respectively to obtain the fused image.
The image processing method based on multi-scale transform of the embodiments of the present invention comprises: for a same target object, acquiring multiple images corresponding to different focusing distances; performing pixel registration on feature points in the multiple images to obtain multiple pixel-registered images; and performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image. By adopting the embodiments of the present invention, a full depth-of-field image can be obtained conveniently and quickly.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware configuration of an optional mobile terminal implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic flowchart of an implementation of embodiment one of the present invention;
Fig. 4 is a schematic flowchart of an implementation of embodiment two of the present invention;
Fig. 5 is a schematic structural diagram of embodiment three of the present invention;
Fig. 6 is a schematic structural diagram of the hardware composition of a terminal applying an embodiment of the present invention;
Fig. 7 is a schematic flowchart of the method in an application scenario of an embodiment of the present invention;
Fig. 8 is a camera depth-of-field model diagram;
Fig. 9 is a schematic diagram of the image registration effect in an application scenario of an embodiment of the present invention;
Fig. 10 is a multi-scale analysis image fusion flowchart of an application scenario of an embodiment of the present invention;
Figs. 11a-11c are schematic diagrams of full depth-of-field image synthesis in an application scenario of an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of the terminal composition corresponding to Fig. 7.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit the present invention.
A terminal implementing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used for denoting elements are only intended to facilitate the explanation of the embodiments of the present invention and have no specific meaning in themselves. Therefore, "module" and "part" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminals described in the embodiments of the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable media player (PMP) and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer. In the following, the terminal is assumed to be a mobile terminal. However, it will be understood by those skilled in the art that, except for elements particularly used for the purpose of mobility, the structure according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing the embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an image acquisition unit 121, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, an image registration unit 181, an image fusion unit 182, a power supply unit 190 and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112 and a wireless Internet module 113.
The broadcast receiving module 111 receives broadcast signals and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast associated information, or a server that receives previously generated broadcast signals and/or broadcast associated information and transmits them to a terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals and the like, and may further include broadcast signals combined with TV or radio broadcast signals. The broadcast associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast associated information may exist in various forms, for example in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems. In particular, the broadcast receiving module 111 may receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO) and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be configured to be suitable for various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcasting systems. The broadcast signals and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g. an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include wireless local area network (WLAN, Wi-Fi), wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA) and the like.
The image acquisition unit 121 is configured to acquire multiple different images of a same scene and may be a camera. The camera processes image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode. The processed image frames go through the pixel registration processing of the image registration unit 181 and the pixel fusion processing of the image fusion unit 182 to finally obtain a fused image, and this fused image may be displayed on the display unit 151. The image frames processed by the camera may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras may be provided depending on the structure of the mobile terminal.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g. a touch-sensitive component that detects changes in resistance, pressure, capacitance and the like caused by being touched), a scroll wheel, a joystick and the like. In particular, when the touch pad is superposed on the display unit 151 as a layer, a touch screen may be formed.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM) and the like. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card; therefore, the identification device may be connected to the mobile terminal 100 via a port or other connecting means. The interface unit 170 may be used to receive input (e.g. data information, electric power, etc.) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which electric power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or electric power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g. audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner. The output unit 150 may include the display unit 151, an audio output module 152 and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (such as text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, or a UI or GUI showing the video or image and the related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are superposed on each other as layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display and a three-dimensional (3D) display. Some of these displays may be configured to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a transparent organic light-emitting diode (TOLED) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display means); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a mode such as a call signal receiving mode, a call mode, a recording mode, a speech recognition mode or a broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a particular function performed by the mobile terminal 100 (e.g. a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer and the like.
The memory 160 may store software programs of the processing and control operations performed by the controller 180 and the like, or may temporarily store data that has been output or is to be output (e.g. a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g. SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc and the like. Moreover, the mobile terminal 100 may cooperate, through a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 may be configured inside the controller 180 or may be configured separately from the controller 180. The controller 180 may perform pattern recognition processing so as to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate electric power required to operate the various elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For hardware implementation, the embodiments described herein may be implemented using at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. Software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals will be described as an example. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 may be configured to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
A communication system in which the mobile terminal according to the embodiments of the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by the communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM) and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to Fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be configured according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be appreciated that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), and each sector is covered by an omni-directional antenna or by an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each of which has a specific frequency spectrum (e.g. 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be called a CDMA channel. A BS 270 may also be called a base transceiver station (BTS) or other equivalent terms. In such a case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be called a "cell site". Alternatively, the respective sectors of a particular BS 270 may be called a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to the GPS tracking technology, other technologies capable of tracking the position of the mobile terminal may be used. In addition, at least one GPS satellite 300 may optionally or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging and other types of communication. Each reverse link signal received by a particular base station 270 is processed by that particular BS 270, and the resulting data is forwarded to the relevant BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handover procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides extra routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware configuration and communication system, the various embodiments of the method of the present invention are proposed.
Embodiment one:
The image processing method based on multi-scale transform of the embodiment of the present invention, as shown in Fig. 3, comprises the following steps:
Step 301: for a same target object, acquire multiple images corresponding to different focusing distances.
Step 302: perform pixel registration on feature points in the multiple images to obtain multiple pixel-registered images.
Step 303: perform image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
With the embodiment of the present invention, the images in step 301 are original images. For example, to photograph a scene, several different focusing distances are selected within the value range of the focusing distance and the scene is photographed at the same position, so that multiple images are obtained. Taking a terminal with a zoom lens as an example, the value range of the focusing distance is the variation range from wide angle to narrow angle (short focal length to long focal length), i.e. the range between the maximum and minimum focal lengths of the zoom lens. The images in step 302 are the images obtained after the pixel registration processing. By acquiring multiple sample targets (such as multiple original images of a same scene) and performing pixel alignment on them, subsequent sharp imaging becomes possible; after that, pixel fusion is performed according to the multi-scale transform method, so that a full depth-of-field image can be obtained conveniently and quickly, the depth of field in the image extends from a very close range to infinity, and every part of the image is in sharp focus.
Embodiment two:
The image processing method based on multi-scale transform of the embodiment of the present invention, as shown in Fig. 4, comprises the following steps:
Step 401: for a same target object, select multiple different focusing distances within a preset focus adjustment range.
Step 402: at a same distance from the target object, acquire one image of the same target object for each of the multiple different focusing distances.
Step 403: obtain the multiple images and extract the corresponding feature points in each image.
Step 404: perform pixel registration on the corresponding feature points in the multiple images by an image registration based method to obtain the multiple registered images.
Step 405: perform image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
In a practical application of the embodiment of the present invention, the images in step 402 are original images. For example, to photograph a scene, several different focusing distances are selected within the value range of the focusing distance and the scene is photographed at the same position, so that multiple original images are obtained. Taking a terminal with a zoom lens as an example, the value range of the focusing distance is the variation range from wide angle to narrow angle (short focal length to long focal length), i.e. the range between the maximum and minimum focal lengths of the zoom lens. The images in step 404 are the images obtained after the pixel registration processing.
With the embodiment of the present invention, for a same target object multiple different focusing distances are selected within a preset focus adjustment range, and at a same distance from the target object one image of the same target object is acquired for each of the multiple different focusing distances, thereby obtaining multiple sample targets (such as multiple original images of a same scene). By extracting the corresponding feature points in each image and performing pixel registration on the corresponding feature points in the multiple images according to the image registration method, the multiple registered images are obtained; this pixel alignment makes subsequent sharp imaging possible. After that, pixel fusion is performed according to the multi-scale transform method, so that a full depth-of-field image can be obtained conveniently and quickly, the depth of field in the image extends from a very close range to infinity, and every part of the image is in sharp focus.
Based on the above embodiments one and two, in the image processing method based on multi-scale transform of the embodiments of the present invention, the performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image comprises: obtaining the multiple registered images and extracting the corresponding feature points of each of the registered images; transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to the multi-scale transform method, performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image, and transforming the fused image back from the frequency domain to the time domain according to the multi-scale inverse transform method to obtain the final full depth-of-field image. In this way, through the multi-scale transform and inverse transform methods, the fused image is reconstructed by the inverse transform, and a sharp full depth-of-field fused image is finally obtained.
Based on the above embodiments one and two, in the image processing method based on multi-scale transform of the embodiments of the present invention, the transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to the multi-scale transform method and performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image comprises: decomposing, by means of a multi-scale transform tool, the corresponding feature points in the multiple registered images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and performing pixel fusion of the corresponding feature points on the high-frequency part and the low-frequency part with different fusion methods respectively to obtain the fused image.
Embodiment three:
A terminal of the embodiment of the present invention, as shown in Fig. 5, comprises:
an acquisition unit 11 configured to acquire, for a same target object, multiple images corresponding to different focusing distances;
a first processing unit 12 configured to perform pixel registration on feature points in the multiple images to obtain multiple pixel-registered images; and
a second processing unit 13 configured to perform image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
In an implementation of the embodiment of the present invention, the acquisition unit further comprises: a selecting subunit configured to select multiple different focusing distances within a preset focus adjustment range; and an acquiring subunit configured to acquire, at a same distance from the target object, one image of the same target object for each of the multiple different focusing distances.
In an implementation of the embodiment of the present invention, the first processing unit further comprises: a first obtaining subunit configured to obtain the multiple images and extract the corresponding feature points in each image; and a pixel alignment subunit configured to perform pixel registration on the corresponding feature points in the multiple images by an image registration based method to obtain the multiple registered images.
In an implementation of the embodiment of the present invention, the second processing unit further comprises: a second obtaining subunit configured to obtain the multiple registered images and extract the corresponding feature points of each of the registered images; and a transform subunit configured to transform the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method, perform pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image, and transform the fused image back from the frequency domain to the time domain according to a multi-scale inverse transform method to obtain the final full depth-of-field image.
In an implementation of the embodiment of the present invention, the transform subunit is further configured to: decompose, by means of a multi-scale transform tool, the corresponding feature points in the multiple registered images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and perform pixel fusion of the corresponding feature points on the high-frequency part and the low-frequency part with different fusion methods respectively to obtain the fused image.
It should be pointed out here that the terminal mentioned in the above embodiments and their implementations may be a smart phone, a PC, a PAD, a tablet computer or a laptop computer, and is not limited to the description here. The image information processing unit (electronic device), whether its basic functions are integrated into one unit or split into separate units, includes at least a database for storing data and a processor for data processing, or includes a storage medium provided in a server or a separately provided storage medium.
The processor for data processing may, when performing processing, be implemented by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP) or a field programmable gate array (FPGA); the storage medium contains operation instructions, which may be computer-executable code, and the steps in the flow of the image processing method of the above embodiments of the present invention are implemented through the operation instructions.
An example of the terminal mentioned in the above embodiments and their implementations, as a hardware entity S11, is shown in Fig. 6. The terminal comprises a processor 31, a storage medium 32 and at least one external communication interface 33; the processor 31, the storage medium 32 and the external communication interface 33 are all connected via a bus 34.
It should be noted here that the above description relating to the terminal is similar to the above method description and has the same beneficial effects as the method, which are not repeated. For technical details not disclosed in the terminal embodiments of the present invention, please refer to the description of the method embodiments of the present invention.
The embodiments of the present invention are described below with respect to a real-world application scenario:
A specific scenario applying the embodiments of the present invention is a photographing scenario in image acquisition, in which a mobile-platform full depth-of-field photographing method based on multi-scale transform can be implemented; as shown in Fig. 7, it comprises the following steps:
Step S100: according to preset focusing distances, continuously shoot images of a same scene at multiple different focus planes.
The specific implementation of this step includes the following. As shown in the camera depth-of-field model diagram of Fig. 8, the depth of field ΔL is calculated by the following formula (1); the target object is on the left and the user capturing the image is on the right, and beyond the range of ΔL the image becomes blurred. The smaller the focal length f of the lens and the larger the f-number F, the larger the depth-of-field range. Within the depth-of-field range the image is in sharp focus; outside the depth-of-field range the image starts to blur, and the farther away from this range, the more blurred the image. At present, the focal length and aperture adjustment ranges of a mobile phone are limited, so a full depth-of-field image cannot be obtained, that is, a macro object and a distant object in the image cannot both stay in sharp focus at the same time. Therefore, images of a same scene at multiple different focus planes can be shot continuously according to the preset focusing distances. In the images sampled in this way, the pixels of the different depth-of-field ranges are all recorded sharply, all the sharp points in the images can be retained, and the full depth-of-field image can then be obtained through the image fusion method.
\Delta L = \Delta L_1 + \Delta L_2 = \frac{2 f^{2} F \sigma L^{2}}{f^{4} - F^{2} \sigma^{2} L^{2}} \qquad (1)
In formula (1), ΔL denotes the depth of field, ΔL1 the front depth of field, ΔL2 the rear depth of field, f the focal length of the lens, F the f-number, σ the diameter of the circle of confusion, and L the shooting distance.
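By way of illustration only, and not as part of the original disclosure, the following Python sketch evaluates formula (1); the symbol names mirror those defined above, and the numeric values in the example call are arbitrary assumptions chosen only to show the calculation.

```python
# Sketch of formula (1): depth of field ΔL from focal length f, f-number F,
# circle-of-confusion diameter σ (sigma) and shooting distance L.
# All lengths must use the same unit (metres here); the sample values below
# are illustrative assumptions, not figures taken from this description.

def depth_of_field(f, F, sigma, L):
    """Return ΔL = 2·f²·F·σ·L² / (f⁴ − F²·σ²·L²); valid while the denominator is positive."""
    numerator = 2.0 * f**2 * F * sigma * L**2
    denominator = f**4 - F**2 * sigma**2 * L**2
    if denominator <= 0:
        return float("inf")  # the rear depth of field extends to infinity (at or beyond hyperfocal distance)
    return numerator / denominator

if __name__ == "__main__":
    # Assumed example: 4 mm phone lens, f/2.0, 3 µm circle of confusion, subject at 0.5 m.
    print(depth_of_field(f=0.004, F=2.0, sigma=3e-6, L=0.5))
```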
Step S200: perform image registration on the images of the different focus planes, so that the image pixels of the different focus planes are aligned, which facilitates the later fusion.
The specific implementation of this step includes the following. When a handheld mobile device successively shoots images of multiple different focus planes, it is prone to shaking, so before image fusion it is necessary to perform image registration on these images taken at adjacent moments. The main purpose is to align the image pixels of the same scene taken at different moments, which facilitates the later pixel fusion; otherwise the image fusion would show deviation and blurring. The image registration algorithm is a registration method based on image feature points: the images are matched by the registration algorithm so that the image pixels of the same scene taken at different moments are aligned. Its basic process includes: using feature description operators such as SURF and SIFT to find invariant feature points in the images; fitting the spatial transformation matrix of the image to be registered according to the correspondence of the feature points; and transforming the image to be registered with the transformation matrix to obtain the registered image. After registration, all the pixels of the images are aligned and image fusion can be performed.
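As a hedged illustration of this registration flow (not part of the original disclosure), the Python sketch below detects feature points, matches them, fits a spatial transformation matrix with RANSAC and warps the image to be registered. It uses OpenCV's ORB detector as a stand-in for the SURF/SIFT operators named above, which can be substituted where an OpenCV build with those modules is available.

```python
# Minimal sketch of the step S200 registration flow, assuming OpenCV (cv2) and NumPy are installed:
# detect invariant feature points, match them, fit a spatial transform, warp the moving image.
import cv2
import numpy as np

def register_to(reference, moving):
    """Warp `moving` (grayscale uint8) so that its pixels align with `reference`."""
    detector = cv2.ORB_create(nfeatures=2000)   # stand-in for the SURF/SIFT operators mentioned in the text
    k1, d1 = detector.detectAndCompute(reference, None)
    k2, d2 = detector.detectAndCompute(moving, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Spatial transformation matrix fitted from the feature correspondences; RANSAC rejects outlier matches.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```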
Fig. 9 is a schematic diagram of the image registration effect. It can be seen that before registration the pixels of the differently focused images are misaligned and performing image fusion directly would cause blurring, while after registration the pixels are aligned.
Step S300: using a multi-scale transform tool, decompose the registered images of the different focus planes into the frequency-domain space, adopt a gradient-adaptive similarity method to perform image fusion in the frequency-domain space, and then perform a multi-scale inverse transform back to the time-domain space to obtain the final full depth-of-field image.
The specific implementation of this step includes the following. After the image acquisition of step S100 and the image registration flow of step S200, the images of the different focus planes are next synthesized. The main purpose of image fusion is to pick out the sharply focused pixels in each of the different images. The multi-scale analysis of an image decomposes the original image into different frequency components: the low-frequency component mainly contains the average part of the image, while the high-frequency component contains the edge detail part of the image. In an image, the neighborhood of a sharply focused pixel has clearer edge details, that is, more high-frequency content. Therefore, by decomposing the images into high- and low-frequency components and adopting different fusion rules for the different frequency components, the sharp pixels in the images can be retained; multi-scale analysis reconstruction is then performed to obtain the fused image. The multi-scale transform tools that can be adopted include the wavelet transform, the curvelet transform, the contourlet transform and the like.
Fig. 10 is the multi-scale transform image fusion flowchart. It can be seen that during the multi-scale transform, multi-scale analysis is performed first. Specifically, the high-frequency parts (e.g. the high-frequency sub-band coefficients of source image A and source image B) are fused pixel by pixel according to the high-frequency fusion rule, the low-frequency parts (e.g. the low-frequency sub-band coefficients of source image A and source image B) are fused pixel by pixel according to the low-frequency fusion rule, and an inverse transform is then performed to obtain the final fused image. The selection of the fusion rules is the key of the fusion algorithm; a simple fusion rule for the different frequency components is given by the following formula (2):
C_L(i,j) = \frac{I_1^{L}(i,j) + I_2^{L}(i,j)}{2}, \qquad C_H(i,j) = \begin{cases} I_1^{H}(i,j), & \text{if } \lvert I_1^{H}(i,j) \rvert \ge \lvert I_2^{H}(i,j) \rvert \\ I_2^{H}(i,j), & \text{otherwise} \end{cases} \qquad (2)
The fusion rule of formula (2) is fairly simple: I_1 and I_2 are the multi-scale decomposition results of the two images, I^L denotes the low-frequency component and I^H the high-frequency component of an image, C_L is the low-frequency fusion result of a pixel, C_H is the high-frequency fusion result, and (i, j) is the coordinate of a pixel in the image. The general principle is that the low-frequency components are close to each other and are simply averaged, while in the high-frequency component a sharply focused pixel has a larger absolute value, so selecting the high-frequency component with the larger absolute value preserves the sharp points of the image.
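As an illustrative sketch of fusion rule (2) only, and not part of the original disclosure, the code below uses a discrete wavelet transform from the PyWavelets package as the multi-scale transform tool; the choice of wavelet ("db4") and the decomposition level are arbitrary assumptions.

```python
# Sketch of the fusion rule of formula (2), assuming PyWavelets (pywt) and NumPy are installed.
# Low-frequency (approximation) coefficients are averaged; for each high-frequency (detail)
# coefficient, the value with the larger absolute magnitude is kept.
import numpy as np
import pywt

def fuse_multiscale(img1, img2, wavelet="db4", level=3):
    """img1, img2: registered grayscale images as float arrays of identical shape."""
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)

    fused = [(c1[0] + c2[0]) / 2.0]                       # C_L: average the low-frequency part
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)        # C_H: keep the larger-magnitude detail coefficient
            for a, b in ((h1, h2), (v1, v2), (d1, d2))
        ))
    return pywt.waverec2(fused, wavelet)                  # multi-scale inverse transform back to the spatial domain
```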
The above fusion rule based on a single pixel is easily affected by noise, and noise usually appears in the high-frequency component. Here, the idea of taking the larger local energy of a neighborhood block is used to modify the high-frequency fusion rule, as shown in formula (3):
E_1^{H}(i,j) = \sum_{m=-M}^{M} \sum_{n=-N}^{N} \bigl( I_1^{H}(i+m, j+n) \bigr)^{2}, \qquad E_2^{H}(i,j) = \sum_{m=-M}^{M} \sum_{n=-N}^{N} \bigl( I_2^{H}(i+m, j+n) \bigr)^{2} \qquad (3)

C_H(i,j) = \begin{cases} I_1^{H}(i,j), & \text{if } \lvert E_1^{H}(i,j) \rvert \ge \lvert E_2^{H}(i,j) \rvert \\ I_2^{H}(i,j), & \text{otherwise} \end{cases}
In formula (3), M and N are the half-height and half-width of the local neighborhood window of the image, (i, j) is the coordinate of a pixel in the image, and E_1^H(i, j) is the local high-frequency neighborhood energy of pixel (i, j) in image 1 (and likewise E_2^H(i, j) for image 2). Using a local energy block effectively suppresses the influence of noise and improves the fusion effect.
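A minimal sketch of the neighborhood-energy rule of formula (3) is given below, again as an illustration rather than part of the original disclosure; it can replace the per-coefficient magnitude comparison in the previous sketch, and the window half-sizes M and N are arbitrary assumptions.

```python
# Sketch of formula (3): compare local neighborhood energies of the high-frequency
# coefficients instead of single-pixel magnitudes, which suppresses noise.
# Assumes NumPy and SciPy are available; M and N set the half-height/half-width of the window.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq_local_energy(h1, h2, M=1, N=1):
    """h1, h2: one pair of high-frequency coefficient arrays from the two registered images."""
    window = (2 * M + 1, 2 * N + 1)
    # E^H(i, j): sum of squared coefficients over the local window. uniform_filter returns the
    # local mean, so multiplying by the window area gives the sum as written in formula (3).
    e1 = uniform_filter(h1.astype(float) ** 2, size=window) * (window[0] * window[1])
    e2 = uniform_filter(h2.astype(float) ** 2, size=window) * (window[0] * window[1])
    return np.where(e1 >= e2, h1, h2)   # C_H: keep the coefficient with the larger local energy
```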
Figs. 11a-11c are schematic diagrams of the full depth-of-field image synthesis. In Fig. 11a on the left, the focus is on the flower close at hand; in Fig. 11b in the middle, the focus is on the house in the distance; Fig. 11c on the right is the final synthesis result, and it can be seen that in the synthesized image both the nearby flower and the distant house are very sharp.
Corresponding to the above method example, a specific scenario applying the embodiments of the present invention is a photographing scenario in image acquisition, in which a mobile-platform full depth-of-field photographing terminal based on multi-scale transform can be implemented; as shown in Fig. 12, it comprises: an image acquisition unit 41, which obtains the images of the different focus planes; an image registration unit 42, which aligns the images of the different focus planes; and an image fusion unit 43, which performs image synthesis using the multi-scale analysis tool.
It will be appreciated by those skilled in the art that the functions implemented by the units in the above terminal can be understood with reference to the related description of the aforementioned image processing method.
It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. Without more restrictions, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only the preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and the accompanying drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise be included in the patent protection scope of the present invention.

Claims (10)

1. An image processing method based on multi-scale transform, characterized in that the method comprises:
for a same target object, acquiring multiple images corresponding to different focusing distances;
performing pixel registration on feature points in the multiple images to obtain multiple pixel-registered images; and
performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image.
2. The method according to claim 1, characterized in that the acquiring, for a same target object, multiple images corresponding to different focusing distances comprises:
selecting multiple different focusing distances within a preset focus adjustment range; and
at a same distance from the target object, acquiring one image of the same target object for each of the multiple different focusing distances.
3. The method according to claim 1 or 2, characterized in that the performing pixel registration on feature points in the multiple images to obtain multiple pixel-registered images comprises:
obtaining the multiple images and extracting the corresponding feature points in each image; and
performing pixel registration on the corresponding feature points in the multiple images by an image registration based method to obtain the multiple registered images.
4. The method according to claim 1 or 2, characterized in that the performing image fusion on the multiple registered images to obtain a sharply imaged full depth-of-field image comprises:
obtaining the multiple registered images and extracting the corresponding feature points of each of the registered images; and
transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method, performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image, and transforming the fused image back from the frequency domain to the time domain according to a multi-scale inverse transform method to obtain the final full depth-of-field image.
5. The method according to claim 4, wherein the transforming the corresponding feature points in the multiple registered images from the time domain to the frequency domain according to a multi-scale transform method and performing pixel fusion of the corresponding feature points in the frequency domain to obtain a fused image comprises:
decomposing, by means of a multi-scale transform tool, the corresponding feature points in the multiple registered images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and performing pixel fusion of the corresponding feature points on the high-frequency part and the low-frequency part with different fusion methods respectively to obtain the fused image.
6. A terminal, characterized in that the terminal comprises:
an acquisition unit, configured to acquire, for a same target object, multiple images corresponding to different focusing distances;
a first processing unit, configured to perform pixel alignment processing on feature points in the multiple images to obtain multiple pixel-aligned images; and
a second processing unit, configured to perform image fusion processing on the multiple aligned images to obtain a clearly imaged full depth-of-field image.
7. The terminal according to claim 6, characterized in that the acquisition unit further comprises:
a selection subunit, configured to select multiple different focusing distances within a preset focus adjustment range; and
an acquisition subunit, configured to acquire, at a same distance from the target object, one image of the same target object for each of the multiple different focusing distances.
8. The terminal according to claim 6 or 7, characterized in that the first processing unit further comprises:
a first obtaining subunit, configured to obtain the multiple images and extract corresponding feature points from each image respectively; and
a pixel alignment subunit, configured to perform pixel alignment processing on the corresponding feature points in the multiple images by an image-registration-based method to obtain the multiple aligned images.
9. The terminal according to claim 6 or 7, characterized in that the second processing unit further comprises:
a second obtaining subunit, configured to obtain the multiple aligned images and extract corresponding feature points from each of the aligned images respectively; and
a transform subunit, configured to transform the corresponding feature points in the multiple aligned images from the time domain to the frequency domain according to a multi-scale transform method, perform pixel fusion processing of the corresponding feature points in the frequency domain to obtain a fused image, and transform the fused image from the frequency domain back to the time domain according to a multi-scale inverse transform method to obtain the final full depth-of-field image.
10. The terminal according to claim 9, wherein the transform subunit is further configured to:
decompose, with a multi-scale transform tool, the corresponding feature points in the multiple aligned images into different frequency parts according to a high-frequency index and a low-frequency index to obtain a high-frequency part and a low-frequency part, and apply different fusion methods to the high-frequency part and the low-frequency part respectively to perform the pixel fusion processing of the corresponding feature points, thereby obtaining the fused image.
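Read together, claims 1 to 5 (and their apparatus counterparts 6 to 10) describe a capture, align and fuse pipeline. A minimal end-to-end sketch, assuming the illustrative helper functions above are defined in the same module and that fusion is performed on grayscale copies of the aligned images, might be:

```python
# End-to-end sketch of the claimed pipeline, built from the illustrative helpers above.
import cv2
import numpy as np

if __name__ == "__main__":
    stack = capture_focus_stack()                              # claim 2: focus bracketing
    aligned = align_to_reference(stack)                        # claim 3: pixel alignment
    grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in aligned]
    fused = fuse_multiscale(grays)                             # claims 4-5: multi-scale fusion
    cv2.imwrite("full_depth_of_field.png",
                np.clip(fused, 0, 255).astype(np.uint8))       # full depth-of-field result
```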
CN201510864122.XA 2015-11-30 2015-11-30 Image processing method based on multi-scale transform and terminal Pending CN105430266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510864122.XA CN105430266A (en) 2015-11-30 2015-11-30 Image processing method based on multi-scale transform and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510864122.XA CN105430266A (en) 2015-11-30 2015-11-30 Image processing method based on multi-scale transform and terminal

Publications (1)

Publication Number Publication Date
CN105430266A true CN105430266A (en) 2016-03-23

Family

ID=55508169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510864122.XA Pending CN105430266A (en) 2015-11-30 2015-11-30 Image processing method based on multi-scale transform and terminal

Country Status (1)

Country Link
CN (1) CN105430266A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611181A (en) * 2016-03-30 2016-05-25 努比亚技术有限公司 Multi-frame photographed image synthesizer and method
CN105894486A (en) * 2016-06-29 2016-08-24 深圳市优象计算技术有限公司 Mobile phone night-shot method based on imu information
CN105979151A (en) * 2016-06-27 2016-09-28 深圳市金立通信设备有限公司 Image processing method and terminal
CN106101538A (en) * 2016-06-27 2016-11-09 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN106355569A (en) * 2016-08-29 2017-01-25 努比亚技术有限公司 Image generating device and method thereof
CN106954020A (en) * 2017-02-28 2017-07-14 努比亚技术有限公司 A kind of image processing method and terminal
CN108171743A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Method, equipment and the computer for shooting image can storage mediums
CN110913131A (en) * 2019-11-21 2020-03-24 维沃移动通信有限公司 Moon shooting method and electronic equipment
CN111932476A (en) * 2020-08-04 2020-11-13 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2021017588A1 (en) * 2019-07-31 2021-02-04 茂莱(南京)仪器有限公司 Fourier spectrum extraction-based image fusion method
WO2021218536A1 (en) * 2020-04-28 2021-11-04 荣耀终端有限公司 High-dynamic range image synthesis method and electronic device
CN115170557A (en) * 2022-08-08 2022-10-11 中山大学中山眼科中心 Image fusion method and device for conjunctival goblet cell imaging
CN116883461A (en) * 2023-05-18 2023-10-13 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141966A1 (en) * 2007-11-30 2009-06-04 Microsoft Corporation Interactive geo-positioning of imagery
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
US20130063485A1 (en) * 2011-09-13 2013-03-14 Casio Computer Co., Ltd. Image processing device that synthesizes image
CN103630072A (en) * 2013-10-25 2014-03-12 大连理工大学 Layout optimization method for camera in binocular vision measuring system
CN103778615A (en) * 2012-10-23 2014-05-07 西安元朔科技有限公司 Multi-focus image fusion method based on region similarity
CN104463817A (en) * 2013-09-12 2015-03-25 华为终端有限公司 Image processing method and device
CN104463822A (en) * 2014-12-11 2015-03-25 西安电子科技大学 Multi-focus image fusing method and device based on multi-scale overall filtering
CN104506771A (en) * 2014-12-18 2015-04-08 北京智谷睿拓技术服务有限公司 Image processing method and device
CN105046676A (en) * 2015-08-27 2015-11-11 上海斐讯数据通信技术有限公司 Image fusion method and equipment based on intelligent terminal

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141966A1 (en) * 2007-11-30 2009-06-04 Microsoft Corporation Interactive geo-positioning of imagery
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
US20130063485A1 (en) * 2011-09-13 2013-03-14 Casio Computer Co., Ltd. Image processing device that synthesizes image
CN103778615A (en) * 2012-10-23 2014-05-07 西安元朔科技有限公司 Multi-focus image fusion method based on region similarity
CN104463817A (en) * 2013-09-12 2015-03-25 华为终端有限公司 Image processing method and device
CN103630072A (en) * 2013-10-25 2014-03-12 大连理工大学 Layout optimization method for camera in binocular vision measuring system
CN104463822A (en) * 2014-12-11 2015-03-25 西安电子科技大学 Multi-focus image fusing method and device based on multi-scale overall filtering
CN104506771A (en) * 2014-12-18 2015-04-08 北京智谷睿拓技术服务有限公司 Image processing method and device
CN105046676A (en) * 2015-08-27 2015-11-11 上海斐讯数据通信技术有限公司 Image fusion method and equipment based on intelligent terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘贵喜, 杨万海 (LIU Guixi, YANG Wanhai): "Image fusion method based on multi-scale contrast pyramid and its performance evaluation", 《光学学报》 (Acta Optica Sinica) *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611181A (en) * 2016-03-30 2016-05-25 努比亚技术有限公司 Multi-frame photographed image synthesizer and method
CN105979151B (en) * 2016-06-27 2019-05-14 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN105979151A (en) * 2016-06-27 2016-09-28 深圳市金立通信设备有限公司 Image processing method and terminal
CN106101538A (en) * 2016-06-27 2016-11-09 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN106101538B (en) * 2016-06-27 2019-05-14 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN105894486A (en) * 2016-06-29 2016-08-24 深圳市优象计算技术有限公司 Mobile phone night-shot method based on imu information
CN105894486B (en) * 2016-06-29 2018-08-03 深圳市优象计算技术有限公司 A kind of mobile phone night shooting method based on imu information
CN106355569A (en) * 2016-08-29 2017-01-25 努比亚技术有限公司 Image generating device and method thereof
CN106954020B (en) * 2017-02-28 2019-10-15 努比亚技术有限公司 A kind of image processing method and terminal
CN106954020A (en) * 2017-02-28 2017-07-14 努比亚技术有限公司 A kind of image processing method and terminal
CN108171743A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Method, equipment and the computer for shooting image can storage mediums
WO2021017588A1 (en) * 2019-07-31 2021-02-04 茂莱(南京)仪器有限公司 Fourier spectrum extraction-based image fusion method
CN110913131A (en) * 2019-11-21 2020-03-24 维沃移动通信有限公司 Moon shooting method and electronic equipment
CN110913131B (en) * 2019-11-21 2021-05-11 维沃移动通信有限公司 Moon shooting method and electronic equipment
US11871123B2 (en) 2020-04-28 2024-01-09 Honor Device Co., Ltd. High dynamic range image synthesis method and electronic device
WO2021218536A1 (en) * 2020-04-28 2021-11-04 荣耀终端有限公司 High-dynamic range image synthesis method and electronic device
CN114827487A (en) * 2020-04-28 2022-07-29 荣耀终端有限公司 High dynamic range image synthesis method and electronic equipment
CN114827487B (en) * 2020-04-28 2024-04-12 荣耀终端有限公司 High dynamic range image synthesis method and electronic equipment
CN111932476A (en) * 2020-08-04 2020-11-13 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115170557A (en) * 2022-08-08 2022-10-11 中山大学中山眼科中心 Image fusion method and device for conjunctival goblet cell imaging
CN116883461A (en) * 2023-05-18 2023-10-13 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof
CN116883461B (en) * 2023-05-18 2024-03-01 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof

Similar Documents

Publication Publication Date Title
CN105430266A (en) Image processing method based on multi-scale transform and terminal
CN106303225A (en) A kind of image processing method and electronic equipment
CN105744159A (en) Image synthesizing method and device
CN105227837A (en) A kind of image combining method and device
CN105898159A (en) Image processing method and terminal
CN106485689A (en) A kind of image processing method and device
CN105141833A (en) Terminal photographing method and device
CN105956999A (en) Thumbnail generating device and method
CN105100775A (en) Image processing method and apparatus, and terminal
CN105120164B (en) The processing means of continuous photo and method
CN105488756B (en) Picture synthetic method and device
CN105472241B (en) Image split-joint method and mobile terminal
CN105338242A (en) Image synthesis method and device
CN105187724A (en) Mobile terminal and method for processing images
CN105227865A (en) A kind of image processing method and terminal
CN104917965A (en) Shooting method and device
CN105100642A (en) Image processing method and apparatus
CN106851113A (en) A kind of photographic method and mobile terminal based on dual camera
CN104951549A (en) Mobile terminal and photo/video sort management method thereof
CN105578269A (en) Mobile terminal and video processing method thereof
CN106373110A (en) Method and device for image fusion
CN106372607A (en) Method for reading pictures from videos and mobile terminal
CN106657782A (en) Picture processing method and terminal
CN105306787A (en) Image processing method and device
CN105205361A (en) Image screening method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160323