WO2018019128A1 - Night scene image processing method and mobile terminal (一种夜景图像的处理方法和移动终端) - Google Patents

Info

Publication number
WO2018019128A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, night scene, frame, processing, scene image
Application number
PCT/CN2017/092664
Other languages
English (en)
French (fr)
Inventor
戴向东
Original Assignee
努比亚技术有限公司
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2018019128A1

Classifications

    • G06T5/70 Image enhancement or restoration: Denoising; Smoothing
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H04N23/81 Camera processing pipelines: suppressing or minimising disturbance in the image signal generation
    • H04N23/951 Computational photography systems: using two or more images to influence resolution, frame rate or aspect ratio
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • This application relates to, but is not limited to, the field of image processing.
  • In related art, night scene enhancement photographing generally applies an image brightness algorithm or a contrast enhancement algorithm to the captured night scene image after shooting, thereby improving the brightness and contrast of the dark portions of the image.
  • However, since the night scene image contains noise and the noise may be enhanced together with the signal, the enhancement effect on the night scene image as a whole is poor.
  • A single-frame image denoising method can be used to denoise the night scene image, but the edge details of a night scene image processed by single-frame denoising become blurred, so the denoising effect is poor and the user experience suffers.
  • In view of this, embodiments of the present invention provide a method for processing a night scene image and a mobile terminal, so as to improve the definition and contrast of the night scene image and enhance the user experience.
  • A method for processing a night scene image includes:
  • the mobile terminal acquiring a multi-frame night scene image of the same preview picture;
  • performing image registration processing on the acquired multi-frame night scene image;
  • performing fusion denoising processing on the multi-frame night scene image after the image registration processing; and
  • performing night scene enhancement processing on the image after the fusion denoising processing.
  • the performing image registration processing on the acquired multi-frame night scene image includes:
  • selecting one frame of the acquired multi-frame night scene image as a reference image, and aligning each of the images other than the reference image with the reference image respectively.
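The claims do not fix a particular registration algorithm. As an illustration only, below is a minimal sketch of aligning one frame to the reference image using phase correlation; it assumes pure translation between frames, whereas a real implementation would typically also handle rotation and sub-pixel motion:

```python
import numpy as np

def phase_correlation_shift(reference, image):
    """Estimate the integer (dy, dx) shift that maps `image` onto
    `reference`, via phase correlation."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(image))
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint encode negative shifts (FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, correlation.shape))

def align_to_reference(reference, image):
    """Circularly shift `image` so it lines up with `reference`."""
    dy, dx = phase_correlation_shift(reference, image)
    return np.roll(image, (dy, dx), axis=(0, 1))
```

Phase correlation is a reasonable fit for burst capture on a tripod-free handheld shot, where frame-to-frame motion is dominated by small translations.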
  • the performing fusion denoising processing on the multi-frame night scene image after the image registration processing includes: performing the fusion denoising processing according to the formula D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y), where:
  • D(x, y) represents the pixel value, at the point (x, y), of the image after the fusion denoising processing;
  • I_i(x, y) represents the pixel value, at the point (x, y), of the i-th frame night scene image among the multi-frame night scene images after the image registration processing;
  • n represents the number of night scene images after the image registration processing.
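The per-pixel averaging fusion described by these definitions can be sketched directly (a minimal illustration, assuming the frames are already registered and of equal size):

```python
import numpy as np

def fuse_denoise(frames):
    """Fuse n registered night-scene frames by per-pixel averaging:
    D(x, y) = (1/n) * sum over i of I_i(x, y). Random sensor noise is
    largely uncorrelated across frames, so its standard deviation drops
    by roughly sqrt(n) while the scene content is preserved."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```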
  • the performing night scene enhancement processing on the image after the fusion denoising processing includes:
  • calculating a gray-level average value and determining an enhancement coefficient, and performing the night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average value and the determined enhancement coefficient;
  • the night scene enhancement processing is performed on the image after the fusion denoising processing according to the formula f(x, y) = average_H(x, y) + G(x, y) · [H(x, y) − average_H(x, y)], where:
  • f(x, y) represents the pixel value, at the point (x, y), of the image after the night scene enhancement processing;
  • average_H(x, y) represents the gray-level average value at the point (x, y);
  • G(x, y) represents the enhancement coefficient at the point (x, y);
  • H(x, y) represents the gray value at the point (x, y).
  • the gray-level average value is calculated according to the formula average_H(x, y) = (1 / ((2m+1)(2k+1))) · Σ_{l=x−m}^{x+m} Σ_{j=y−k}^{y+k} H(l, j), where:
  • 2m+1 represents the length of the preset area;
  • 2k+1 represents the width of the preset area;
  • H(l, j) represents the gray value at the point (l, j);
  • the point (x, y) represents the center point of the preset area;
  • m and k are both positive integers.
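A minimal sketch of this windowed gray-level average. Border handling is not specified by the formula, so reflective padding is an assumption here:

```python
import numpy as np

def local_mean(H, m, k):
    """Mean gray value over a (2m+1) x (2k+1) window centred on each
    pixel, i.e. average_H(x, y); edges use reflective padding (an
    implementation choice the formula itself leaves open)."""
    H = np.asarray(H, dtype=np.float64)
    padded = np.pad(H, ((m, m), (k, k)), mode="reflect")
    out = np.zeros_like(H)
    for dl in range(2 * m + 1):          # rows of the window
        for dj in range(2 * k + 1):      # columns of the window
            out += padded[dl:dl + H.shape[0], dj:dj + H.shape[1]]
    return out / ((2 * m + 1) * (2 * k + 1))
```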
  • the local standard deviation is calculated according to the formula σ_H(x, y) = sqrt( (1 / ((2m+1)(2k+1))) · Σ_{l=x−m}^{x+m} Σ_{j=y−k}^{y+k} [H(l, j) − average_H(x, y)]² ), where:
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the enhancement coefficient is determined according to the formula G(x, y) = [ (1 / (M·N)) · Σ_{l=1}^{M} Σ_{j=1}^{N} H(l, j) ] / σ_H(x, y), i.e., the ratio of the global gray-level mean of the image to the local standard deviation, where:
  • M represents the length of the image after the fusion denoising processing;
  • N represents the width of the image after the fusion denoising processing.
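Putting the pieces together, here is a sketch of the whole enhancement step. The coefficient form G = α · (global mean) / σ_H and the gain parameter α are assumptions chosen to match the symbol definitions above (the classic local-statistics enhancement), not a reproduction of the patent's exact coefficient formula:

```python
import numpy as np

def enhance_night_scene(H, m=3, k=3, alpha=0.5):
    """Adaptive enhancement sketch:
        f(x, y) = average_H(x, y) + G(x, y) * (H(x, y) - average_H(x, y))
    with G(x, y) = alpha * (global mean of H) / sigma_H(x, y); flat
    regions (small sigma) get a large gain, busy regions a small one."""
    H = np.asarray(H, dtype=np.float64)

    def window_mean(a):
        padded = np.pad(a, ((m, m), (k, k)), mode="reflect")
        out = np.zeros_like(a)
        for dl in range(2 * m + 1):
            for dj in range(2 * k + 1):
                out += padded[dl:dl + a.shape[0], dj:dj + a.shape[1]]
        return out / ((2 * m + 1) * (2 * k + 1))

    avg = window_mean(H)
    sigma = np.sqrt(np.maximum(window_mean(H ** 2) - avg ** 2, 0.0))
    global_mean = H.mean()                    # sum over the M x N image / (M*N)
    G = alpha * global_mean / (sigma + 1e-6)  # avoid division by zero
    return avg + G * (H - avg)
```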
  • a mobile terminal includes: an acquisition module, a registration module, a denoising module, and an enhancement module;
  • the acquiring module is configured to: acquire a multi-frame night scene image of the same preview image;
  • the registration module is configured to: perform image registration processing on the multi-frame night scene image acquired by the acquiring module;
  • the denoising module is configured to: perform fusion denoising processing on the multi-frame night scene image after the image registration processing by the registration module;
  • the enhancement module is configured to: perform night scene enhancement processing on the image after the fusion denoising processing by the denoising module.
  • the registration module performing image registration processing on the multi-frame night scene image acquired by the acquiring module includes:
  • aligning each of the images other than the reference image in the multi-frame night scene image acquired by the acquiring module with the reference image respectively.
  • the denoising module performing the fusion denoising processing on the multi-frame night scene image after the image registration processing by the registration module includes: performing the fusion denoising processing according to the formula D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y), where:
  • D(x, y) represents the pixel value, at the point (x, y), of the image after the fusion denoising processing;
  • I_i(x, y) represents the pixel value, at the point (x, y), of the i-th frame night scene image among the multi-frame night scene images after the image registration processing;
  • n represents the number of night scene images after the image registration processing.
  • the enhancement module performing the night scene enhancement processing on the image after the fusion denoising processing by the denoising module includes:
  • performing the night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average value and the determined enhancement coefficient.
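The four modules described above can be sketched as injected callables, so that each stage (acquisition output, registration, fusion denoising, enhancement) can be swapped independently; the function and parameter names here are illustrative, not from the patent:

```python
import numpy as np

def night_scene_pipeline(frames, align, fuse, enhance):
    """Compose the module structure: `frames` is the acquiring module's
    output, `align(reference, frame)` is the registration module (the
    first frame is taken as reference here), `fuse(frames)` is the
    denoising module, and `enhance(image)` is the enhancement module."""
    reference, *others = frames
    registered = [reference] + [align(reference, f) for f in others]
    fused = fuse(registered)
    return enhance(fused)
```

Injecting the stages as callables mirrors the module decomposition in the claim: each module only needs to agree on the image array format.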
  • a mobile terminal includes: a processor, a memory, and a communication bus;
  • the communication bus is configured to: implement connection communication between the processor and the memory;
  • the processor is configured to: execute a processing program of a night scene image stored in the memory, to implement the following steps:
  • acquiring a multi-frame night scene image of the same preview picture; performing image registration processing on the acquired multi-frame night scene image; performing fusion denoising processing on the registered multi-frame night scene image; and performing night scene enhancement processing on the image after the fusion denoising processing.
  • the processor is further configured to: execute the processing program of the night scene image to implement the following step:
  • aligning each of the images other than the reference image in the acquired multi-frame night scene image with the reference image respectively.
  • the processor is further configured to: execute the processing program of the night scene image to implement the following step:
  • performing the fusion denoising processing according to the formula D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y), where D(x, y) represents the pixel value, at the point (x, y), of the image after the fusion denoising processing; I_i(x, y) represents the pixel value, at the point (x, y), of the i-th frame night scene image after the image registration processing; and n represents the number of night scene images after the image registration processing.
  • in the step of performing night scene enhancement processing on the image after the fusion denoising processing, the processor is further configured to: execute the processing program of the night scene image to implement the following step:
  • performing the night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average value and the determined enhancement coefficient.
  • A computer readable storage medium stores one or more programs, which are executable by one or more processors to implement the following steps:
  • acquiring a multi-frame night scene image of the same preview picture; performing image registration processing on the acquired multi-frame night scene image; performing fusion denoising processing on the registered multi-frame night scene image; and performing night scene enhancement processing on the image after the fusion denoising processing.
  • the one or more programs are further executable by the one or more processors to implement the following step:
  • aligning each of the images other than the reference image in the acquired multi-frame night scene image with the reference image respectively.
  • the one or more programs are further executable by the one or more processors to implement the following step:
  • performing the fusion denoising processing according to the formula D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y), where D(x, y) represents the pixel value, at the point (x, y), of the image after the fusion denoising processing; I_i(x, y) represents the pixel value, at the point (x, y), of the i-th frame night scene image after the image registration processing; and n represents the number of night scene images after the image registration processing.
  • the one or more programs are further executable by the one or more processors to implement the following step:
  • performing the night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average value and the determined enhancement coefficient.
  • In the method for processing a night scene image and the mobile terminal provided by the embodiments, the mobile terminal acquires a multi-frame night scene image of the same preview picture; performs image registration processing on the acquired multi-frame night scene image; performs fusion denoising processing on the registered multi-frame night scene image; and performs night scene enhancement processing on the image after the fusion denoising processing.
  • The embodiments of the invention thereby improve the definition and contrast of the night scene image and enhance the user experience.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a communication system supporting communication between mobile terminals provided by an embodiment of the present invention
  • FIG. 3 is a flowchart of a method for processing a night scene image according to an embodiment of the present invention
  • FIG. 4 is a flowchart of another method for processing a night scene image according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of still another method for processing a night scene image according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of still another mobile terminal according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), a navigation device, and the like, and fixed terminals such as a digital TV, a desktop computer, and the like.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that, except for components used specifically for mobile purposes, configurations according to embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal according to an embodiment of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may also include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 may receive digital broadcasting by using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed as various broadcast systems suitable for providing broadcast signals as well as the above-described digital broadcast system.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Examples of the short-range communication technology include Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. Further, the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display module 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 and output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a jog wheel, a jog switch, and the like.
  • In particular, when the touch pad is superposed on the display module 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • the identification module may store various information for verifying a user's use of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power from the cradle is supplied to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display module 151, an audio output module 152, an alarm module 153, and the like.
  • the display module 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display module 151 can display a user interface (UI) or graphical user interface (GUI) associated with a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display module 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display module 151 can function as an input device and an output device.
  • the display module 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm module 153 can provide an output to notify the user of the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to an audio or video output, the alarm module 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm module 153 can provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, it can provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm module 153 can also provide an output notifying of the occurrence of an event via the display module 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a programmable read only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • Hereinafter, a slide-type mobile terminal among various types of mobile terminals such as folding-type, bar-type, swing-type, and slide-type mobile terminals will be described as an example. However, the present application can be applied to any type of mobile terminal, and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • FIG. 2 is a schematic diagram of a communication system in which a mobile terminal according to an embodiment of the present invention is operable.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station" can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station".
  • alternatively, the respective partitions of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • several Global Positioning System (GPS) satellites 300 are also shown in FIG. 2.
  • the satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information.
  • in addition to or instead of GPS tracking technology, other techniques that can track the location of the mobile terminal can be used.
  • at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSC 275, and the BSC 275 accordingly controls the BS 270 to transmit forward link signals to the mobile terminal 100.
  • FIG. 3 is a flowchart of a method for processing a night scene image according to an embodiment of the present invention.
  • the method for processing a night scene image according to an embodiment of the present invention may include the following steps, that is, step 310 to step 340:
  • Step 310 The mobile terminal acquires a multi-frame night scene image of the same preview picture.
  • step 310 may include the following steps:
  • shooting parameters are set, and the multi-frame night scene image of the same preview picture is acquired according to the set shooting parameters.
  • the time interval of any two adjacent night scene images in the multi-frame night scene image acquired in step 310 may be a preset duration.
  • the acquired shooting parameters of the multi-frame night scene image in the same preview picture are the same.
  • the shooting parameters include, but are not limited to, one or more of sensitivity (ISO), exposure, focus parameters, and the like.
  • the preset duration may be a default value set by the system of the mobile terminal, or the mobile terminal may provide a human-computer interaction interface through which the user sets the duration according to his own needs, such as 50 milliseconds, 35 milliseconds, or 100 milliseconds.
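  • a minimal sketch of this capture step, assuming a hypothetical `capture_frame` callable that stands in for the platform camera API (the real interface is not specified in the text), might look like:

```python
import time

def capture_burst(capture_frame, n_frames=5, interval_ms=50):
    """Grab n_frames of the same preview picture, waiting a preset
    duration between adjacent frames; capture_frame is assumed to
    shoot with locked ISO, exposure, and focus parameters."""
    frames = []
    for i in range(n_frames):
        frames.append(capture_frame())
        if i < n_frames - 1:
            time.sleep(interval_ms / 1000.0)  # preset duration, e.g. 50, 35, or 100 ms
    return frames
```

  • the frame count and interval are left as parameters so the default value or a user-chosen duration can be passed in.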
  • the method for processing a night scene image provided by the embodiment of the present invention may further include the following step before step 310:
  • when a closing operation for the configuration item for turning on or off the function of processing the night scene image is detected, the function of processing the night scene image is turned off.
  • the method for processing a night scene image provided by the embodiment of the present invention may further include the following step before step 310:
  • when the function of processing the night scene image is detected to be turned on and an instruction from the user to confirm the photographing is detected, the process proceeds to step 310.
  • Step 320 Perform image registration processing on the acquired multi-frame night scene image.
  • Step 330 Perform fusion and denoising processing on the multi-frame night scene image after image registration processing.
  • Step 340 Perform night scene enhancement processing on the image after the fused denoising process.
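  • the four steps above can be wired together as a small pipeline sketch; the three stage functions are passed in as parameters (hypothetical names, not the patent's interfaces), so any concrete registration, fusion, or enhancement implementation can be plugged in:

```python
import numpy as np

def night_pipeline(frames, align, fuse, enhance):
    """Steps 310-340 in order: pick the first frame as reference,
    align every other frame to it, fuse the registered stack,
    then apply night scene enhancement to the fused image."""
    ref = frames[0]                                    # step 320: reference frame
    registered = [ref] + [align(ref, f) for f in frames[1:]]
    fused = fuse(registered)                           # step 330: fusion denoising
    return enhance(fused)                              # step 340: enhancement
```

  • for example, an identity alignment, a per-pixel mean fusion, and a simple gain can be passed in to exercise the flow end to end.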
  • the method for processing a night scene image provided by the embodiment of the present invention performs image registration processing and fusion denoising processing on a multi-frame night scene image of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising process, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
  • FIG. 4 is a flowchart of another method for processing a night scene image according to an embodiment of the present invention.
  • step 320 in the embodiment of the present invention may include the following steps, that is, steps 321 to 322:
  • Step 321 Select one frame image as the reference image in the acquired multi-frame night scene image
  • Step 322 Align the images other than the reference image in the acquired multi-frame night scene image with the reference image, respectively.
  • the first frame image may be selected as the reference image, or the second frame image may be selected as the reference image, or the last frame image may be selected as the reference image.
  • Lucas-Kanade optical flow method can be used to perform alignment between different frame images.
  • the registration (or alignment) method is not limited in the embodiment of the present invention.
  • the multi-frame night scene image after the image registration processing in the above step 320 may include the plurality of aligned images and the reference image.
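  • the text names the Lucas-Kanade optical flow method for alignment; as a simpler, self-contained stand-in (an assumption, not the patent's method), registration against the reference frame can be sketched as an exhaustive search over small integer translations:

```python
import numpy as np

def align_to_reference(ref, img, max_shift=4):
    """Register img to ref by brute-force search over small integer
    translations, keeping the shift with minimum mean squared error.
    (A simplified stand-in for Lucas-Kanade optical-flow alignment.)"""
    ref_f = ref.astype(np.float64)
    best_err, best = np.inf, img
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.mean((cand.astype(np.float64) - ref_f) ** 2)
            if err < best_err:
                best_err, best = err, cand
    return best
```

  • a production implementation would instead use sub-pixel optical flow (e.g. OpenCV's pyramidal Lucas-Kanade) on feature points; the brute-force search is only meant to make the registration step concrete.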
  • step 330 may include the following steps, that is, steps 331 to 332:
  • Step 331 respectively acquire pixel values of each pixel of each night scene image in the multi-frame night scene image after image registration processing
  • Step 332 Calculate, according to formula (1), the pixel value of each pixel of the image after the fusion denoising process from the acquired pixel value of each pixel of each frame of the night scene image: D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y)    (1)
  • D(x, y) represents the pixel value at the point (x, y) of the image after the fused denoising process
  • I_i(x, y) represents the pixel value of the i-th frame night scene image in the multi-frame night scene image after the image registration processing at the point (x, y)
  • n represents the number of frames of the night scene image after the image registration processing.
  • the derivation process of formula (1) is as follows: it is assumed that there are n frames of images after the registration processing, denoted in turn as [I_1, I_2, ..., I_n], and that the pixel value of the image after the fusion denoising process at the point (x, y) is D(x, y); assuming that I_i(x, y) is the true pixel value of the night scene image of the i-th frame at the point (x, y), and N_i(x, y) is the pixel value of the night scene image of the i-th frame at the point (x, y) after being disturbed by noise, formula (1.0) holds
  • the pixel value after the denoising processing at the point (x, y) can be calculated by the formula (1).
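  • under the stated definitions (D the fused image, I_i the i-th registered frame, n the frame count), the fusion rule of formula (1) is a per-pixel average, which can be sketched as:

```python
import numpy as np

def fuse_denoise(frames):
    """Fusion denoising by per-pixel averaging of the n registered
    frames: D(x, y) = (1/n) * sum_i I_i(x, y). Zero-mean sensor noise
    tends to cancel in the average while scene content is preserved."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)
```

  • averaging in float avoids the clipping and rounding that would occur when summing 8-bit frames directly.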
  • step 340 may include the following steps, that is, steps 341 to 344:
  • Step 341 Obtain a gray value of each point in a preset area of the image after the denoising processing
  • Step 342 Calculate a gray average value of the preset region of the image after the denoising process and a local standard deviation according to the acquired gray value of each point;
  • Step 343 determining an enhancement coefficient according to the calculated local standard deviation
  • Step 344 Perform night scene enhancement processing on the image after the fused denoising process according to the calculated gradation average value and the determined enhancement coefficient.
  • the night scene enhancement processing may be performed on the image after the fusion denoising process according to formula (2): f(x, y) = average_H(x, y) + α(x, y) · (H(x, y) - average_H(x, y))
  • f(x, y) represents the pixel value of the image after the night scene enhancement processing at the point (x, y)
  • average_H(x, y) represents the gray average value at the point (x, y)
  • α(x, y) represents the enhancement coefficient at the point (x, y)
  • H(x, y) represents the gray value at the point (x, y).
  • the gray average value may be calculated according to formula (3):
  • the local standard deviation may be calculated according to formula (4):
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the enhancement coefficient may be determined according to formula (5):
  • M represents the length of the image after the fused denoising process
  • N represents the width of the image after the fused denoising process
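  • steps 341-344 can be sketched as follows, assuming formulas (3) and (4) are the local mean and local standard deviation over the (2m+1)×(2k+1) window defined above, and formula (2) is f = average_H + α·(H − average_H); the exact enhancement coefficient of formula (5) is not reproduced in this text, so α is taken here as global-mean over local-std with a cap — an assumed stand-in, not the patent's exact coefficient:

```python
import numpy as np

def local_stats(H, m=1, k=1):
    """Local mean and standard deviation over a (2m+1)x(2k+1) window
    centered at each pixel, matching the window definitions given
    for formulas (3) and (4)."""
    Hf = H.astype(np.float64)
    pad = np.pad(Hf, ((m, m), (k, k)), mode="edge")
    win = (2 * m + 1) * (2 * k + 1)
    rows, cols = H.shape
    mean = np.zeros_like(Hf)
    sq = np.zeros_like(Hf)
    for dl in range(-m, m + 1):
        for dj in range(-k, k + 1):
            patch = pad[m + dl : m + dl + rows, k + dj : k + dj + cols]
            mean += patch
            sq += patch ** 2
    mean /= win
    std = np.sqrt(np.maximum(sq / win - mean ** 2, 0.0))
    return mean, std

def enhance(H, m=1, k=1, eps=1e-6, gain_cap=3.0):
    """Local-statistics enhancement per formula (2):
    f = mean + alpha * (H - mean). alpha is an assumed coefficient
    (global mean / local std, capped) standing in for formula (5)."""
    mean, std = local_stats(H, m, k)
    alpha = np.minimum(H.astype(np.float64).mean() / (std + eps), gain_cap)
    return mean + alpha * (H.astype(np.float64) - mean)
```

  • capping α keeps the gain bounded in flat, low-variance regions where the local standard deviation is near zero, which is exactly where night scene noise would otherwise be amplified.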
  • the method for processing a night scene image provided by the embodiment of the present invention performs image registration processing and fusion denoising processing on a multi-frame night scene image of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising process, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
  • FIG. 5 is a flowchart of still another method for processing a night scene image according to an embodiment of the present invention.
  • the method for processing a night scene image according to an embodiment of the present invention may include the following steps, that is, steps 401 to 410:
  • Step 401 Add a configuration item for turning on or off the function of processing the night scene image on the shooting page of the mobile terminal.
  • Step 402 The mobile terminal detects whether there is an opening operation for the configuration item for turning on or off the function of processing the night scene image.
  • when the opening operation is detected, step 403 is performed; when the opening operation is not detected, the flow is ended.
  • Step 403 The mobile terminal starts the function of processing the night scene image.
  • Step 404 The mobile terminal detects whether there is an instruction from the user to confirm the shooting.
  • when the instruction to confirm the shooting is detected, step 405 is performed; when it is not detected, step 404 is continued.
  • Step 405 The mobile terminal acquires a multi-frame night scene image of the same preview picture.
  • step 405 may include the following steps: setting shooting parameters, and acquiring the multi-frame night scene image of the same preview picture according to the set shooting parameters.
  • the time interval between any two adjacent night scene images in the multi-frame night scene image acquired in step 405 may be a preset duration.
  • the acquired shooting parameters of the multi-frame night scene image in the same preview picture are the same.
  • the shooting parameters include, but are not limited to, one or more of sensitivity (ISO), exposure, focus parameters, and the like.
  • the preset duration may be a default value set by the system of the mobile terminal, or the mobile terminal may provide a human-computer interaction interface through which the user sets the duration according to his own needs, such as 50 milliseconds, 35 milliseconds, or 100 milliseconds.
  • Step 406 The mobile terminal performs image registration processing on the obtained multi-frame night scene image.
  • step 406 may include the following steps: selecting one frame image as the reference image in the acquired multi-frame night scene image;
  • the images other than the reference image in the acquired multi-frame night scene image are respectively aligned with the reference image.
  • the first frame image may be selected as the reference image, or the second frame image may be selected as the reference image, or the last frame image may be selected as the reference image.
  • Lucas-Kanade optical flow method can be used to perform alignment between different frame images.
  • the registration (or alignment) method is not limited in the example.
  • Step 407 The mobile terminal performs fusion denoising processing on the multi-frame night scene image after the image registration processing.
  • the multi-frame night scene image after the image registration processing in the above step 406 may include the plurality of aligned images and the reference image.
  • step 407 may include the following steps: respectively acquiring pixel values of each pixel of each frame of night scene image in the multi-frame night scene image after the image registration processing, and calculating, according to formula (1), the pixel value of each pixel of the image after the fusion denoising process;
  • D(x, y) represents the pixel value of the image after the fusion denoising process at the point (x, y)
  • I_i(x, y) represents the pixel value of the i-th frame night scene image in the multi-frame night scene image after the image registration processing at the point (x, y)
  • n represents the number of frames of the night scene image after the image registration processing.
  • the derivation process of formula (1) is as follows: it is assumed that there are n frames of images after the registration processing, denoted in turn as [I_1, I_2, ..., I_n], and that the pixel value of the image after the fusion denoising process at the point (x, y) is D(x, y); assuming that I_i(x, y) is the true pixel value of the night scene image of the i-th frame at the point (x, y), and N_i(x, y) is the pixel value of the night scene image of the i-th frame at the point (x, y) after being disturbed by noise, formula (1.0) holds
  • the pixel value after the denoising processing at the point (x, y) can be calculated by the formula (1).
  • Step 408 The mobile terminal performs night scene enhancement processing on the image after the denoising processing.
  • step 408 may include the following steps: obtaining a gray value of each point in a preset area of the image after the fusion denoising processing, and calculating a gray average value and a local standard deviation of the preset area according to the acquired gray value of each point;
  • the enhancement coefficient is determined according to the calculated local standard deviation;
  • the night scene enhancement processing is performed on the image after the fusion denoising process according to the calculated gray average value and the determined enhancement coefficient.
  • the night scene enhancement processing may be performed on the image after the fusion denoising process according to formula (2):
  • f(x, y) represents the pixel value of the image after the night scene enhancement processing at the point (x, y)
  • average_H(x, y) represents the gray average value at the point (x, y)
  • α(x, y) represents the enhancement coefficient at the point (x, y)
  • H(x, y) represents the gray value at the point (x, y).
  • the gray average value may be calculated according to formula (3):
  • the local standard deviation may be calculated according to formula (4):
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the enhancement coefficient may be determined according to formula (5):
  • M represents the length of the image after the fused denoising process
  • N represents the width of the image after the fused denoising process
  • Step 409 The mobile terminal detects whether there is a closing operation for the configuration item for turning on or off the function of processing the night scene image.
  • when the closing operation is detected, step 410 is performed; when the closing operation is not detected, the flow is ended.
  • Step 410 The mobile terminal turns off the function of processing the night scene image.
  • the embodiment of the present invention provides a mobile terminal for performing the processing method of the night scene image described above.
  • FIG. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • the mobile terminal provided by the embodiment of the present invention may include: an obtaining module 50, a registration module 51, a denoising module 52, and an enhancement module 53.
  • the obtaining module 50 is configured to: acquire a multi-frame night scene image of the same preview screen.
  • the obtaining module 50 obtains a multi-frame night scene image of the same preview image, which may include:
  • shooting parameters are set, and the multi-frame night scene image of the same preview picture is acquired according to the set shooting parameters.
  • the time interval of any two adjacent night scene images in the multi-frame night scene image acquired by the acquiring module 50 may be a preset duration.
  • the acquisition module 50 can be, for example, a camera.
  • the preset duration may be a default value set by the system of the mobile terminal, or the mobile terminal may provide a human-computer interaction interface through which the user sets the duration according to his own needs, such as 50 milliseconds, 35 milliseconds, or 100 milliseconds.
  • the registration module 51 is configured to perform image registration processing on the multi-frame night scene image acquired by the acquisition module 50.
  • the denoising module 52 is configured to perform fusion denoising processing on the multi-frame night scene image after the image registration processing by the registration module 51.
  • the enhancement module 53 is configured to perform night scene enhancement processing on the image after the fusion denoising processing by the denoising module 52.
  • the mobile terminal provided by the embodiment of the present invention performs image registration processing and fusion denoising processing on the multi-frame night scene image of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising process, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
  • the implementation manner of the image registration processing performed by the registration module 51 on the multi-frame night scene image acquired by the obtaining module 50 may include: selecting one frame image as the reference image in the multi-frame night scene image acquired by the acquisition module 50;
  • the images other than the reference image in the multi-frame night scene image acquired by the acquisition module 50 are respectively aligned with the reference image.
  • the registration module 51 can use, for example, the Lucas-Kanade optical flow method to perform alignment between different frame images.
  • the implementation manner of the fusion denoising processing performed by the denoising module 52 on the multi-frame night scene image after the image registration processing may include: respectively acquiring pixel values of each pixel of each frame of night scene image, and calculating, according to formula (1), the pixel value of each pixel of the image after the fusion denoising process;
  • D(x, y) represents the pixel value of the image after the fusion denoising process at the point (x, y)
  • I_i(x, y) represents the pixel value of the i-th frame night scene image in the multi-frame night scene image after the image registration processing at the point (x, y)
  • n represents the number of frames of the night scene image after the image registration processing.
  • the implementation manner of the night scene enhancement processing performed by the enhancement module 53 on the image after the fusion denoising processing by the denoising module 52 may include: obtaining a gray value of each point in a preset area of the image after the fusion denoising processing, and calculating a gray average value and a local standard deviation of the preset area according to the acquired gray value of each point;
  • the enhancement coefficient is determined according to the calculated local standard deviation;
  • the night scene enhancement processing is performed on the image after the fusion denoising process according to the calculated gray average value and the determined enhancement coefficient.
  • the enhancement module 53 may perform night scene enhancement processing on the image after the fusion denoising process according to formula (2):
  • f(x, y) represents the pixel value of the image after the night scene enhancement processing at the point (x, y)
  • average_H(x, y) represents the gray average value at the point (x, y)
  • α(x, y) represents the enhancement coefficient at the point (x, y)
  • H(x, y) represents the gray value at the point (x, y).
  • the enhancement module 53 may calculate the gray average value according to formula (3):
  • the enhancement module 53 may calculate the local standard deviation according to formula (4):
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the enhancement module 53 may determine the enhancement coefficient according to formula (5):
  • M represents the length of the image after the fusion denoising process
  • N represents the width of the image after the fusion denoising process.
  • FIG. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention.
  • the mobile terminal provided by the embodiment of the present invention may further include:
  • the setting module 54 is configured to: add a configuration item for turning on or off the function of processing the night scene image on the shooting page of the mobile terminal.
  • the mobile terminal in the embodiment of the present invention may further include:
  • the detecting module 55 is configured to: when an opening operation for the configuration item for turning on or off the function of processing the night scene image is detected, turn on the function of processing the night scene image.
  • the detecting module 55 is further configured to: when a closing operation for the configuration item for turning on or off the function of processing the night scene image is detected, turn off the function of processing the night scene image.
  • the detecting module 55 is further configured to: notify the acquiring module 50 when the function of processing the night scene image is turned on and an instruction from the user to confirm the shooting is detected.
  • the mobile terminal in the embodiment shown in FIG. 7 of the present invention is used to execute the processing method of the night scene image described above.
  • the mobile terminal includes: a setting module 54, a detecting module 55, an obtaining module 50, a registration module 51, a denoising module 52, and an enhancement module 53.
  • the setting module 54 is configured to: add a configuration item for turning on or off the function of processing the night scene image on the shooting page of the mobile terminal.
  • the detecting module 55 is configured to: when an opening operation for the configuration item for turning on or off the function of processing the night scene image is detected, turn on the function of processing the night scene image; and when the function of processing the night scene image is turned on and an instruction from the user to confirm the shooting is detected, notify the acquiring module 50.
  • the detecting module 55 is further configured to: when a closing operation for the configuration item for turning on or off the function of processing the night scene image is detected, turn off the function of processing the night scene image.
  • the obtaining module 50 is configured to: acquire a multi-frame night scene image of the same preview picture.
  • the obtaining module 50 obtains a multi-frame night scene image of the same preview image, which may include:
  • shooting parameters are set, and the multi-frame night scene image of the same preview picture is acquired according to the set shooting parameters.
  • the time interval of any two adjacent night scene images in the multi-frame night scene image acquired by the acquiring module 50 may be a preset duration.
  • the acquisition module 50 can be, for example, a camera.
  • the preset duration may be a default value set by the system of the mobile terminal, or the mobile terminal may provide a human-computer interaction interface through which the user sets the duration according to his own needs, such as 50 milliseconds, 35 milliseconds, or 100 milliseconds.
  • the registration module 51 is configured to perform image registration processing on the multi-frame night scene image acquired by the acquisition module 50.
  • the implementation manner of the image registration processing performed by the registration module 51 on the multi-frame night scene image acquired by the obtaining module 50 may include: selecting one frame image as the reference image in the multi-frame night scene image acquired by the acquisition module 50;
  • the images other than the reference image in the multi-frame night scene image acquired by the acquisition module 50 are respectively aligned with the reference image.
  • the registration module 51 can use, for example, the Lucas-Kanade optical flow method to perform alignment between different frame images.
  • the denoising module 52 is configured to perform fusion denoising processing on the multi-frame night scene image after the image registration processing by the registration module 51.
  • the implementation manner of the fusion denoising processing performed by the denoising module 52 on the multi-frame night scene image after the image registration processing may include: respectively acquiring pixel values of each pixel of each frame of night scene image, and calculating, according to formula (1), the pixel value of each pixel of the image after the fusion denoising process;
  • D(x, y) represents the pixel value of the image after the fusion denoising process at the point (x, y);
  • I_i(x, y) represents the pixel value of the i-th frame night scene image in the multi-frame night scene image after the image registration processing at the point (x, y);
  • n represents the number of frames of the night scene image after the image registration processing.
  • the enhancement module 53 is configured to perform night scene enhancement processing on the image after the fusion denoising processing by the denoising module 52.
  • the implementation manner of the night scene enhancement processing performed by the enhancement module 53 on the image after the fusion denoising processing by the denoising module 52 may include: obtaining a gray value of each point in a preset area of the image after the fusion denoising processing, and calculating a gray average value and a local standard deviation of the preset area according to the acquired gray value of each point;
  • the enhancement coefficient is determined according to the calculated local standard deviation;
  • the night scene enhancement processing is performed on the image after the fusion denoising process according to the calculated gray average value and the determined enhancement coefficient.
  • the enhancement module 53 may perform night scene enhancement processing on the image after the fusion denoising process according to formula (2):
  • f(x, y) represents the pixel value of the image after the night scene enhancement processing at the point (x, y)
  • average_H(x, y) represents the gray average value at the point (x, y)
  • α(x, y) represents the enhancement coefficient at the point (x, y)
  • H(x, y) represents the gray value at the point (x, y).
  • the enhancement module 53 may calculate the gray average value according to formula (3):
  • the enhancement module 53 may calculate the local standard deviation according to formula (4):
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the enhancement module 53 may determine the enhancement coefficient according to formula (5):
  • M represents the length of the image after the fused denoising process
  • N represents the width of the image after the fused denoising process
  • the embodiment of the present invention further provides a mobile terminal for performing the processing method of the night scene image described above.
  • FIG. 8 is a schematic structural diagram of still another mobile terminal according to an embodiment of the present invention.
  • the mobile terminal 60 provided by the embodiment of the present invention may include a processor 61, a memory 62, and a communication bus 63.
  • the communication bus 63 is configured to: implement connection communication between the processor 61 and the memory 62;
  • the processor 61 is configured to execute a processing program of the night scene image stored in the memory 62 to implement the following steps, that is, steps 710 to 740:
  • Step 710 Acquire a multi-frame night scene image of the same preview picture
  • Step 720 Perform image registration processing on the acquired multi-frame night scene image.
  • Step 730 performing merging and denoising processing on the multi-frame night scene image after the image registration processing
  • Step 740 Perform night scene enhancement processing on the image after the fused denoising process.
  • the mobile terminal provided by the embodiment of the present invention performs image registration processing and fusion denoising processing on the multi-frame night scene image of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising process, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
  • the processor 61 is further configured to: execute a processing program of the night scene image to implement the following steps, that is, steps 711 to 712:
  • Step 711 setting shooting parameters
  • Step 712 Acquire a multi-frame night scene image of the same preview picture according to the set shooting parameters.
  • the processor 61 is further configured to: execute a processing program of the night scene image to implement the following steps, that is, steps 721 to 722:
  • Step 721 Select one frame image as the reference image in the acquired multi-frame night scene image
  • Step 722 Align the images other than the reference image in the acquired multi-frame night scene image with the reference image, respectively.
  • the processor 61 is further configured to: execute a processing program of the night scene image to implement the following steps, that is, steps 731 to 732:
  • Step 731 respectively acquiring pixel values of each pixel of each night scene image in the multi-frame night scene image after image registration processing
  • Step 732 Calculate, from the acquired pixel value of each pixel of each frame of the night scene image, the pixel value of each pixel of the image after the fusion denoising process according to the following formula: D(x, y) = (1/n) · Σ_{i=1}^{n} I_i(x, y)
  • D(x, y) represents the pixel value at the point (x, y) of the image after the fused denoising process
  • I_i(x, y) represents the pixel value of the i-th frame night scene image in the multi-frame night scene image after the image registration processing at the point (x, y)
  • n represents the number of frames of the night scene image after the image registration processing.
  • the processor 61 is further configured to: execute the processing program of the night scene image to implement the following steps, that is, steps 741 to 744:
  • Step 741 Obtain a gray value of each point in a preset area of the image after the denoising processing
  • Step 742 Calculate a gray average value of the preset region of the image after the denoising process and a local standard deviation according to the acquired gray value of each point;
  • Step 743 determining an enhancement coefficient according to the calculated local standard deviation
  • Step 744 Perform night scene enhancement processing on the image after the fusion denoising process according to the calculated gray level average value and the determined enhancement coefficient.
  • the processor 61 executes the processing program of the night scene image to implement the implementation of the step 740, which may include:
  • night scene enhancement processing is performed on the image after the fusion denoising process according to the following formula: f(x, y) = average_H(x, y) + α(x, y) · (H(x, y) - average_H(x, y))
  • f(x, y) represents the pixel value of the image after the night scene enhancement processing at the point (x, y)
  • average_H(x, y) represents the gray average value at the point (x, y)
  • α(x, y) represents the enhancement coefficient at the point (x, y)
  • H(x, y) represents the gray value at the point (x, y).
  • the processor 61 executes the processing program of the night scene image to calculate the gray average value, which may include:
  • the gray average value is calculated according to the following formula: average_H(x, y) = (1/((2m+1)(2k+1))) · Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)
  • 2m+1 represents the length of the preset area
  • 2k+1 represents the width of the preset area
  • H(l, j) represents the gray value at the point (l, j)
  • the point (x, y) represents the center point of the preset area
  • m and k are both positive integers.
  • the processor 61 executes the processing program of the night scene image to calculate the local standard deviation, which may include: calculating the local standard deviation according to the following formula: σ_H(x, y) = sqrt( (1/((2m+1)(2k+1))) · Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} (H(l, j) - average_H(x, y))^2 )
  • σ_H(x, y) represents the local standard deviation at the point (x, y).
  • the processor 61 executes the processing program of the night scene image to determine the enhancement coefficient, which may include:
  • the enhancement factor is determined according to the following formula:
  • M represents the length of the image after the fused denoising process
  • N represents the width of the image after the fused denoising process
  • An embodiment of the present invention further provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps, namely steps 810 to 840:
  • Step 810: acquiring multiple frames of night scene images of the same preview picture;
  • Step 820: performing image registration processing on the acquired multi-frame night scene images;
  • Step 830: performing fusion denoising processing on the multi-frame night scene images after the image registration processing;
  • Step 840: performing night scene enhancement processing on the image after the fusion denoising processing.
  • The computer readable storage medium provided by the embodiment of the present invention performs image registration processing and fusion denoising processing on multiple frames of night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
  • in step 810, the one or more programs are further configured to be executed by one or more processors to implement the following steps, namely steps 811 to 812:
  • Step 811: setting shooting parameters;
  • Step 812: acquiring multiple frames of night scene images of the same preview picture according to the set shooting parameters.
  • in step 820, the one or more programs are further configured to be executed by one or more processors to implement the following steps, namely steps 821 to 822:
  • Step 821: selecting one frame from the acquired multi-frame night scene images as a reference image;
  • Step 822: aligning each image other than the reference image among the acquired multi-frame night scene images with the reference image.
  • in step 830, the one or more programs are further configured to be executed by one or more processors to implement the following steps, namely steps 831 to 832:
  • Step 831: acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;
  • Step 832: calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:
  • D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing;
  • I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing;
  • n represents the number of frames after the image registration processing.
  • in step 840, the one or more programs are further configured to be executed by one or more processors to implement the following steps, namely steps 841 to 844:
  • Step 841: acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;
  • Step 842: calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;
  • Step 843: determining an enhancement coefficient according to the calculated local standard deviation;
  • Step 844: performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
  • the manner in which the one or more programs are executed by one or more processors to implement step 840 may include:
  • performing night scene enhancement processing on the image after the fusion denoising processing according to the following formula:
  • f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing;
  • average_H(x, y) represents the gray-level average at point (x, y);
  • G(x, y) represents the enhancement coefficient at point (x, y);
  • H(x, y) represents the gray value at point (x, y).
  • the manner in which the one or more programs are executed by one or more processors to calculate the gray-level average may include:
  • calculating the gray-level average according to the following formula:
  • 2m+1 represents the length of the preset region;
  • 2k+1 represents the width of the preset region;
  • H(l, j) represents the gray value at point (l, j);
  • point (x, y) represents the center point of the preset region;
  • m and k are both positive integers.
  • the manner in which the one or more programs are executed by one or more processors to calculate the local standard deviation may include:
  • σ_H(x, y) represents the local standard deviation at point (x, y).
  • the manner in which the one or more programs are executed by one or more processors to determine the enhancement coefficient may include:
  • determining the enhancement coefficient according to the following formula:
  • M represents the length of the image after the fusion denoising processing;
  • N represents the width of the image after the fusion denoising processing.
  • all or part of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module.
  • the devices/functional modules/functional units in the above embodiments may be implemented using general-purpose computing devices; they may be centralized on a single computing device or distributed over a network composed of multiple computing devices.
  • when the devices/functional modules/functional units in the above embodiments are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer readable storage medium.
  • the above-mentioned computer readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • multiple frames of night scene images of the same preview picture are acquired by the mobile terminal; image registration processing is performed on the acquired multi-frame night scene images; fusion denoising processing is performed on the multi-frame night scene images after the image registration processing; and night scene enhancement processing is performed on the image after the fusion denoising processing; the embodiments of the present invention improve the sharpness and contrast of the night scene image and enhance the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A night scene image processing method and a mobile terminal. The method comprises: a mobile terminal acquires multiple frames of night scene images of the same preview picture; image registration processing is performed on the acquired multi-frame night scene images; fusion denoising processing is performed on the multi-frame night scene images after the image registration processing; and night scene enhancement processing is performed on the image after the fusion denoising processing.

Description

Night scene image processing method and mobile terminal

Technical field

The present application relates to, but is not limited to, the field of image processing.

Background

In night scene enhancement photography in the related art, an image is generally captured first, and then an image brightness algorithm or a contrast enhancement algorithm is used to enhance the captured night scene image, which raises the brightness and contrast of the dark parts of the image. However, because a night scene image contains considerable noise, and that noise may also be amplified, the enhancement effect on the whole night scene image is poor.

Therefore, in the related art a single-frame image denoising method can be used to denoise the night scene image, but the edge details of a night scene image processed by a single-frame denoising method are blurred, the denoising effect is poor, and the user experience is unsatisfactory.
Summary of the invention

The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.

Provided herein are a night scene image processing method and a mobile terminal, so as to improve the sharpness and contrast of night scene images and enhance the user experience.

A night scene image processing method includes:

acquiring, by a mobile terminal, multiple frames of night scene images of the same preview picture;

performing image registration processing on the acquired multi-frame night scene images;

performing fusion denoising processing on the multi-frame night scene images after the image registration processing;

performing night scene enhancement processing on the image after the fusion denoising processing.
Optionally, in the night scene image processing method described above, performing image registration processing on the acquired multi-frame night scene images includes:

selecting one frame from the acquired multi-frame night scene images as a reference image;

aligning each image other than the reference image among the acquired multi-frame night scene images with the reference image.

Optionally, in the night scene image processing method described above, performing fusion denoising processing on the multi-frame night scene images after the image registration processing includes:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.
Optionally, in the night scene image processing method described above, performing night scene enhancement processing on the image after the fusion denoising processing includes:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.

Optionally, in the night scene image processing method described above, night scene enhancement processing is performed on the image after the fusion denoising processing according to the following formula:

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).

Optionally, in the night scene image processing method described above, the gray-level average is calculated according to the following formula:

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.

Optionally, in the night scene image processing method described above, the local standard deviation is calculated according to the following formula:

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in the night scene image processing method described above, the enhancement coefficient is determined according to the following formula:

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
A mobile terminal includes an acquisition module, a registration module, a denoising module, and an enhancement module;

the acquisition module is configured to acquire multiple frames of night scene images of the same preview picture;

the registration module is configured to perform image registration processing on the multi-frame night scene images acquired by the acquisition module;

the denoising module is configured to perform fusion denoising processing on the multi-frame night scene images after the registration module performs the image registration processing;

the enhancement module is configured to perform night scene enhancement processing on the image after the denoising module performs the fusion denoising processing.

Optionally, in the mobile terminal described above, the registration module performing image registration processing on the multi-frame night scene images acquired by the acquisition module includes:

selecting one frame from the multi-frame night scene images acquired by the acquisition module as a reference image;

aligning each image other than the reference image among the multi-frame night scene images acquired by the acquisition module with the reference image.

Optionally, in the mobile terminal described above, the denoising module performing fusion denoising processing on the multi-frame night scene images after the registration module performs the image registration processing includes:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.

Optionally, in the mobile terminal described above, the enhancement module performing night scene enhancement processing on the image after the denoising module performs the fusion denoising processing includes:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
A mobile terminal includes a processor, a memory, and a communication bus;

the communication bus is configured to implement connection and communication between the processor and the memory;

the processor is configured to execute a night scene image processing program stored in the memory, so as to implement the following steps:

acquiring multiple frames of night scene images of the same preview picture;

performing image registration processing on the acquired multi-frame night scene images;

performing fusion denoising processing on the multi-frame night scene images after the image registration processing;

performing night scene enhancement processing on the image after the fusion denoising processing.

Optionally, in the mobile terminal described above, in the step of performing image registration processing on the acquired multi-frame night scene images, the processor is further configured to execute the night scene image processing program so as to implement the following steps:

selecting one frame from the acquired multi-frame night scene images as a reference image;

aligning each image other than the reference image among the acquired multi-frame night scene images with the reference image.

Optionally, in the mobile terminal described above, in the step of performing fusion denoising processing on the multi-frame night scene images after the image registration processing, the processor is further configured to execute the night scene image processing program so as to implement the following steps:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.

Optionally, in the mobile terminal described above, in the step of performing night scene enhancement processing on the image after the fusion denoising processing, the processor is further configured to execute the night scene image processing program so as to implement the following steps:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors so as to implement the following steps:

acquiring multiple frames of night scene images of the same preview picture;

performing image registration processing on the acquired multi-frame night scene images;

performing fusion denoising processing on the multi-frame night scene images after the image registration processing;

performing night scene enhancement processing on the image after the fusion denoising processing.

Optionally, in the computer readable storage medium described above, in the step of performing image registration processing on the acquired multi-frame night scene images, the one or more programs are further executable by the one or more processors so as to implement the following steps:

selecting one frame from the acquired multi-frame night scene images as a reference image;

aligning each image other than the reference image among the acquired multi-frame night scene images with the reference image.

Optionally, in the computer readable storage medium described above, in the step of performing fusion denoising processing on the multi-frame night scene images after the image registration processing, the one or more programs are further executable by the one or more processors so as to implement the following steps:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.

Optionally, in the computer readable storage medium described above, in the step of performing night scene enhancement processing on the image after the fusion denoising processing, the one or more programs are further executable by the one or more processors so as to implement the following steps:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
The night scene image processing method and mobile terminal provided by the embodiments of the present invention acquire, via a mobile terminal, multiple frames of night scene images of the same preview picture; perform image registration processing on the acquired multi-frame night scene images; perform fusion denoising processing on the multi-frame night scene images after the image registration processing; and perform night scene enhancement processing on the image after the fusion denoising processing. The embodiments of the present invention improve the sharpness and contrast of night scene images and enhance the user experience.

Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief description of the drawings

FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing an embodiment of the present invention;

FIG. 2 is a schematic diagram of a communication system supporting communication between mobile terminals according to an embodiment of the present invention;

FIG. 3 is a flowchart of a night scene image processing method according to an embodiment of the present invention;

FIG. 4 is a flowchart of another night scene image processing method according to an embodiment of the present invention;

FIG. 5 is a flowchart of yet another night scene image processing method according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of yet another mobile terminal according to an embodiment of the present invention.
Detailed description

It should be understood that the embodiments described below are merely intended to explain, not to limit, the present invention.

The embodiments of the present invention are described in detail below with reference to the drawings. It should be noted that, where no conflict arises, the embodiments herein and the features in the embodiments may be combined with one another arbitrarily.

The steps shown in the flowcharts of the drawings may be executed in a computer system according to a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.

A mobile terminal implementing the embodiments of the present application will now be described with reference to the drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed types of terminals.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing an embodiment of the present invention.

The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. FIG. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.

The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.

The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to a terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals, and the like, and may also include broadcast signals combined with TV or radio broadcast signals. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and so on. The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit various broadcast systems providing broadcast signals as well as the above digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).

The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.

The wireless internet module 113 supports wireless internet access of the mobile terminal and may be internally or externally coupled to the terminal. The wireless internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.

The short-range communication module 114 is a module for supporting short-range communication. Examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), the Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.

The location information module 115 is a module for checking or acquiring location information of the mobile terminal; a typical example of the location information module is GPS (global positioning system). According to the related art, the GPS module 115 calculates distance information from three or more satellites and accurate time information, and applies triangulation to the calculated information so as to accurately compute three-dimensional current location information in terms of longitude, latitude, and altitude. In the related art, the method for calculating position and time information uses three satellites and corrects the errors of the calculated position and time information using one further satellite. In addition, the GPS module 115 can calculate speed information by continuously computing the current location information in real time.

The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display module 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the configuration of the mobile terminal. The microphone 122 may receive sound (audio data) via a microphone in operating modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the case of the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 and output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by a user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display module 151 in the form of a layer, a touch screen may be formed.

The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled to an external device.

The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card; therefore, the identification device may be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal and an external device.

In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal. The various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner, and may include a display module 151, an audio output module 152, an alarm module 153, and the like.

The display module 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display module 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in the video call mode or the image capture mode, the display module 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and the like.

Meanwhile, when the display module 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display module 151 may serve as both an input device and an output device. The display module 151 may include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in modes such as a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, or a broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a particular function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.

The alarm module 153 may provide output to notify of the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm module 153 may provide output in a different manner to notify of the occurrence of an event. For example, the alarm module 153 may provide output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm module 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm module 153 may also provide output notifying of the occurrence of an event via the display module 151 or the audio output module 152.

The memory 160 may store software programs for processing and control operations executed by the controller 180, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on vibrations and audio signals of various modes output when a touch is applied to the touch screen.

The memory 160 may include at least one type of storage medium including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.

The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.

The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the elements and components.

The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation may be implemented in the controller 180. For a software implementation, an implementation such as a procedure or function may be implemented with a separate software module that allows at least one function or operation to be performed. Software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type mobile terminals will be described as an example. Accordingly, the present application can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.

The mobile terminal 100 as shown in FIG. 1 may be constructed to operate with communication systems that transmit data via frames or packets, such as wired and wireless communication systems and satellite-based communication systems.

Reference is now made to FIG. 2, a schematic diagram of a communication system supporting communication between mobile terminals according to an embodiment of the present invention; FIG. 2 describes a communication system in which a mobile terminal according to an embodiment of the present invention can operate.

Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.

Referring to FIG. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also constructed to form an interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in FIG. 2 may include a plurality of BSCs 275.

Each BS 270 may serve one or more sectors (or regions), with each sector covered by an omnidirectional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support multiple frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).

The intersection of a sector and a frequency assignment may be called a CDMA channel. The BS 270 may also be called a base transceiver subsystem (BTS) or another equivalent term. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be called a "cell site". Alternatively, the individual sectors of a particular BS 270 may be called a plurality of cell sites.

As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive broadcast signals transmitted by the BT 295. In FIG. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.

In FIG. 2, a plurality of satellites 300 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in FIG. 1 is typically constructed to cooperate with the satellites 300 to obtain desired positioning information. Instead of or in addition to GPS tracking technology, other technologies that can track the position of a mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.

As one typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging, and other types of communication. Each reverse link signal received by a particular base station 270 is processed within that particular BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions including coordination of soft handover procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.

Based on the above hardware structure of the mobile terminal and the communication system, the night scene image processing method and mobile terminal of the embodiments of the present invention are proposed.
FIG. 3 is a flowchart of a night scene image processing method according to an embodiment of the present invention. As shown in FIG. 3, the night scene image processing method provided by this embodiment of the present invention may include the following steps, namely steps 310 to 340:

Step 310: the mobile terminal acquires multiple frames of night scene images of the same preview picture.

In this embodiment of the present invention, step 310 may include the following steps:

setting shooting parameters;

acquiring multiple frames of night scene images of the same preview picture according to the set shooting parameters.

In this embodiment, the time interval between any two adjacent frames among the multi-frame night scene images acquired in step 310 may be a preset duration.

In practical applications, the shooting parameters of the multi-frame night scene images acquired for the same preview picture are identical.

In this embodiment, the shooting parameters include, but are not limited to, one or more of sensitivity (ISO), exposure, focus parameters, and the like.

In this embodiment, the preset duration may be a default value set by the system of the mobile terminal, or may be set by the user according to the user's own needs via a human-machine interaction interface provided by the mobile terminal; for example, it may be 50 ms, 35 ms, or 100 ms.
Optionally, the night scene image method provided by this embodiment of the present invention may further include, before step 310, the following steps:

adding, on the shooting page of the mobile terminal, a configuration item for enabling or disabling the night scene image processing function;

enabling the night scene image processing function when an enabling operation on the configuration item for enabling or disabling the night scene image processing function is detected;

disabling the night scene image processing function when a disabling operation on the configuration item for enabling or disabling the night scene image processing function is detected.

Optionally, the night scene image method provided by this embodiment of the present invention may further include, before step 310, the following step:

proceeding to step 310 when it is detected that the night scene image processing function is enabled and an instruction by which the user confirms shooting is detected.
Step 320: perform image registration processing on the acquired multi-frame night scene images.

Step 330: perform fusion denoising processing on the multi-frame night scene images after the image registration processing.

Step 340: perform night scene enhancement processing on the image after the fusion denoising processing.

The night scene image processing method provided by this embodiment of the present invention performs image registration processing and fusion denoising processing on the acquired multi-frame night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
Optionally, FIG. 4 is a flowchart of another night scene image processing method according to an embodiment of the present invention. On the basis of the embodiment shown in FIG. 3, step 320 in this embodiment may include the following steps, namely steps 321 to 322:

Step 321: select one frame from the acquired multi-frame night scene images as a reference image;

Step 322: align each image other than the reference image among the acquired multi-frame night scene images with the reference image.

In practical applications, the first frame, the second frame, or the last frame may be selected as the reference image.

It should be noted that how to align one frame with other frames is a conventional technique well known to those skilled in the art; for example, the Lucas-Kanade optical flow method may be used to align different frames, and the embodiments of the present invention do not restrict which registration (or alignment) method is used.

In this embodiment, the multi-frame night scene images after the image registration processing of step 320 may include the multiple aligned images and the reference image.
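The text names the Lucas-Kanade optical flow method as one possible way to align the non-reference frames with the reference frame. As a self-contained stand-in (assuming a pure integer translation between frames, which is far cruder than optical flow and is only an illustration of "align every other frame to the reference frame"), the sketch below finds the shift that minimizes the sum of absolute differences against the reference:

```python
import numpy as np

def align_to_reference(frame, ref, max_shift=3):
    """Brute-force search for the integer (dy, dx) translation of `frame`
    that best matches `ref`, then return the shifted frame."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).sum()   # sum of absolute differences
            if err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# A reference frame with one bright pixel, and the same scene shifted 2 px right.
ref = np.zeros((8, 8)); ref[3, 3] = 1.0
moved = np.roll(ref, 2, axis=1)
aligned = align_to_reference(moved, ref)
```

In a real pipeline this step would be replaced by per-pixel optical flow or another registration method, as the text notes.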
Optionally, in this embodiment, step 330 may include the following steps, namely steps 331 to 332:

Step 331: acquire the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

Step 332: calculate, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to formula (1):

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)    (1)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.

Formula (1) is derived as follows. Suppose there are n frames after the registration processing, denoted [I_1, I_2, ..., I_n], and the pixel value of the fusion-denoised image at point (x, y) is D(x, y). Let I_i(x, y) be the true pixel value of the i-th night scene frame at point (x, y), and N_i(x, y) be the noise disturbance of the i-th night scene frame at point (x, y); then formula (1.0) holds:

D(x, y) = (1/n)·Σ_{i=1}^{n} [I_i(x, y) + N_i(x, y)]    (1.0)

Since the noise generally follows a Gaussian model, (1/n)·Σ_{i=1}^{n} N_i(x, y) is approximately equal to 0, so formula (1) follows from formula (1.0), and the fusion-denoised pixel value at point (x, y) can be calculated by formula (1).
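The derivation above rests on zero-mean Gaussian noise averaging out across n frames, so that the mean of the registered frames approaches the true pixel values. This can be checked numerically; the frame count, noise level, and test image here are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
true_image = np.full((32, 32), 50.0)             # the noise-free scene I_i(x, y)
n = 16                                           # number of registered frames
frames = [true_image + rng.normal(0, 8, true_image.shape) for _ in range(n)]

fused = np.mean(frames, axis=0)                  # formula (1): D = (1/n)*sum(I_i)

single_err = np.abs(frames[0] - true_image).mean()
fused_err = np.abs(fused - true_image).mean()    # noise shrinks roughly by sqrt(n)
```

Averaging 16 frames cuts the noise amplitude by about a factor of four, consistent with the Gaussian-noise argument.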
Optionally, in this embodiment, step 340 may include the following steps, namely steps 341 to 344:

Step 341: acquire the gray value of each point within a preset region of the image after the fusion denoising processing;

Step 342: calculate, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

Step 343: determine an enhancement coefficient according to the calculated local standard deviation;

Step 344: perform night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, night scene enhancement processing may be performed on the image after the fusion denoising processing according to formula (2):

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];    (2)

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).

Optionally, in this embodiment, the gray-level average may be calculated according to formula (3):

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)    (3)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.

Optionally, in this embodiment, the preset region may be a rectangle or a square; when the preset region is a square, m = k. It should be noted that the preset region may also be a circle centered at point (x, y), or a diamond centered at point (x, y), and so on; when the preset region is a circle centered at point (x, y), its length and width may be determined from the radius of the circle.
Optionally, in this embodiment, the local standard deviation may be calculated according to formula (4):

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )    (4)

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the enhancement coefficient may be determined according to formula (5):

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)    (5)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
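Steps 341 to 344 and formulas (2) to (4) can be combined into one small routine. Two assumptions in this sketch should be flagged: border pixels use a window clipped to the image (the formulas assume a full (2m+1)×(2k+1) window centered at (x, y)), and the gain follows the classic adaptive contrast enhancement form, the global mean of the local standard deviation divided by the local standard deviation, which is consistent with the variables M, N, and σ_H described for formula (5) but is not confirmed by the original equation image:

```python
import numpy as np

def enhance_night(H, m=1, k=1, eps=1e-6):
    """Adaptive enhancement: f = avg + G*(H - avg), per formulas (2)-(4)."""
    rows, cols = H.shape
    avg = np.zeros_like(H, dtype=float)      # average_H(x, y), formula (3)
    std = np.zeros_like(H, dtype=float)      # sigma_H(x, y), formula (4)
    for x in range(rows):
        for y in range(cols):
            # Window clipped at the borders (simplification of the full window).
            win = H[max(0, x - m):x + m + 1, max(0, y - k):y + k + 1]
            avg[x, y] = win.mean()
            std[x, y] = win.std()
    # Assumed formula (5): global mean of sigma_H over the local sigma_H.
    G = std.mean() / (std + eps)
    return avg + G * (H - avg)               # formula (2)

img = np.array([[10., 10., 10.],
                [10., 40., 10.],
                [10., 10., 10.]])
out = enhance_night(img)
```

Gains of this form amplify detail most where local contrast is low, which is the behavior an enhancement coefficient driven by the local standard deviation is meant to achieve.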
The night scene image processing method provided by this embodiment of the present invention performs image registration processing and fusion denoising processing on the acquired multi-frame night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
FIG. 5 is a flowchart of yet another night scene image processing method according to an embodiment of the present invention. As shown in FIG. 5, the night scene image processing method provided by this embodiment may include the following steps, namely steps 401 to 410:

Step 401: add, on the shooting page of the mobile terminal, a configuration item for enabling or disabling the night scene image processing function.

Step 402: the mobile terminal detects whether there is an enabling operation on the configuration item for enabling or disabling the night scene image processing function. When an enabling operation on the configuration item is detected, step 403 is executed; when no enabling operation is detected, this flow ends.

Step 403: the mobile terminal enables the night scene image processing function.

Step 404: the mobile terminal detects whether there is an instruction by which the user confirms shooting. When such an instruction is detected, step 405 is executed; when no such instruction is detected, step 404 continues to be executed.

Step 405: the mobile terminal acquires multiple frames of night scene images of the same preview picture.
Optionally, in this embodiment, step 405 may include the following steps:

setting shooting parameters;

acquiring multiple frames of night scene images of the same preview picture according to the set shooting parameters.

In this embodiment, the time interval between any two adjacent frames among the multi-frame night scene images acquired in step 405 may be a preset duration.

In practical applications, the shooting parameters of the multi-frame night scene images acquired for the same preview picture are identical.

In this embodiment, the shooting parameters include, but are not limited to, one or more of sensitivity (ISO), exposure, focus parameters, and the like.

In this embodiment, the preset duration may be a default value set by the system of the mobile terminal, or may be set by the user according to the user's own needs via a human-machine interaction interface provided by the mobile terminal; for example, it may be 50 ms, 35 ms, or 100 ms.
Step 406: the mobile terminal performs image registration processing on the acquired multi-frame night scene images.

Optionally, in this embodiment, step 406 may include the following steps:

selecting one frame from the acquired multi-frame night scene images as a reference image;

aligning each image other than the reference image among the acquired multi-frame night scene images with the reference image.

In practical applications, the first frame, the second frame, or the last frame may be selected as the reference image.

It should be noted that how to align one frame with other frames is a conventional technique well known to those skilled in the art; for example, the Lucas-Kanade optical flow method may be used to align different frames, and the embodiments of the present invention do not restrict which registration (or alignment) method is used.

Step 407: the mobile terminal performs fusion denoising processing on the multi-frame night scene images after the image registration processing.

In this embodiment, the multi-frame night scene images after the image registration processing of step 406 may include the multiple aligned images and the reference image.
Optionally, in this embodiment, step 407 may include the following steps:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to formula (1):

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)    (1)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.

Formula (1) is derived as follows. Suppose there are n frames after the registration processing, denoted [I_1, I_2, ..., I_n], and the pixel value of the fusion-denoised image at point (x, y) is D(x, y). Let I_i(x, y) be the true pixel value of the i-th night scene frame at point (x, y), and N_i(x, y) be the noise disturbance of the i-th night scene frame at point (x, y); then formula (1.0) holds:

D(x, y) = (1/n)·Σ_{i=1}^{n} [I_i(x, y) + N_i(x, y)]    (1.0)

Since the noise generally follows a Gaussian model, (1/n)·Σ_{i=1}^{n} N_i(x, y) is approximately equal to 0, so formula (1) follows from formula (1.0), and the fusion-denoised pixel value at point (x, y) can be calculated by formula (1).
Step 408: the mobile terminal performs night scene enhancement processing on the image after the fusion denoising processing.

Optionally, in this embodiment, step 408 may include the following steps:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, night scene enhancement processing may be performed on the image after the fusion denoising processing according to formula (2):

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];    (2)

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).

Optionally, in this embodiment, the gray-level average may be calculated according to formula (3):

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)    (3)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.

Optionally, in this embodiment, the preset region may be a rectangle or a square; when the preset region is a square, m = k. It should be noted that the preset region may also be a circle centered at point (x, y), or a diamond centered at point (x, y), and so on; when the preset region is a circle centered at point (x, y), its length and width may be determined from the radius of the circle.
Optionally, in this embodiment, the local standard deviation may be calculated according to formula (4):

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )    (4)

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the enhancement coefficient may be determined according to formula (5):

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)    (5)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
Step 409: the mobile terminal detects whether there is a disabling operation on the configuration item for enabling or disabling the night scene image processing function. When a disabling operation on the configuration item is detected, step 410 is executed; when no disabling operation is detected, this flow ends.

Step 410: the mobile terminal disables the night scene image processing function.
With respect to the night scene image processing method provided by any of the embodiments shown in FIG. 3 to FIG. 5, an embodiment of the present invention provides a mobile terminal for executing the above night scene image processing method.

FIG. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention. As shown in FIG. 6, the mobile terminal provided by this embodiment may include an acquisition module 50, a registration module 51, a denoising module 52, and an enhancement module 53.

The acquisition module 50 is configured to acquire multiple frames of night scene images of the same preview picture.

In this embodiment, the manner in which the acquisition module 50 acquires the multi-frame night scene images of the same preview picture may include:

setting shooting parameters;

acquiring multiple frames of night scene images of the same preview picture according to the set shooting parameters.

In this embodiment, the time interval between any two adjacent frames among the multi-frame night scene images acquired by the acquisition module 50 may be a preset duration.

In this embodiment, the acquisition module 50 may be, for example, a camera.

In this embodiment, the preset duration may be a default value set by the system of the mobile terminal, or may be set by the user according to the user's own needs via a human-machine interaction interface provided by the mobile terminal; for example, it may be 50 ms, 35 ms, or 100 ms.
The registration module 51 is configured to perform image registration processing on the multi-frame night scene images acquired by the acquisition module 50.

The denoising module 52 is configured to perform fusion denoising processing on the multi-frame night scene images after the registration module 51 performs the image registration processing.

The enhancement module 53 is configured to perform night scene enhancement processing on the image after the denoising module 52 performs the fusion denoising processing.

The mobile terminal provided by this embodiment of the present invention performs image registration processing and fusion denoising processing on the acquired multi-frame night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
Optionally, in this embodiment, the manner in which the registration module 51 performs image registration processing on the multi-frame night scene images acquired by the acquisition module 50 may include:

selecting one frame from the multi-frame night scene images acquired by the acquisition module 50 as a reference image;

aligning each image other than the reference image among the multi-frame night scene images acquired by the acquisition module 50 with the reference image.

In practical applications, the registration module 51 may, for example, use the Lucas-Kanade optical flow method to align different frames.
Optionally, in this embodiment, the manner in which the denoising module 52 performs fusion denoising processing on the multi-frame night scene images after the registration module 51 performs the image registration processing may include:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to formula (1):

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)    (1)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.
Optionally, in this embodiment, the manner in which the enhancement module 53 performs night scene enhancement processing on the image after the denoising module 52 performs the fusion denoising processing may include:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, the enhancement module 53 may perform night scene enhancement processing on the image after the fusion denoising processing according to formula (2):

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];    (2)

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).

Optionally, in this embodiment, the enhancement module 53 may calculate the gray-level average according to formula (3):

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)    (3)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.

Optionally, in this embodiment, the preset region may be a rectangle or a square; when the preset region is a square, m = k. It should be noted that the preset region may also be a circle centered at point (x, y), or a diamond centered at point (x, y), and so on; when the preset region is a circle centered at point (x, y), its length and width may be determined from the radius of the circle.
Optionally, in this embodiment, the enhancement module 53 may calculate the local standard deviation according to formula (4):

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )    (4)

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the enhancement module 53 may determine the enhancement coefficient according to formula (5):

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)    (5)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
Optionally, FIG. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention. On the basis of the structure of the mobile terminal shown in FIG. 6, the mobile terminal provided by this embodiment may further include:

a setting module 54, configured to add, on the shooting page of the mobile terminal, a configuration item for enabling or disabling the night scene image processing function.

Optionally, the mobile terminal in this embodiment may further include:

a detection module 55, configured to enable the night scene image processing function when an enabling operation on the configuration item for enabling or disabling the night scene image processing function is detected.

Optionally, in this embodiment, the detection module 55 is further configured to disable the night scene image processing function when a disabling operation on the configuration item for enabling or disabling the night scene image processing function is detected.

Optionally, in this embodiment, the detection module 55 is further configured to notify the acquisition module 50 when it is detected that the night scene image processing function is enabled and an instruction by which the user confirms shooting is detected.
With respect to the night scene image processing method shown in FIG. 5, the mobile terminal in the embodiment of the present invention shown in FIG. 7 is used to execute that method.

The mobile terminal includes a setting module 54, a detection module 55, an acquisition module 50, a registration module 51, a denoising module 52, and an enhancement module 53.

The setting module 54 is configured to add, on the shooting page of the mobile terminal, a configuration item for enabling or disabling the night scene image processing function.

The detection module 55 is configured to enable the night scene image processing function when an enabling operation on the configuration item for enabling or disabling the night scene image processing function is detected, and to notify the acquisition module 50 when it is detected that the night scene image processing function is enabled and an instruction by which the user confirms shooting is detected.

Optionally, the detection module 55 is further configured to disable the night scene image processing function when a disabling operation on the configuration item for enabling or disabling the night scene image processing function is detected.

The acquisition module 50 is configured to acquire multiple frames of night scene images of the same preview picture.

In this embodiment, the manner in which the acquisition module 50 acquires the multi-frame night scene images of the same preview picture may include:

setting shooting parameters;

acquiring multiple frames of night scene images of the same preview picture according to the set shooting parameters.

In this embodiment, the time interval between any two adjacent frames among the multi-frame night scene images acquired by the acquisition module 50 may be a preset duration.

In this embodiment, the acquisition module 50 may be, for example, a camera.

In this embodiment, the preset duration may be a default value set by the system of the mobile terminal, or may be set by the user according to the user's own needs via a human-machine interaction interface provided by the mobile terminal; for example, it may be 50 ms, 35 ms, or 100 ms.
The registration module 51 is configured to perform image registration processing on the multi-frame night scene images acquired by the acquisition module 50.

Optionally, in this embodiment, the manner in which the registration module 51 performs image registration processing on the multi-frame night scene images acquired by the acquisition module 50 may include:

selecting one frame from the multi-frame night scene images acquired by the acquisition module 50 as a reference image;

aligning each image other than the reference image among the multi-frame night scene images acquired by the acquisition module 50 with the reference image.

In practical applications, the registration module 51 may, for example, use the Lucas-Kanade optical flow method to align different frames.

The denoising module 52 is configured to perform fusion denoising processing on the multi-frame night scene images after the registration module 51 performs the image registration processing.

Optionally, in this embodiment, the manner in which the denoising module 52 performs fusion denoising processing on the multi-frame night scene images after the registration module 51 performs the image registration processing may include:

acquiring the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

calculating, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to formula (1):

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)    (1)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.
The enhancement module 53 is configured to perform night scene enhancement processing on the image after the denoising module 52 performs the fusion denoising processing.

Optionally, in this embodiment, the manner in which the enhancement module 53 performs night scene enhancement processing on the image after the denoising module 52 performs the fusion denoising processing may include:

acquiring the gray value of each point within a preset region of the image after the fusion denoising processing;

calculating, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

determining an enhancement coefficient according to the calculated local standard deviation;

performing night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, the enhancement module 53 may perform night scene enhancement processing on the image after the fusion denoising processing according to formula (2):

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];    (2)

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).

Optionally, in this embodiment, the enhancement module 53 may calculate the gray-level average according to formula (3):

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)    (3)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.

Optionally, in this embodiment, the preset region may be a rectangle or a square; when the preset region is a square, m = k. It should be noted that the preset region may also be a circle centered at point (x, y), or a diamond centered at point (x, y), and so on; when the preset region is a circle centered at point (x, y), its length and width may be determined from the radius of the circle.
Optionally, in this embodiment, the enhancement module 53 may calculate the local standard deviation according to formula (4):

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )    (4)

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the enhancement module 53 may determine the enhancement coefficient according to formula (5):

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)    (5)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
With respect to the night scene image processing method provided by any of the embodiments shown in FIG. 3 to FIG. 5, an embodiment of the present invention further provides a mobile terminal for executing the above night scene image processing method.

FIG. 8 is a schematic structural diagram of yet another mobile terminal according to an embodiment of the present invention. As shown in FIG. 8, the mobile terminal 60 provided by this embodiment may include a processor 61, a memory 62, and a communication bus 63.

The communication bus 63 is configured to implement connection and communication between the processor 61 and the memory 62;

the processor 61 is configured to execute a night scene image processing program stored in the memory 62, so as to implement the following steps, namely steps 710 to 740:

Step 710: acquire multiple frames of night scene images of the same preview picture;

Step 720: perform image registration processing on the acquired multi-frame night scene images;

Step 730: perform fusion denoising processing on the multi-frame night scene images after the image registration processing;

Step 740: perform night scene enhancement processing on the image after the fusion denoising processing.

The mobile terminal provided by this embodiment of the present invention performs image registration processing and fusion denoising processing on the acquired multi-frame night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
Optionally, in this embodiment, in step 710, the processor 61 is further configured to execute the night scene image processing program so as to implement the following steps, namely steps 711 to 712:

Step 711: set shooting parameters;

Step 712: acquire multiple frames of night scene images of the same preview picture according to the set shooting parameters.

Optionally, in this embodiment, in step 720, the processor 61 is further configured to execute the night scene image processing program so as to implement the following steps, namely steps 721 to 722:

Step 721: select one frame from the acquired multi-frame night scene images as a reference image;

Step 722: align each image other than the reference image among the acquired multi-frame night scene images with the reference image.
Optionally, in this embodiment, in step 730, the processor 61 is further configured to execute the night scene image processing program so as to implement the following steps, namely steps 731 to 732:

Step 731: acquire the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

Step 732: calculate, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.
Optionally, in this embodiment, in step 740, the processor 61 is further configured to execute the night scene image processing program so as to implement the following steps, namely steps 741 to 744:

Step 741: acquire the gray value of each point within a preset region of the image after the fusion denoising processing;

Step 742: calculate, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

Step 743: determine an enhancement coefficient according to the calculated local standard deviation;

Step 744: perform night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, the manner in which the processor 61 executes the night scene image processing program to implement step 740 may include:

performing night scene enhancement processing on the image after the fusion denoising processing according to the following formula:

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).
Optionally, in this embodiment, the manner in which the processor 61 executes the night scene image processing program to calculate the gray-level average may include:

calculating the gray-level average according to the following formula:

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.
Optionally, in this embodiment, the manner in which the processor 61 executes the night scene image processing program to calculate the local standard deviation may include:

calculating the local standard deviation according to the following formula:

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the manner in which the processor 61 executes the night scene image processing program to determine the enhancement coefficient may include:

determining the enhancement coefficient according to the following formula:

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
An embodiment of the present invention further provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors so as to implement the following steps, namely steps 810 to 840:

Step 810: acquire multiple frames of night scene images of the same preview picture;

Step 820: perform image registration processing on the acquired multi-frame night scene images;

Step 830: perform fusion denoising processing on the multi-frame night scene images after the image registration processing;

Step 840: perform night scene enhancement processing on the image after the fusion denoising processing.

The computer readable storage medium provided by this embodiment of the present invention performs image registration processing and fusion denoising processing on the acquired multi-frame night scene images of the same preview picture, and performs night scene enhancement processing on the image after the fusion denoising processing, thereby improving the sharpness and contrast of the night scene image and enhancing the user experience.
Optionally, in this embodiment, in step 810, the one or more programs are further executable by the one or more processors so as to implement the following steps, namely steps 811 to 812:

Step 811: set shooting parameters;

Step 812: acquire multiple frames of night scene images of the same preview picture according to the set shooting parameters.

Optionally, in this embodiment, in step 820, the one or more programs are further executable by the one or more processors so as to implement the following steps, namely steps 821 to 822:

Step 821: select one frame from the acquired multi-frame night scene images as a reference image;

Step 822: align each image other than the reference image among the acquired multi-frame night scene images with the reference image.
Optionally, in this embodiment, in step 830, the one or more programs are further executable by the one or more processors so as to implement the following steps, namely steps 831 to 832:

Step 831: acquire the pixel value of each pixel of each frame among the multi-frame night scene images after the image registration processing;

Step 832: calculate, from the acquired pixel values of each frame, the pixel value of each pixel of the image after the fusion denoising processing according to the following formula:

D(x, y) = (1/n)·Σ_{i=1}^{n} I_i(x, y)

where D(x, y) represents the pixel value at point (x, y) of the image after the fusion denoising processing; I_i(x, y) represents the pixel value at point (x, y) of the i-th frame among the multi-frame night scene images after the image registration processing; and n represents the number of frames after the image registration processing.
Optionally, in this embodiment, in step 840, the one or more programs are further executable by the one or more processors so as to implement the following steps, namely steps 841 to 844:

Step 841: acquire the gray value of each point within a preset region of the image after the fusion denoising processing;

Step 842: calculate, from the acquired gray value of each point, the gray-level average and the local standard deviation of the preset region of the image after the fusion denoising processing;

Step 843: determine an enhancement coefficient according to the calculated local standard deviation;

Step 844: perform night scene enhancement processing on the image after the fusion denoising processing according to the calculated gray-level average and the determined enhancement coefficient.
Optionally, in this embodiment, the manner in which the one or more programs are executed by the one or more processors to implement step 840 may include:

performing night scene enhancement processing on the image after the fusion denoising processing according to the following formula:

f(x, y) = average_H(x, y) + G(x, y)·[H(x, y) - average_H(x, y)];

where f(x, y) represents the pixel value at point (x, y) of the image after the night scene enhancement processing; average_H(x, y) represents the gray-level average at point (x, y); G(x, y) represents the enhancement coefficient at point (x, y); and H(x, y) represents the gray value at point (x, y).
Optionally, in this embodiment, the manner in which the one or more programs are executed by the one or more processors to calculate the gray-level average may include:

calculating the gray-level average according to the following formula:

average_H(x, y) = (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l, j)

where 2m+1 represents the length of the preset region; 2k+1 represents the width of the preset region; H(l, j) represents the gray value at point (l, j); point (x, y) represents the center point of the preset region; and m and k are both positive integers.
Optionally, in this embodiment, the manner in which the one or more programs are executed by the one or more processors to calculate the local standard deviation may include:

calculating the local standard deviation according to the following formula:

σ_H(x, y) = sqrt( (1/((2m+1)(2k+1)))·Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} [H(l, j) - average_H(x, y)]^2 )

where σ_H(x, y) represents the local standard deviation at point (x, y).
Optionally, in this embodiment, the manner in which the one or more programs are executed by the one or more processors to determine the enhancement coefficient may include:

determining the enhancement coefficient according to the following formula:

G(x, y) = [ (1/(M·N))·Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u, v) ] / σ_H(x, y)

where M represents the length of the image after the fusion denoising processing; and N represents the width of the image after the fusion denoising processing.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.

The above are only embodiments and optional implementations of the present invention and are not intended to limit the scope of protection of the embodiments of the present invention; any equivalent structural or equivalent process transformation made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented using a computer program flow; the computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, device, apparatus, or component), and, when executed, includes one of or a combination of the steps of the method embodiments.

Optionally, all or part of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module.

The devices/functional modules/functional units in the above embodiments may be implemented using general-purpose computing devices; they may be centralized on a single computing device or distributed over a network composed of multiple computing devices.

When the devices/functional modules/functional units in the above embodiments are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Industrial applicability

In the embodiments of the present invention, a mobile terminal acquires multiple frames of night scene images of the same preview picture; image registration processing is performed on the acquired multi-frame night scene images; fusion denoising processing is performed on the multi-frame night scene images after the image registration processing; and night scene enhancement processing is performed on the image after the fusion denoising processing. The embodiments of the present invention improve the sharpness and contrast of night scene images and enhance the user experience.

Claims (20)

  1. A night scene image processing method, comprising:
    obtaining, by a mobile terminal, multiple frames of night scene images of a same preview picture;
    performing image registration processing on the obtained multiple frames of night scene images;
    performing fusion denoising processing on the registered multiple frames of night scene images;
    performing night scene enhancement processing on the fusion-denoised image.
  2. The night scene image processing method according to claim 1, wherein performing image registration processing on the obtained multiple frames of night scene images comprises:
    selecting one frame from the obtained multiple frames of night scene images as a reference image;
    aligning each of the other frames of the obtained multiple frames of night scene images, apart from the reference image, with the reference image.
  3. The night scene image processing method according to claim 1, wherein performing fusion denoising processing on the registered multiple frames of night scene images comprises:
    obtaining the pixel value of each pixel of each frame among the registered multiple frames of night scene images;
    computing, from the obtained pixel values of each frame of night scene image, the pixel value of each pixel of the fusion-denoised image according to the following formula:
    D(x,y) = (1/n) Σ_{i=1}^{n} I_i(x,y)
    wherein D(x,y) denotes the pixel value of the fusion-denoised image at point (x,y); I_i(x,y) denotes the pixel value at point (x,y) of the i-th frame among the registered multiple frames of night scene images; and n denotes the number of the registered multiple frames of night scene images.
  4. The night scene image processing method according to claim 1, wherein performing night scene enhancement processing on the fusion-denoised image comprises:
    obtaining the gray value of each point within a preset region of the fusion-denoised image;
    computing, from the obtained gray value of each point, the gray average and the local standard deviation of the preset region of the fusion-denoised image;
    determining an enhancement coefficient from the computed local standard deviation;
    performing night scene enhancement processing on the fusion-denoised image using the computed gray average and the determined enhancement coefficient.
  5. The night scene image processing method according to claim 4, wherein the night scene enhancement processing is performed on the fusion-denoised image according to the following formula:
    f(x,y) = averageH(x,y) + G(x,y)[H(x,y) - averageH(x,y)];
    wherein f(x,y) denotes the pixel value of the enhanced image at point (x,y); averageH(x,y) denotes the gray average at point (x,y); G(x,y) denotes the enhancement coefficient at point (x,y); and H(x,y) denotes the gray value at point (x,y).
  6. The night scene image processing method according to claim 5, wherein the gray average is computed according to the following formula:
    averageH(x,y) = (1/((2m+1)(2k+1))) Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} H(l,j)
    wherein 2m+1 denotes the length of the preset region; 2k+1 denotes the width of the preset region; H(l,j) denotes the gray value at point (l,j); point (x,y) denotes the center of the preset region; and m and k are both positive integers.
  7. The night scene image processing method according to claim 6, wherein the local standard deviation is computed according to the following formula:
    σ_H(x,y) = [ (1/((2m+1)(2k+1))) Σ_{l=x-m}^{x+m} Σ_{j=y-k}^{y+k} (H(l,j) - averageH(x,y))² ]^(1/2)
    wherein σ_H(x,y) denotes the local standard deviation at point (x,y).
  8. The night scene image processing method according to claim 7, wherein the enhancement coefficient is determined according to the following formula:
    G(x,y) = [ (1/(M·N)) Σ_{u=1}^{M} Σ_{v=1}^{N} σ_H(u,v) ] / σ_H(x,y)
    wherein M denotes the length of the fusion-denoised image and N denotes the width of the fusion-denoised image.
  9. A mobile terminal, comprising: an obtaining module, a registration module, a denoising module, and an enhancement module;
    wherein the obtaining module is configured to obtain multiple frames of night scene images of a same preview picture;
    the registration module is configured to perform image registration processing on the multiple frames of night scene images obtained by the obtaining module;
    the denoising module is configured to perform fusion denoising processing on the multiple frames of night scene images registered by the registration module;
    the enhancement module is configured to perform night scene enhancement processing on the image fusion-denoised by the denoising module.
  10. The mobile terminal according to claim 9, wherein the registration module performing image registration processing on the multiple frames of night scene images obtained by the obtaining module comprises:
    selecting one frame from the multiple frames of night scene images obtained by the obtaining module as a reference image;
    aligning each of the other frames among the multiple frames of night scene images obtained by the obtaining module, apart from the reference image, with the reference image.
  11. The mobile terminal according to claim 9, wherein the denoising module performing fusion denoising processing on the multiple frames of night scene images registered by the registration module comprises:
    obtaining the pixel value of each pixel of each frame among the registered multiple frames of night scene images;
    computing, from the obtained pixel values of each frame of night scene image, the pixel value of each pixel of the fusion-denoised image according to the following formula:
    D(x,y) = (1/n) Σ_{i=1}^{n} I_i(x,y)
    wherein D(x,y) denotes the pixel value of the fusion-denoised image at point (x,y); I_i(x,y) denotes the pixel value at point (x,y) of the i-th frame among the registered multiple frames of night scene images; and n denotes the number of the registered multiple frames of night scene images.
  12. The mobile terminal according to claim 9, wherein the enhancement module performing night scene enhancement processing on the image fusion-denoised by the denoising module comprises:
    obtaining the gray value of each point within a preset region of the fusion-denoised image;
    computing, from the obtained gray value of each point, the gray average and the local standard deviation of the preset region of the fusion-denoised image;
    determining an enhancement coefficient from the computed local standard deviation;
    performing night scene enhancement processing on the fusion-denoised image using the computed gray average and the determined enhancement coefficient.
  13. A mobile terminal, comprising: a processor, a memory, and a communication bus;
    wherein the communication bus is configured to implement connection and communication between the processor and the memory;
    the processor is configured to execute a night scene image processing program stored in the memory to implement the following steps:
    obtaining multiple frames of night scene images of a same preview picture;
    performing image registration processing on the obtained multiple frames of night scene images;
    performing fusion denoising processing on the registered multiple frames of night scene images;
    performing night scene enhancement processing on the fusion-denoised image.
  14. The mobile terminal according to claim 13, wherein, in the step of performing image registration processing on the obtained multiple frames of night scene images, the processor is further configured to execute the night scene image processing program to implement the following steps:
    selecting one frame from the obtained multiple frames of night scene images as a reference image;
    aligning each of the other frames of the obtained multiple frames of night scene images, apart from the reference image, with the reference image.
  15. The mobile terminal according to claim 13, wherein, in the step of performing fusion denoising processing on the registered multiple frames of night scene images, the processor is further configured to execute the night scene image processing program to implement the following steps:
    obtaining the pixel value of each pixel of each frame among the registered multiple frames of night scene images;
    computing, from the obtained pixel values of each frame of night scene image, the pixel value of each pixel of the fusion-denoised image according to the following formula:
    D(x,y) = (1/n) Σ_{i=1}^{n} I_i(x,y)
    wherein D(x,y) denotes the pixel value of the fusion-denoised image at point (x,y); I_i(x,y) denotes the pixel value at point (x,y) of the i-th frame among the registered multiple frames of night scene images; and n denotes the number of the registered multiple frames of night scene images.
  16. The mobile terminal according to claim 13, wherein, in the step of performing night scene enhancement processing on the fusion-denoised image, the processor is further configured to execute the night scene image processing program to implement the following steps:
    obtaining the gray value of each point within a preset region of the fusion-denoised image;
    computing, from the obtained gray value of each point, the gray average and the local standard deviation of the preset region of the fusion-denoised image;
    determining an enhancement coefficient from the computed local standard deviation;
    performing night scene enhancement processing on the fusion-denoised image using the computed gray average and the determined enhancement coefficient.
  17. A computer-readable storage medium storing one or more programs, wherein the one or more programs, when executed by one or more processors, implement the following steps:
    obtaining multiple frames of night scene images of a same preview picture;
    performing image registration processing on the obtained multiple frames of night scene images;
    performing fusion denoising processing on the registered multiple frames of night scene images;
    performing night scene enhancement processing on the fusion-denoised image.
  18. The computer-readable storage medium according to claim 17, wherein, in the step of performing image registration processing on the obtained multiple frames of night scene images, the one or more programs are further executed by the one or more processors to implement the following steps:
    selecting one frame from the obtained multiple frames of night scene images as a reference image;
    aligning each of the other frames of the obtained multiple frames of night scene images, apart from the reference image, with the reference image.
  19. The computer-readable storage medium according to claim 17, wherein, in the step of performing fusion denoising processing on the registered multiple frames of night scene images, the one or more programs are further executed by the one or more processors to implement the following steps:
    obtaining the pixel value of each pixel of each frame among the registered multiple frames of night scene images;
    computing, from the obtained pixel values of each frame of night scene image, the pixel value of each pixel of the fusion-denoised image according to the following formula:
    D(x,y) = (1/n) Σ_{i=1}^{n} I_i(x,y)
    wherein D(x,y) denotes the pixel value of the fusion-denoised image at point (x,y); I_i(x,y) denotes the pixel value at point (x,y) of the i-th frame among the registered multiple frames of night scene images; and n denotes the number of the registered multiple frames of night scene images.
  20. The computer-readable storage medium according to claim 17, wherein, in the step of performing night scene enhancement processing on the fusion-denoised image, the one or more programs are further executed by the one or more processors to implement the following steps:
    obtaining the gray value of each point within a preset region of the fusion-denoised image;
    computing, from the obtained gray value of each point, the gray average and the local standard deviation of the preset region of the fusion-denoised image;
    determining an enhancement coefficient from the computed local standard deviation;
    performing night scene enhancement processing on the fusion-denoised image using the computed gray average and the determined enhancement coefficient.
PCT/CN2017/092664 2016-07-29 2017-07-12 Night scene image processing method and mobile terminal WO2018019128A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610618707.8 2016-07-29
CN201610618707.8A CN106097284B (zh) 2016-07-29 2016-07-29 Night scene image processing method and mobile terminal

Publications (1)

Publication Number Publication Date
WO2018019128A1 true WO2018019128A1 (zh) 2018-02-01

Family

ID=57479647

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092664 WO2018019128A1 (zh) 2016-07-29 2017-07-12 一种夜景图像的处理方法和移动终端

Country Status (2)

Country Link
CN (1) CN106097284B (zh)
WO (1) WO2018019128A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516680A (zh) * 2019-08-05 2019-11-29 上海摩软通讯技术有限公司 Image processing method and device
CN110958401A (zh) * 2019-12-16 2020-04-03 北京迈格威科技有限公司 Super night scene image color correction method, device, and electronic device
CN112823374A (zh) * 2020-03-30 2021-05-18 深圳市大疆创新科技有限公司 Infrared image processing method, device, equipment, and storage medium
CN116913033A (zh) * 2023-05-29 2023-10-20 东莞市众可智能科技有限公司 Fire big data remote detection and early warning system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097284B (zh) 2016-07-29 2019-08-30 努比亚技术有限公司 Night scene image processing method and mobile terminal
CN107240081A (zh) * 2017-06-20 2017-10-10 长光卫星技术有限公司 Night scene image denoising and enhancement processing method
CN108053369A (zh) * 2017-11-27 2018-05-18 努比亚技术有限公司 Image processing method, device, and storage medium
CN112927144A (zh) * 2019-12-05 2021-06-08 北京迈格威科技有限公司 Image enhancement method, image enhancement device, medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796625A (zh) * 2015-04-21 2015-07-22 努比亚技术有限公司 Photo synthesis method and photo synthesis device
CN104869309A (zh) * 2015-05-15 2015-08-26 广东欧珀移动通信有限公司 Photographing method and device
CN105488756A (zh) * 2015-11-26 2016-04-13 努比亚技术有限公司 Picture synthesis method and device
CN105611181A (zh) * 2016-03-30 2016-05-25 努比亚技术有限公司 Multi-frame captured image synthesis device and method
CN106097284A (zh) * 2016-07-29 2016-11-09 努比亚技术有限公司 Night scene image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427263A (zh) * 2015-12-21 2016-03-23 努比亚技术有限公司 Method and terminal for implementing image registration


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516680A (zh) * 2019-08-05 2019-11-29 上海摩软通讯技术有限公司 Image processing method and device
CN110516680B (zh) * 2019-08-05 2023-04-07 上海摩软通讯技术有限公司 Image processing method and device
CN110958401A (zh) * 2019-12-16 2020-04-03 北京迈格威科技有限公司 Super night scene image color correction method, device, and electronic device
CN112823374A (zh) * 2020-03-30 2021-05-18 深圳市大疆创新科技有限公司 Infrared image processing method, device, equipment, and storage medium
CN116913033A (zh) * 2023-05-29 2023-10-20 东莞市众可智能科技有限公司 Fire big data remote detection and early warning system
CN116913033B (zh) * 2023-05-29 2024-04-05 深圳市兴安消防工程有限公司 Fire big data remote detection and early warning system

Also Published As

Publication number Publication date
CN106097284B (zh) 2019-08-30
CN106097284A (zh) 2016-11-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17833431

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/07/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17833431

Country of ref document: EP

Kind code of ref document: A1