WO2017067526A1 - Image enhancement method and mobile terminal - Google Patents

Image enhancement method and mobile terminal

Info

Publication number
WO2017067526A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
local
target area
mobile terminal
Prior art date
Application number
PCT/CN2016/103085
Other languages
English (en)
French (fr)
Inventor
Dai Xiangdong (戴向东)
Original Assignee
Nubia Technology Co., Ltd. (努比亚技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co., Ltd.
Publication of WO2017067526A1 publication Critical patent/WO2017067526A1/zh

Links

Images

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive

Definitions

  • This document relates to, but is not limited to, the field of mobile terminal technologies, and in particular, to an image enhancement method and a mobile terminal.
  • In the process of a mobile terminal capturing an image and performing 2D plane processing on it, if the image is captured in a backlight environment, the target area in the image becomes dim because of the backlight, so that the background of the entire frame is overly prominent and the shading effect of the target area is suppressed.
  • In related approaches, the mobile terminal directly performs image enhancement on the entire scene in the image. With this method the target area can be enhanced, but under such shooting conditions the background area of the image is already clear and bright, so
  • the enhancement algorithm overexposes the background area and does not improve the shot.
  • Embodiments of the present invention provide an image enhancement method and a mobile terminal, which can effectively improve an image capturing effect in a backlight environment.
  • An embodiment of the present invention provides a mobile terminal for image enhancement, including:
  • an acquisition module configured to acquire a plurality of images, acquire a depth image according to the plurality of images, and acquire position information of the objects in the shooting scene;
  • a segmentation module configured to perform image segmentation processing on the depth image according to the position information and the pixel information of the depth image, to obtain a target area;
  • An enhancement module configured to perform local enhancement processing on the target area.
  • the acquisition module is configured to acquire the plurality of images through a multi-camera vision platform.
  • the position information includes depth information and distance information.
  • the segmentation module is set to:
  • the depth image is divided into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and a region having the smallest distance information among the plurality of regions is used as a target region.
  • the depth information is the disparity between the imaging points that correspond to the same point of the shooting scene in the plurality of images.
  • the distance information is a distance between the same point in the shooting scene and the camera.
  • the acquisition module acquires the distance information Z according to the standard binocular triangulation relation Z = fT/d, where:
  • f is the focal length of the left and right cameras in the binocular vision platform;
  • T is the spacing (baseline) between the two cameras;
  • d is the depth information (the disparity).
  • the enhancement module includes:
  • a dividing unit configured to acquire, according to a preset local area in the depth image, the local average of the target-area pixels as a low-frequency part, and to use the preset local area with the low-frequency part removed as the high-frequency part of the target area;
  • An amplifying unit configured to amplify the high frequency portion to locally enhance the target area.
  • the amplifying unit is configured to: acquire the local standard deviation of the target-area pixels according to the local average of the preset local area and the target-area pixels; acquire the average of the local variance of each pixel of the target area according to the local standard deviation; obtain an amplification factor according to the local standard deviation and the average of the local variance; and amplify the high-frequency part according to the amplification factor.
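The amplification factor can be sketched as follows. The claim only names its inputs (the local standard deviation and the average of the local variance), so the classic adaptive-contrast gain G(i,j) = alpha · sqrt(mean variance) / sigma(i,j) is assumed here, with `alpha` a hypothetical tuning constant:

```python
import numpy as np

def amplification_factor(sigma, alpha=0.8, eps=1e-6):
    """Per-pixel gain for the high-frequency part.

    sigma : array of local standard deviations of the target-area pixels.
    The claim derives the gain from sigma and the average local variance;
    the exact formula is not given in this abstract, so the classic
    adaptive-contrast form G(i,j) = alpha * sqrt(mean(sigma^2)) / sigma(i,j)
    is assumed.
    """
    mean_variance = np.mean(sigma ** 2)   # average of the local variances
    return alpha * np.sqrt(mean_variance) / np.maximum(sigma, eps)
```

Pixels whose local deviation is below the scene-wide average receive a larger gain, so flat, dim detail is boosted more than already-contrasty areas.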
  • the dividing unit is configured to acquire the low frequency portion and the high frequency portion by:
  • the preset local area is an area with a window size of (2n+1)×(2n+1) centered on any pixel point x(i,j), where n is an integer; then
  • the low-frequency part m_x(i,j) is m_x(i,j) = (1/(2n+1)^2) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l), where
  • x(k,l) is the gray value of a pixel point in the preset local area, k ranges from i-n to i+n, and l ranges from j-n to j+n;
  • the high-frequency part is x(i,j) - m_x(i,j).
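The window average translates directly into code. This is a minimal sketch using plain loops, with the window clipped at image borders (the text does not specify border handling, so clipping is an assumption):

```python
import numpy as np

def local_mean(x, n):
    """Low-frequency part m_x(i,j): mean over the (2n+1)x(2n+1) window
    centered on pixel (i,j), with the window clipped at image borders."""
    h, w = x.shape
    m = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            window = x[max(i - n, 0):i + n + 1, max(j - n, 0):j + n + 1]
            m[i, j] = window.mean()
    return m

def high_frequency(x, n):
    """High-frequency part: x(i,j) - m_x(i,j)."""
    x = np.asarray(x, dtype=float)
    return x - local_mean(x, n)
```

A constant image has an empty high-frequency part, which is exactly what the split intends: everything that survives subtraction of the local mean is local detail.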
  • an embodiment of the present invention further provides an image enhancement method, including:
  • the mobile terminal acquires a plurality of images, and acquires a depth image and position information of the objects in the shooting scene according to the plurality of images;
  • image segmentation processing is performed on the depth image according to the position information and the pixel information of the depth image to obtain a target area; and local enhancement processing is performed on the target area.
  • the mobile terminal acquires the plurality of images through a multi-camera vision platform.
  • the position information includes depth information and distance information.
  • performing image segmentation processing on the depth image to obtain a target area includes:
  • the depth image is divided into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and a region having the smallest distance information among the plurality of regions is used as a target region.
  • the depth information is the disparity between the imaging points that correspond to the same point of the shooting scene in the plurality of images.
  • the distance information is a distance between the same point in the shooting scene and the camera.
  • the distance information Z is obtained by the standard binocular triangulation relation Z = fT/d, where:
  • f is the focal length of the left and right cameras in the binocular vision platform;
  • T is the spacing (baseline) between the two cameras;
  • d is the depth information (the disparity).
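In code, the triangulation step reads as below. This is a sketch: the relation Z = f·T/d is the standard binocular result implied by the listed quantities, not quoted verbatim from the publication:

```python
def distance_from_disparity(f, T, d):
    """Distance Z from the camera to a scene point.

    f : focal length of the (identical) left and right cameras,
    T : baseline, the spacing between the two cameras,
    d : disparity between the point's left and right imaging positions.
    Uses the standard binocular triangulation relation Z = f * T / d.
    """
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * T / d
```

Note the units must agree: if f is in pixels and d in pixels, Z comes out in the units of T.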
  • the performing local enhancement processing on the target area includes: acquiring the local average of the target-area pixels as a low-frequency part according to a preset local area, using the preset local area with the low-frequency part removed as the high-frequency part of the target area, and amplifying the high-frequency part to locally enhance the target area.
  • the amplifying the high-frequency part to locally enhance the target area comprises: acquiring the local standard deviation of the target-area pixels according to the local average of the preset local area and the target-area pixels; acquiring the average of the local variance of each pixel of the target area according to the local standard deviation; obtaining an amplification factor according to the local standard deviation and the average of the local variance; and amplifying the high-frequency part according to the amplification factor.
  • the low-frequency part and the high-frequency part are obtained as follows:
  • the preset local area is an area with a window size of (2n+1)×(2n+1) centered on any pixel point x(i,j), where n is an integer; then
  • the low-frequency part m_x(i,j) is m_x(i,j) = (1/(2n+1)^2) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l), where
  • x(k,l) is the gray value of a pixel in the preset local area, k ranges from i-n to i+n, and l ranges from j-n to j+n;
  • the high-frequency part is x(i,j) - m_x(i,j).
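Putting the pieces together, the whole local-enhancement step might look like the sketch below; the gain formula and the `alpha` constant are assumptions, since the abstract names the quantities involved but not the exact combination:

```python
import numpy as np

def locally_enhance(x, n, alpha=0.8, eps=1e-6):
    """Enhance image x inside a (2n+1)x(2n+1) sliding window:
    split each pixel into a low-frequency part (local mean) and a
    high-frequency part (residual), then amplify the residual by a
    gain derived from the local standard deviation and the average
    local variance (classic adaptive-contrast form, assumed here)."""
    x = np.asarray(x, dtype=float)
    h, w = x.shape
    mean = np.empty((h, w))
    std = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = x[max(i - n, 0):i + n + 1, max(j - n, 0):j + n + 1]
            mean[i, j] = win.mean()
            std[i, j] = win.std()
    mean_variance = np.mean(std ** 2)                  # average local variance
    gain = alpha * np.sqrt(mean_variance) / np.maximum(std, eps)
    return mean + gain * (x - mean)                    # low + amplified high
```

In the patent's pipeline this would be applied only to the pixels of the segmented target area, leaving the already bright background untouched.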
  • In summary, the mobile terminal captures images through the multi-camera vision platform, performs image segmentation processing on the image according to the position information of the objects in the image to obtain the target region, and then performs local enhancement processing on the target region.
  • the mobile terminal achieves accurate segmentation of the target region in the image acquired by the multi-camera vision platform according to the position information of the objects in the shooting scene, and locally enhances the target region, thereby effectively improving the image capture effect in a backlight environment.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal that implements various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
  • FIG. 3 is a schematic structural diagram of a first embodiment of a mobile terminal for image enhancement according to the present invention.
  • FIG. 4 is a schematic diagram of an imaging principle of a binocular vision platform according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an effect before performing image enhancement according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an effect of performing image enhancement according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a second embodiment of a mobile terminal for image enhancement according to the present invention.
  • FIG. 8 is a schematic flowchart diagram of a first embodiment of an image enhancement method according to the present invention.
  • FIG. 9 is a schematic flow chart of a second embodiment of an image enhancement method according to the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the embodiments of the present invention may include, for example, mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and a navigation device, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following description, it is assumed that the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, except for components intended specifically for mobile use.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • the wireless communication unit 110 typically includes one or more components that permit radio communication between the mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast apparatuses.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO (Media Forward Link Only) data broadcasting system, and integrated services digital broadcasting-terrestrial (ISDB-T).
  • the broadcast receiving module 111 may be adapted to various broadcast systems that provide broadcast signals, as well as to the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module that is configured to support short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module configured to check or acquire location information of the mobile terminal.
  • a typical example of the location information module is a GPS (Global Positioning System) module.
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is arranged to receive an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) in an operating mode such as a telephone call mode, a recording mode, or a voice recognition mode, and can process such sound into audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 and then output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a jog wheel, a jog switch, and the like.
  • In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of the user's contact (i.e., touch input) with the mobile terminal 100, the orientation of the mobile terminal 100, and the like.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port configured to connect a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various kinds of information for verifying the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 may be arranged to receive input (e.g., data, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or may be configured to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from a cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • Various command signals or power input from the cradle can be used as signals for identifying whether the mobile terminal is correctly mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can be used as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be set to detect the touch input pressure as well as the touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibration: when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 can also provide notification of an event via the display unit 151 or the audio output module 152.
  • the memory 160 may store software programs and the like for processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data regarding the various modes of vibration and audio signals that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 configured to reproduce (or play back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented by using at least one of: application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein. In some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • various types of mobile terminals such as a folding type, a bar type, a swing type, a slide type mobile terminal, and the like will be described.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate in, for example, wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
  • a communication system in which a mobile terminal according to an embodiment of the present invention is operable will now be described with reference to FIG. 2.
  • Such communication systems may use different air interfaces and/or physical layers.
  • Air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and the like.
  • a CDMA wireless communication device can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that the system as shown in FIG. 2 may include a plurality of BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas that are set to receive diversity. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • the BS 270 may also be referred to as a base transceiver station (BTS) or by other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station may also be referred to as a "cell site."
  • Alternatively, each partition of a particular BS 270 may be referred to as a cell site.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the device.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • GPS satellites 300 help locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it will be appreciated that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • the BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including coordinating soft handoff procedures between BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • the mobile terminal for image enhancement of this embodiment includes:
  • the acquisition module 10 is configured to acquire a plurality of images by using a multi-camera vision platform, acquire a depth image according to the plurality of images, and acquire position information of the objects in the shooting scene;
  • the image enhancement scheme is applied to the mobile terminal, and the type of the mobile terminal may be set according to actual needs.
  • the mobile terminal may include a mobile phone, a camera, an iPad, or the like.
  • the mobile terminal is provided with a multi-camera vision platform composed of two or more digital cameras. The relative positions of the digital cameras are fixed, and images can be acquired from different viewing angles at the same time. The following takes acquiring images through a binocular vision platform as an example.
  • Binocular vision is a method of simulating human vision, using a computer to passively perceive distance: an object is observed from two or more viewpoints, images at different viewing angles are acquired, and, according to the matching relationship of pixels between the images, the offset between corresponding pixels is calculated by the principle of triangulation to obtain three-dimensional information of the object.
  • the binocular vision platform comprises two digital cameras, and the acquisition module 10 can obtain two images from the binocular vision platform at the same time; the two images can be stereo-corrected and stereo-matched to obtain a depth image. Stereo matching mainly obtains the disparity map according to the triangulation principle by finding the correspondence between each pair of images. After obtaining the disparity information, the depth information and the three-dimensional information of the original image can be easily obtained according to the projection model.
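The stereo-matching step described above (finding correspondences between the image pair to obtain a disparity map) can be illustrated with a deliberately simple sum-of-absolute-differences block matcher. This is a minimal sketch assuming an already rectified pair, not the matcher the patent uses; real pipelines add rectification, sub-pixel refinement, and left-right consistency checks, and the function name is illustrative.

```python
import numpy as np

def block_match_disparity(left, right, max_disp, n=2):
    """Minimal SAD block-matching sketch for a rectified stereo pair.

    For each pixel in the left image, search along the same row of the
    right image and keep the offset d (0..max_disp) whose (2n+1)*(2n+1)
    window has the smallest sum of absolute differences.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h, w = left.shape
    win = 2 * n + 1
    pad_l = np.pad(left, n, mode="edge")
    pad_r = np.pad(right, n, mode="edge")
    disp = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            patch = pad_l[i:i + win, j:j + win]
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, j) + 1):
                cand = pad_r[i:i + win, j - d:j - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp[i, j] = best_d
    return disp
```

A bright vertical feature shifted two columns between the left and right images yields a disparity of 2 at that feature, from which the projection model recovers depth.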
  • the acquisition of multiple images may be performed by using a plurality of different mobile terminals to capture an image of the same shooting scene on the same horizontal line, and then the obtained image is uniformly sent to a mobile terminal through a communication module such as Bluetooth or WiFi. Subsequent processing.
  • the acquisition module 10 needs to acquire location information of the object in the shooting scene, and the location information may include depth information and distance information.
  • the depth information is the parallax between the corresponding imaging points, in the plurality of images, of the same point in the shooting scene;
  • the distance information is a distance between the object and the camera in the shooting scene.
  • Fig. 4 is a schematic diagram of the imaging principle of the binocular vision platform. It is assumed that P is a point in space, O_l and O_r are respectively the centers of the left and right cameras in the binocular vision platform, and x_l and x_r are the corresponding imaging points on the left and right sides.
  • the distance information may also be obtained by disposing a ranging sensor on the binocular vision platform; the ranging sensor actively emits infrared light, which is reflected by the scene, to obtain the distance information of objects in the scene.
  • the depth information or distance information provides a spatial reference feature for the separation of the target area and the background area, which is beneficial to the accuracy of the image segmentation algorithm.
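The triangulation relationship behind the depth and distance information can be made concrete. Assuming rectified cameras with focal length f (in pixels) and spacing (baseline) T, a point's distance follows Z = f·T/d from its disparity d, consistent with the binocular imaging principle described above; the helper name and sample numbers are illustrative assumptions.

```python
def distance_from_disparity(f_px, baseline_m, disparity_px):
    """Distance of a scene point from a rectified binocular pair.

    f_px: focal length in pixels; baseline_m: spacing between the two
    cameras in metres; disparity_px: x_l - x_r, the offset between the
    corresponding imaging points of the same scene point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / disparity_px

# A point with larger disparity (larger depth value) is closer to the cameras:
near = distance_from_disparity(700.0, 0.1, 70.0)  # 1.0 m
far = distance_from_disparity(700.0, 0.1, 7.0)    # 10.0 m
```

This is why the segmentation step below can treat large depth values (large disparity) as "close to the camera".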
  • the segmentation module 20 is configured to perform image segmentation processing on the depth image to obtain a target region according to the position information and pixel information of the depth image;
  • the target area may be flexibly set according to the specific situation of the shooting scene.
  • the target area may be a person, an animal, or the like in the shooting scene, or may be other objects. Since the target area and the background area in the shooting scene are different from the distance information of the camera, the depth information of the target area and the background area are also different, which provides a spatial reference feature for the separation of the target area and the background area.
  • the segmentation module 20 may segment the target area and the background area according to the depth information (or the distance information) and the pixel information of the depth image, in combination with a traditional image segmentation algorithm or the meanshift algorithm, to obtain the target area.
  • using image pixel information alone, it is generally difficult to accurately separate the target area and the background area in the shooting scene.
  • Image segmentation combined with depth information or distance information is beneficial to the accuracy of the image segmentation algorithm.
  • the enhancement module 30 is configured to perform local enhancement processing on the target area.
  • the enhancement module 30 needs to perform local contrast enhancement processing on the target region in the image.
  • when the image is taken in a backlight environment, the original image obtained is as shown in FIG. 5, in which the person in the target area looks dim due to the interference of the strong background light pixels.
  • this embodiment enhances the separated target area by using a local contrast enhancement algorithm; the enhanced image is as shown in FIG. 6, in which the person in the target area has been enhanced and is very clear.
  • the local contrast enhancement algorithm may be an adaptive contrast enhancement (ACE) algorithm.
  • the mobile terminal performs image capture through the binocular vision platform, and, according to the depth information of objects in the shooting scene and the pixel information of the image, or according to the distance information and the pixel information of the image, segments the image to accurately separate the target area and the background area, thereby obtaining the target area.
  • the image local contrast enhancement algorithm is used to enhance the target area, while the background area remains unchanged, so that the target area and background area of the image are clear.
  • the mobile terminal realizes accurate segmentation of the target region in the image acquired by the multi-vision platform according to the depth information or the distance information, and locally enhances the target region, thereby effectively improving the image capturing effect in the backlight environment.
  • the location information includes depth information and distance information
  • the segmentation module 20 is configured to: divide the depth image into regions according to the depth information and the pixel information of the depth image to obtain a plurality of regions, and use the region with the largest depth information among the plurality of regions as the target region; or
  • the depth image is divided into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and a region having the smallest distance information among the plurality of regions is used as a target region.
  • the segmentation module 20 combines pixels having similar properties to form areas, and then divides the regions according to similar or identical depth information, or according to similar or identical distance information, so that different image areas can be obtained. According to the a priori condition that the target area of a captured image is generally closer to the camera than the background area is, combined with the depth information in the shooting scene and the principle of binocular imaging (the closer an object is to the camera, the greater its depth value), the segmentation region having the largest depth information may be selected as the target region, or the segmentation region having the smallest distance information may be selected as the target region, so that the target region of the image to be locally enhanced can finally be determined.
  • alternatively, a region whose depth information or distance information equals a preset value may be selected from the plurality of divided regions as the target area according to actual needs; the preset value is flexibly set according to the specific situation.
  • the following is an example. Suppose the shooting scene contains a white puppy, with a white wall at some distance behind the puppy. Because the pixel information of the white puppy is very similar to that of the white wall, segmentation based on pixel information alone would classify them into the same area. By combining the depth information or distance information, however, the farther wall and the closer puppy are divided into different regions, and the region where the closer puppy is located is used as the target area, achieving accurate division of the image.
  • image segmentation is performed by combining depth information or distance information, which greatly improves the accuracy of segmenting an image and extracting a target region.
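The region-selection rule just described (keep the segmented region with the largest depth value, i.e. the closest object) can be sketched as follows. This is a minimal sketch: the helper name and the toy label/depth maps are illustrative assumptions, and the label map would come from whatever segmentation algorithm (traditional or meanshift) produced the regions.

```python
import numpy as np

def pick_target_region(labels, depth):
    """Return a boolean mask of the region whose mean depth value is largest.

    labels: integer region label map from the segmentation step.
    depth:  per-pixel depth map; per the binocular imaging principle,
            the closer an object is to the camera, the larger its depth value.
    """
    best_label, best_depth = None, -np.inf
    for lab in np.unique(labels):
        mean_d = depth[labels == lab].mean()
        if mean_d > best_depth:
            best_label, best_depth = lab, mean_d
    return labels == best_label

# Toy scene: region 0 is a far background, region 1 a near subject.
labels = np.array([[0, 0, 1], [0, 1, 1]])
depth = np.array([[2.0, 2.0, 9.0], [2.0, 9.0, 9.0]])
mask = pick_target_region(labels, depth)  # True where the near subject is
```

Selecting by smallest mean distance instead of largest mean depth is the symmetric variant when a ranging sensor supplies distance information.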
  • the enhancement module 30 may include:
  • the dividing unit 31 is configured to acquire the local average value of the target area pixels as a low frequency part according to a preset local area in the depth image, and to use the area of the preset local area with the low frequency part removed as the high frequency part of the target area;
  • the amplifying unit 32 is arranged to amplify the high frequency portion to locally enhance the target area.
  • When performing local enhancement processing on the target area, the ACE algorithm employs an unsharp masking technique: first, the dividing unit 31 divides the target area into two parts. One part is a low frequency part, which can be obtained by low-pass filtering the target area, for example, a relatively smooth part of the pixels in the target area. The other part is a high frequency part, which can be obtained by subtracting the low frequency part from the original target area, for example, an edge part in the target area where the pixel variation is relatively large, that is, a portion where adjacent pixel points deviate greatly.
  • the low frequency portion may be obtained by calculating the pixel average value of the preset local region centered on each pixel. For example, assuming that the gray value of a point in the image is x(i,j), the preset local region is defined as: an area with a window size of (2n+1)*(2n+1) centered on any pixel point x(i,j) in the preset local region, where n is an integer.
  • the preset partial area is not necessarily a square, and can be flexibly set according to specific conditions.
  • the local average value m_x(i,j) of the pixels of this region is taken as the low frequency part, and can be calculated by the following formula (1):
  • m_x(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)    (1)
  • where (2n+1)^2 is the size of the preset local region, x(k,l) is the gray value of a pixel in the preset local region, k ranges from i-n to i+n, and l ranges from j-n to j+n; the high frequency part is then: x(i,j) - m_x(i,j).
  • the amplifying unit 32 then amplifies the high frequency portion to obtain an enhanced image to achieve local enhancement of the target area.
  • the pixel points in the target area are accurately divided into the low frequency part and the high frequency part, so as to enlarge the part where the adjacent pixel points change greatly, and the image capturing effect in the backlight environment is improved.
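The low/high frequency split of formula (1) can be sketched directly. This is a minimal sketch assuming a grayscale array; the patent does not specify boundary handling, so edge padding at the borders and the function name are assumptions.

```python
import numpy as np

def split_low_high(x, n):
    """Split a grayscale target area into low- and high-frequency parts.

    The low-frequency part m_x(i,j) is the mean over the (2n+1)*(2n+1)
    window centred on each pixel (formula (1)); the high-frequency part
    is x(i,j) - m_x(i,j). Border pixels use edge padding (an assumption).
    """
    x = np.asarray(x, dtype=float)
    h, w = x.shape
    win = 2 * n + 1
    pad = np.pad(x, n, mode="edge")
    low = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = pad[i:i + win, j:j + win].mean()
    return low, x - low
```

On a perfectly smooth area the high-frequency part is zero, which is why only edges and other sharply varying portions are amplified in the next step.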
  • the amplifying unit 32 is configured to acquire the local standard deviation of the target area pixels according to the preset local area and the local average of the target area pixels; obtain the local variance average value of each pixel point of the target area according to the local standard deviation, and obtain an amplification factor according to the local standard deviation and the local variance average value; and amplify the high frequency part according to the amplification factor.
  • the amplifying unit 32 enlarges the high frequency portion in the target area.
  • when amplifying the high frequency portion, the amplifying unit 32 first, given the preset local area with a window size of (2n+1)*(2n+1) and the local average m_x(i,j) of the target area pixels, calculates the local variance of the target area pixels as shown in formula (2):
  • σ_x^2(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [x(k,l) - m_x(i,j)]^2    (2)
  • where (2n+1)^2 is the size of the preset local region, x(k,l) is a pixel point in the preset local region, k ranges from i-n to i+n, l ranges from j-n to j+n, and m_x(i,j) is the local average of the pixels in the target area; taking the square root of the local variance gives the local standard deviation σ_x(i,j).
  • the local variance average value G_σ over the target area can then be obtained as in formula (3): G_σ = (1/(M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} σ_x(i,j)    (3), where M and N are the width and height of the region (the specific values can be set according to actual needs) and σ_x(i,j) is the local standard deviation.
  • the amplifying unit 32 calculates the amplification factor G(i,j) for amplifying the high-frequency portion based on the local standard deviation σ_x(i,j) and the local variance average G_σ, for example as: G(i,j) = min(max(σ_x(i,j)/G_σ, 1), R).
  • R is a constant, and the specific value of R can be flexibly set according to a specific situation, for example, R can be 10.
  • the min and max operations limit the value of the amplification factor G(i,j) to the range of [1,R] to prevent the target region from being over-enhanced or not enhanced.
  • the amplification factor needs to be greater than 1 so that the high-frequency component can be enhanced.
  • the amplification unit 32 can enhance the high frequency portion in the target region according to the amplification factor G(i,j) to: G(i,j)[x(i, j)-m x (i,j)].
  • defining f(i,j) as the enhanced pixel value corresponding to the pixel point x(i,j), the ACE algorithm can then be expressed as: f(i,j) = m_x(i,j) + G(i,j)·[x(i,j) - m_x(i,j)].
  • at the edges of the target area or at other sharply varying pixel portions, the local mean square error is relatively large, and its ratio to the average standard deviation of the entire target area is also large, so this portion needs to be enhanced.
  • in the smooth portions of the target area, the local mean square error is small, its ratio to the standard deviation of the entire target area is also small, and this portion will not be enhanced.
  • the high-frequency portion of the target region is enlarged according to the calculated amplification factor, so that the target region of the image is clearly visible, and the image capturing effect in the backlight environment is effectively improved.
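Putting the pieces together, an end-to-end ACE-style sketch follows. The patent's gain equation is reproduced only as an image in the source, so the gain used here (the ratio of the local standard deviation to its average G_σ, clamped to [1, R] so edges get gain up to R while smooth areas stay at 1) is one plausible reading of the surrounding prose, not the literal claimed formula; function and parameter names are assumptions.

```python
import numpy as np

def ace_enhance(x, n=3, R=10.0):
    """ACE sketch on a grayscale target area: f = m_x + G * (x - m_x).

    m_x and sigma are computed over (2n+1)*(2n+1) windows (formulas
    (1) and (2)); G_sigma averages sigma over the area (formula (3));
    the gain is clamped to [1, R] to avoid over- or under-enhancement.
    """
    x = np.asarray(x, dtype=float)
    h, w = x.shape
    win = 2 * n + 1
    pad = np.pad(x, n, mode="edge")  # boundary handling is an assumption
    m = np.empty((h, w))
    sigma = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = pad[i:i + win, j:j + win]
            m[i, j] = block.mean()            # low-frequency part
            sigma[i, j] = block.std()         # local standard deviation
    g_sigma = sigma.mean()                    # local variance average over the area
    gain = np.clip(sigma / max(g_sigma, 1e-6), 1.0, R)  # assumed ratio form
    return m + gain * (x - m)                 # amplified high-frequency part
```

Applied only inside the segmented target mask, this amplifies the dim subject's edges while the untouched background area keeps its original exposure.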
  • the image enhancement method of this embodiment includes:
  • Step S10: the mobile terminal acquires a plurality of images through the multi-vision visual platform, acquires a depth image according to the plurality of images, and acquires the position information of objects in the shooting scene;
  • the image enhancement scheme is applied to the mobile terminal, and the type of the mobile terminal may be set according to actual needs.
  • the mobile terminal may include a mobile phone, a camera, and an iPad.
  • the mobile terminal is provided with a multi-vision visual platform, and the multi-vision visual platform is composed of two or more digital cameras. The relative positions of the digital cameras are fixed, and images can be acquired from different viewing angles at the same time. The following description takes acquiring images by a binocular vision platform as an example.
  • Binocular vision is a method of simulating human vision, using a computer to passively perceive distance: an object is observed from two or more viewpoints, images at different viewing angles are acquired, and, according to the matching relationship of pixels between the images, the offset between corresponding pixels is calculated by the principle of triangulation to obtain three-dimensional information of the object.
  • the binocular vision platform comprises two digital cameras, which can obtain two images at the same time, and the two images can be stereo-corrected and stereo-matched to obtain a depth image. Stereo matching mainly obtains the disparity map according to the triangulation principle by finding the correspondence between each pair of images. After obtaining the disparity information, the depth information and the three-dimensional information of the original image can be easily obtained according to the projection model.
  • the acquisition of multiple images may be performed by using a plurality of different mobile terminals to capture an image of the same shooting scene on the same horizontal line, and then the obtained image is uniformly sent to a mobile terminal through a communication module such as Bluetooth or WiFi. Subsequent processing.
  • the position information may include depth information and distance information.
  • the depth information is the parallax between the corresponding imaging points, in the plurality of images, of the same point in the shooting scene;
  • the distance information is a distance between the object and the camera in the shooting scene.
  • Fig. 4 is a schematic diagram of the imaging principle of the binocular vision platform. It is assumed that P is a point in space, O_l and O_r are respectively the centers of the left and right cameras in the binocular vision platform, and x_l and x_r are the corresponding imaging points on the left and right sides.
  • the distance information may also be obtained by disposing a ranging sensor on the binocular vision platform; the ranging sensor actively emits infrared light, which is reflected by the scene, to obtain the distance information of objects in the scene.
  • the depth information or distance information provides a spatial reference feature for the separation of the target area and the background area, which is beneficial to the accuracy of the image segmentation algorithm.
  • Step S20 Perform image segmentation processing on the depth image to obtain a target area according to the position information and the pixel information of the depth image;
  • the target area may be flexibly set according to the specific situation of the shooting scene.
  • the target area may be a person, an animal, or the like in the shooting scene, or may be other objects. Since the target area and the background area in the shooting scene are different from the distance information of the camera, the depth information of the target area and the background area are also different, which provides a spatial reference feature for the separation of the target area and the background area.
  • the target region and the background region may be segmented according to the depth information and the pixel information of the depth image, in combination with a traditional image segmentation algorithm or the meanshift algorithm, to obtain the target region; alternatively, the target region and the background region may be segmented according to the distance information and the pixel information of the depth image in the same way.
  • using image pixel information alone, it is generally difficult to accurately separate the target area and the background area in the shooting scene.
  • Image segmentation combined with depth information or distance information is beneficial to the accuracy of the image segmentation algorithm.
  • Step S30 performing local enhancement processing on the target area.
  • after the image segmentation process is performed to obtain the target region, it is necessary to perform local contrast enhancement processing on the target region in the image.
  • when the image is taken in a backlight environment, the original image obtained is as shown in Fig. 5, in which the person in the target area looks dim due to the interference of the strong background light pixels.
  • the embodiment can enhance the separated target area by using the local contrast enhancement algorithm, and the enhanced image is as shown in FIG. 6.
  • as shown in FIG. 6, the person in the target area has been enhanced and is very clear.
  • the local contrast enhancement algorithm may be an adaptive contrast enhancement (ACE) algorithm.
  • the mobile terminal performs image capture through the binocular vision platform, and, according to the depth information of objects in the shooting scene and the pixel information of the image, or according to the distance information and the pixel information of the image, segments the image to accurately separate the target area and the background area, thereby obtaining the target area.
  • the image local contrast enhancement algorithm is used to enhance the target area, while the background area remains unchanged, so that the target area and background area of the image are clear.
  • the mobile terminal realizes accurate segmentation of the target region in the image acquired by the multi-vision platform according to the depth information or the distance information, and locally enhances the target region, thereby effectively improving the image capturing effect in the backlight environment.
  • the location information may include depth information or distance information
  • the foregoing step S20 may include:
  • dividing the depth image into regions according to the depth information and the pixel information of the depth image to obtain a plurality of regions, and using the region with the largest depth information among the plurality of regions as the target region; or, dividing the depth image into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and using the region with the smallest distance information among the plurality of regions as the target region.
  • the segmentation region having the largest depth information may be selected as the target region, or the segmentation region having the smallest distance information may be selected as the target region, so that the target region of the partial image to be enhanced may be finally determined.
  • alternatively, a region whose depth information or distance information equals a preset value may be selected from the plurality of divided regions as the target area according to actual needs; the preset value is flexibly set according to the specific situation.
  • the following is an example. Suppose the shooting scene contains a white puppy, with a white wall at some distance behind the puppy. Because the pixel information of the white puppy is very similar to that of the white wall, segmentation based on pixel information alone would classify them into the same area. By combining the depth information or distance information, however, the farther wall and the closer puppy are divided into different regions, and the region where the closer puppy is located is used as the target area, achieving accurate division of the image.
  • image segmentation is performed by combining depth information or distance information, which greatly improves the accuracy of segmenting an image and extracting a target region.
  • the foregoing step S30 may include:
  • Step S31: acquire the local average value of the target area pixels as a low frequency part according to a preset local area in the depth image, and use the area of the preset local area with the low frequency part removed as the high frequency part of the target area;
  • Step S32 amplifying the high frequency portion to locally enhance the target area.
  • the ACE algorithm When performing local enhancement processing on the target area, the ACE algorithm employs an unsharp masking technique: first, the target area is divided into two parts. One part is a low frequency part, which can be obtained by low-pass filtering the target area, for example, a relatively smooth part of the pixel in the target area. The other part is a high frequency part, which can be obtained by subtracting the low frequency part from the original target area, for example, an edge part in the target area where the pixel variation is relatively large, that is, a portion where the adjacent pixel point has a large deviation.
  • the low frequency portion can be obtained by calculating the pixel average value of a preset local area centered on each pixel. For example, assuming that the gray value of a point in the image is x(i,j), the preset local area is defined as: an area with a window size of (2n+1)*(2n+1) centered on any pixel point x(i,j) in the preset local area, where n is an integer.
  • the preset partial area is not necessarily a square, and can be flexibly set according to specific conditions.
  • the local average value m_x(i,j) of the pixels of this region is taken as the low frequency part, and can be calculated by the following formula (1):
  • m_x(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)    (1)
  • where (2n+1)^2 is the size of the preset local region, x(k,l) is the gray value of a pixel in the preset local region, k ranges from i-n to i+n, and l ranges from j-n to j+n; the high frequency part is then: x(i,j) - m_x(i,j).
  • the high frequency portion is then amplified to obtain an enhanced image to achieve local enhancement of the target area.
  • the pixel points in the target area are accurately divided into the low frequency part and the high frequency part, so as to enlarge the part where the adjacent pixel points change greatly, and the image capturing effect in the backlight environment is improved.
  • the foregoing step S32 may include: acquiring the local standard deviation of the target area pixels according to the preset local area and the local average of the target area pixels; acquiring the local variance average value of each pixel point of the target area according to the local standard deviation, and acquiring an amplification factor according to the local standard deviation and the local variance average value; and amplifying the high frequency portion according to the amplification factor.
  • when amplifying the high frequency portion in the target area, optionally, first, given the preset local area with a window size of (2n+1)*(2n+1) and the local average m_x(i,j) of the target area pixels, the local variance of the target area pixels is calculated as shown in formula (2):
  • σ_x^2(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [x(k,l) - m_x(i,j)]^2    (2)
  • where (2n+1)^2 is the size of the preset local region, x(k,l) is a pixel point in the preset local region, k ranges from i-n to i+n, l ranges from j-n to j+n, and m_x(i,j) is the local average of the pixels in the target area; taking the square root of the local variance gives the local standard deviation σ_x(i,j).
  • the local variance average value G_σ is then obtained as in equation (3): G_σ = (1/(M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} σ_x(i,j)    (3), where M and N are the width and height of the area (the values can be set according to actual needs), and σ_x(i,j) is the local standard deviation.
  • the amplification factor G(i,j) for amplifying the high-frequency portion is calculated according to the local standard deviation σ_x(i,j) and the local variance average G_σ, for example as: G(i,j) = min(max(σ_x(i,j)/G_σ, 1), R).
  • R is a constant, and the specific value of R can be flexibly set according to a specific situation, for example, R can be 10.
  • the min and max operations limit the value of the amplification factor G(i,j) to the range of [1,R] to prevent the target region from being over-enhanced or not enhanced.
  • the amplification factor needs to be greater than 1 so that the high-frequency component can be enhanced.
  • the high-frequency portion in the target region can be enhanced according to the amplification factor G(i,j): G(i,j)[x(i,j)- m x (i,j)].
  • defining f(i,j) as the enhanced pixel value corresponding to the pixel point x(i,j), the ACE algorithm can then be expressed as: f(i,j) = m_x(i,j) + G(i,j)·[x(i,j) - m_x(i,j)].
  • at the edges of the target area or at other sharply varying pixel portions, the local mean square error is relatively large, and its ratio to the average standard deviation of the entire target area is also large, so this portion needs to be enhanced.
  • the local mean square error of the smooth portion of the target area will be small, and the ratio of the standard deviation to the entire target area will be small, and the portion will not be enhanced.
  • the high-frequency portion of the target region is enlarged according to the calculated amplification factor, so that the target region of the image is clearly visible, and the image capturing effect in the backlight environment is effectively improved.
  • the method of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented through hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.
  • the above technical solution realizes that the mobile terminal accurately segments the target area in the acquired image and locally enhances the target area, thereby effectively improving the image capturing effect in the backlight environment.

Abstract

An image enhancement method includes: a mobile terminal acquiring a plurality of images through a multi-vision visual platform, and acquiring a depth image and position information of objects in the shooting scene according to the plurality of images; performing image segmentation processing on the depth image to obtain a target area according to the position information and the pixel information of the depth image; and performing local enhancement processing on the target area. The above technical solution realizes accurate segmentation of the target area in the image acquired by the multi-vision platform according to the position information of objects in the shooting scene, followed by local enhancement of the target area, improving the image capturing effect in a backlight environment.

Description

Image enhancement method and mobile terminal

Technical Field

This document relates to, but is not limited to, the field of mobile terminal technologies, and in particular, to an image enhancement method and a mobile terminal.

Background

In the process of a mobile terminal capturing an image and performing 2D plane processing on the image, if the image captured by the mobile terminal encounters a backlight environment, the target area in the image becomes dim due to the backlight, so that the background of the entire picture is too prominent and the light-dark effect of the target area is suppressed. When performing image processing, the mobile terminal directly performs image enhancement on the scene in the entire image; with this method the target area in the image can be enhanced, but the background area in the image is already clearly exposed under such shooting conditions, so this enhancement algorithm overexposes the background area and cannot improve the shooting effect.
Summary

The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the protection scope of the claims.

The embodiments of the present invention provide an image enhancement method and a mobile terminal, which can effectively improve the image capturing effect in a backlight environment.

An embodiment of the present invention provides an image-enhanced mobile terminal, including:

an acquiring module, configured to acquire a plurality of images, and acquire a depth image and position information of objects in a shooting scene according to the plurality of images;

a segmentation module, configured to perform image segmentation processing on the depth image to obtain a target area according to the position information and the pixel information of the depth image;

an enhancement module, configured to perform local enhancement processing on the target area.
Optionally, the acquiring module is configured to acquire the plurality of images through a multi-vision visual platform.

Optionally, the position information includes depth information and distance information.

Optionally, the segmentation module is configured to:

divide the depth image into regions according to the depth information and the pixel information of the depth image to obtain a plurality of regions, and use the region with the largest depth information among the plurality of regions as the target area; or,

divide the depth image into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and use the region with the smallest distance information among the plurality of regions as the target area.

Optionally, the depth information is the parallax between the corresponding imaging points, in the plurality of images, of the same point in the shooting scene.

Optionally, the distance information is the distance between the same point in the shooting scene and the camera.

Optionally, when the plurality of images are acquired through a multi-vision visual platform, the acquiring module acquires the distance information in the following manner:

distance = (f × T) / d

where f is the focal length of the left and right cameras in the binocular vision platform, T is the spacing between the two cameras, and d is the depth information.
Optionally, the enhancement module includes:

a dividing unit, configured to acquire the local average value of the target area pixels as a low frequency part according to a preset local area in the depth image, and use the area of the preset local area with the low frequency part removed as the high frequency part of the target area;

an amplifying unit, configured to amplify the high frequency part to locally enhance the target area.

Optionally, the amplifying unit is configured to acquire the local standard deviation of the target area pixels according to the preset local area and the local average value of the target area pixels; acquire the local variance average value of each pixel point of the target area according to the local standard deviation, and acquire an amplification factor according to the local standard deviation and the local variance average value; and amplify the high frequency part according to the amplification factor.

Optionally, the dividing unit is configured to acquire the low frequency part and the high frequency part in the following manner:

assume that the preset local area is an area with a window size of (2n+1)*(2n+1) centered on any pixel point x(i,j) in the preset local area, where n is an integer; then the low frequency part m_x(i,j) is

m_x(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)

where x(k,l) is the gray value of a pixel point in the preset local area, k ranges from i-n to i+n, and l ranges from j-n to j+n;

the high frequency part is: x(i,j) - m_x(i,j).
In addition, an embodiment of the present invention further provides an image enhancement method, including:

a mobile terminal acquiring a plurality of images, and acquiring a depth image and position information of objects in a shooting scene according to the plurality of images;

performing image segmentation processing on the depth image to obtain a target area according to the position information and the pixel information of the depth image;

performing local enhancement processing on the target area.

Optionally, the mobile terminal acquires the plurality of images through a multi-vision visual platform.

Optionally, the position information includes depth information and distance information.

Optionally, performing image segmentation processing on the depth image to obtain a target area includes:

dividing the depth image into regions according to the depth information and the pixel information of the depth image to obtain a plurality of regions, and using the region with the largest depth information among the plurality of regions as the target area; or,

dividing the depth image into regions according to the distance information and the pixel information of the depth image to obtain a plurality of regions, and using the region with the smallest distance information among the plurality of regions as the target area.

Optionally, the depth information is the parallax between the corresponding imaging points, in the plurality of images, of the same point in the shooting scene.

Optionally, the distance information is the distance between the same point in the shooting scene and the camera.

Optionally, when the plurality of images are acquired through a multi-vision visual platform, the distance information is obtained in the following manner:

distance = (f × T) / d

where f is the focal length of the left and right cameras in the binocular vision platform, T is the spacing between the two cameras, and d is the depth information.
Optionally, performing local enhancement processing on the target area includes:

acquiring the local average value of the target area pixels as a low frequency part according to a preset local area in the depth image, and using the area of the preset local area with the low frequency part removed as the high frequency part of the target area;

amplifying the high frequency part to locally enhance the target area.

Optionally, amplifying the high frequency part to locally enhance the target area includes:

acquiring the local standard deviation of the target area pixels according to the preset local area and the local average value of the target area pixels;

acquiring the local variance average value of each pixel point of the target area according to the local standard deviation, and acquiring an amplification factor according to the local standard deviation and the local variance average value;

amplifying the high frequency part according to the amplification factor.

Optionally, the low frequency part and the high frequency part are acquired in the following manner:

assume that the preset local area is an area with a window size of (2n+1)*(2n+1) centered on any pixel point x(i,j) in the preset local area, where n is an integer; then the low frequency part m_x(i,j) is

m_x(i,j) = (1/(2n+1)^2) · Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)

where x(k,l) is the gray value of a pixel point in the preset local area, k ranges from i-n to i+n, and l ranges from j-n to j+n;

the high frequency part is: x(i,j) - m_x(i,j).
In the embodiments of the present invention, the mobile terminal captures images through a multi-vision visual platform, performs image segmentation processing on the image according to the position information of objects in the image to obtain a target area, and then performs local enhancement processing on the target area. The mobile terminal thus achieves accurate segmentation of the target area in the image acquired by the multi-vision platform according to the position information of objects in the shooting scene, as well as local enhancement of the target area, effectively improving the image capturing effect in a backlight environment.

Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;

FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;

FIG. 3 is a schematic structural block diagram of a first embodiment of the image-enhanced mobile terminal of the present invention;

FIG. 4 is a schematic diagram of the imaging principle of a binocular vision platform according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of the effect before image enhancement according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of the effect after image enhancement according to an embodiment of the present invention;

FIG. 7 is a schematic structural block diagram of a second embodiment of the image-enhanced mobile terminal of the present invention;

FIG. 8 is a schematic flowchart of a first embodiment of the image enhancement method of the present invention;

FIG. 9 is a schematic flowchart of a second embodiment of the image enhancement method of the present invention.
Detailed Description

It should be understood that the specific embodiments described herein are merely used to explain the present invention and are not intended to limit the present invention.

Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used merely to facilitate the description of the embodiments of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the embodiments of the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, except for elements particularly used for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
图1为实现本发明各个实施例的移动终端的硬件结构示意。
移动终端100可以包括无线通信单元110、A/V(音频/视频)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图1示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信装置或网络之间的无线电通信。例如,无线通信单元可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播装置接收广播信号。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H)、前向链路媒体(MediaFLO®)的数据广播装置、地面数字广播综合服务(ISDB-T)等等的数字广播装置接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播装置以及上述数字广播装置。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括WLAN(无线LAN)(Wi-Fi)、Wibro(无线宽带)、Wimax(全球微波互联接入)、HSDPA(高速下行链路分组接入)等等。
短程通信模块114是设置为支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。
位置信息模块115是设置为检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是GPS(全球定位装置)。根据当前的技术,GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120设置为接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触发板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触发板以层的形式叠加在显示单元151上时,可以形成触发屏。
感测单元140检测移动终端100的当前状态(例如,移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即,触发输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等,并且生成用于控制移动终端100的操作的命令或信号。例如,当移动终端100实施为滑动型移动电话时,感测单元140可以感测该滑动型电话是打开还是关闭。另外,感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器141,将在下面结合触发屏来对此进行描述。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、设置为连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以设置为接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以设置为在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、 音频输出模块152、警报单元153等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触发板以层的形式彼此叠加以形成触发屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触发屏可设置为检测触发输入压力以及触发输入位置和触发输入面积。
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括拾音器、蜂鸣器等等。
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触发输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触发施加到触发屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括设置为再现(或回放)多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触发屏上执行的手写输入或者图片绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,已经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明实施例能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信装置以及基于卫星的通信装置来操作。
现在将参考图2描述其中根据本发明实施例的移动终端能够操作的通信装置。
这样的通信装置可以使用不同的空中接口和/或物理层。例如,由通信装置使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信装置(UMTS)(特别地,长期演进(LTE))、全球移动通信装置(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信装置,但是这样的教导同样适用于其它类型的装置。
参考图2,CDMA无线通信装置可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC280被构造为与公共电话交换网络(PSTN)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干已知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图2中所示的装置可以包括多个BSC275。
每个BS270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS270。或者,每个分区可以由设置为分集接收的两个或更多天线覆盖。每个BS270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS270也可以被称为基站收发器子装置(BTS)或者其它等效术语。在这样的情况下,术语“基站”可以用于笼统地表示单个BSC275和至少一个BS270。基站也可以被称为“蜂窝站”。或者,特定BS270的各分区可以被称为多个蜂窝站。
如图2中所示,广播发射器(BT)295将广播信号发送给在装置内操作的移动终端100。如图1中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示出了几个全球定位装置(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图2中,描绘了多个卫星300,但是可以理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1中所示的GPS模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信装置的一个典型操作,BS270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS270以将正向链路信号发送到移动终端100。
基于上述移动终端硬件结构、通信装置的结构,提出本发明方法各个实施例。
如图3所示,示出了本发明一种图像增强的移动终端第一实施例。该实施例的图像增强的移动终端包括:
获取模块10,设置为通过多目视觉平台获取多个图像,根据所述多个图像获取深度图像及拍摄场景中物体的位置信息;
本实施例中,图像增强的方案应用于移动终端,该移动终端的类型可根据实际需要进行设置,例如,移动终端可包括手机、相机、iPad。该移动终端设置有多目视觉平台,多目视觉平台由两个或多个数码摄像头组成,这些数码摄像头相对位置固定,能够在同一时刻从不同视角采集图像。以下将以双目视觉平台获取图像为例进行详细说明。
双目视觉是模拟人类视觉原理,使用计算机被动感知距离的方法,从两个或者多个点观察一个物体,获取在不同视角下的图像,根据图像之间像素的匹配关系,通过三角测量原理计算出像素之间的偏移来获取物体的三维信息。可选地,该双目视觉平台包括两个数码摄像头,获取模块10可在同一时刻由双目视觉平台得到两幅图像,这两幅图像可进行立体校正及立体匹配后得到深度图像。立体匹配主要是通过找出每对图像间的对应关系,根据三角测量原理得到视差图,在获得了视差信息后,根据投影模型很容易地可以得到原始图像的深度信息和三维信息。
可以理解的是,多幅图像的获取也可以是通过多个不同的移动终端在同一水平线对同一拍摄场景拍摄得到图像后,通过蓝牙或WiFi等通信模块将得到的图像统一发送至一移动终端进行后续的处理。
为了在后续进行图像分割时能够对目标区域进行准确分离,获取模块10需要获取拍摄场景中物体的位置信息,该位置信息可包括深度信息和距离信息。其中,深度信息为拍摄场景中同一点在所述多个图像中对应成像点之间的视差,距离信息为拍摄场景中物体与相机之间的距离。如图4所示,为双目视觉平台成像原理示意图,假设P为空间中一点,Ol和Or分别为双目视觉平台中左右两个摄像头的中心,xl和xr分别为左右两边的成像点。由点P在左右图像中的成像点的视差,即可得到点P的深度信息为d=xl-xr,得到了物体的深度信息,就可以计算出物体与相机之间的实际距离,使用以下公式计算出P点的距离信息Z:
Z = fT/d
需要说明的是,该距离信息也可以是在双目视觉平台上设置测距传感器,通过测距传感器主动发射红外光在场景中反射来获取场景中物体的距离信息。深度信息或距离信息为后面的目标区域和背景区域的分离提供了一个空间参考特征,有利于图像分割算法的准确性。
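上述由视差求距离的三角测量关系可以用如下示意代码加以说明(Python实现、函数名及示例数值均为本文说明用的假设,并非本实施例的具体实现):

```python
def depth_from_disparity(xl, xr, f, T):
    """根据双目成像原理,由左右成像点坐标计算深度信息(视差)与距离信息。
    xl、xr: 同一点在左右图像中的成像点坐标, f: 摄像头焦距, T: 两摄像头间距。
    返回 (视差 d, 距离 Z)。"""
    d = xl - xr          # 深度信息:d = xl - xr
    if d <= 0:
        raise ValueError("视差应为正值")
    Z = f * T / d        # 距离信息:Z = fT/d,视差越大距离越近
    return d, Z

# 示意:视差越大(物体越近),计算出的距离 Z 越小
d1, z1 = depth_from_disparity(120.0, 100.0, f=700.0, T=0.12)
d2, z2 = depth_from_disparity(110.0, 100.0, f=700.0, T=0.12)
```

可以看到,离摄像头较近的目标区域对应较大的视差(较大的"深度信息")和较小的距离信息,这正是后文区域分割所依赖的空间参考特征。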
分割模块20,设置为根据所述位置信息及所述深度图像的像素信息,对所述深度图像进行图像分割处理获取目标区域;
本实施例中,目标区域可根据拍摄场景的具体情况而灵活设置,例如,目标区域可以是拍摄场景中的人物、动物等,也可以是其他物体。由于拍摄场景中的目标区域和背景区域距离摄像机的距离信息不同,目标区域和背景区域的深度信息也会不同,这为目标区域和背景区域的分离提供了一个空间参考特征。在上述得到深度信息后,分割模块20可根据深度信息及深度图像的像素信息,结合传统的图像分割算法或者meanshift算法对深度图像进行目标区域和背景区域分割,得到目标区域。当然,也可以是在上述得到距离信息后,分割模块20根据距离信息及深度图像的像素信息,结合传统的图像分割算法或者meanshift算法对深度图像进行目标区域和背景区域分割,得到目标区域。传统的图像分割算法在2D平面进行,缺少了拍摄场景的深度信息或距离信息这一重要特征信息,一般很难准确分离出拍摄场景中的目标区域和背景区域;本实施例结合深度信息或距离信息进行图像分割,有利于提高图像分割算法的准确性。
需要说明的是,在通过图像分割算法获取了不同的图像区域后,还需要经过形态学处理等操作,将图像的轮廓进行提取、每个区域内部空洞进行填充,保证图像分割所得到区域的完整性。
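结合深度信息进行分割的思路可以用如下极简示意说明(以简单的深度阈值代替meanshift等实际分割算法,仅为示意性假设):

```python
import numpy as np

def segment_target_by_depth(gray, depth, depth_thresh):
    """结合像素信息与深度信息的极简分割示意:
    深度信息大于阈值(即离摄像头较近)的像素划为目标区域掩码。
    实际实施中应结合传统分割算法或 meanshift 算法,此处仅演示空间参考特征的作用。"""
    assert gray.shape == depth.shape   # 像素信息与深度信息逐像素对应
    mask = depth > depth_thresh        # 依据双目成像原理,深度值越大离摄像头越近
    return mask

# 示意:4x4 深度图中,近处(深度大)的中心区域被选为目标区域
depth = np.array([[1, 1, 1, 1],
                  [1, 9, 9, 1],
                  [1, 9, 9, 1],
                  [1, 1, 1, 1]], float)
gray = np.full_like(depth, 128.0)      # 像素信息完全相同时,仅凭灰度无法区分前后景
mask = segment_target_by_depth(gray, depth, depth_thresh=5.0)
```

此例中目标与背景灰度完全相同,仅凭2D像素信息无法分离,而引入深度信息后即可准确分割。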
增强模块30,设置为对所述目标区域进行局部增强处理。
在上述进行图像分割处理得到目标区域后,增强模块30需要对图像中的目标区域进行局部对比度增强处理。例如,在逆光环境下进行图像拍摄时,得到的原始图像如图5所示,图5中由于背景强光像素的干扰,使得目标区域的人物看起来比较暗淡。为了更好地改善图像的效果,本实施例可利用局部对比度增强算法增强分离出来的目标区域,得到增强后的图像如图6所示,图6中,目标区域的人物已经过增强处理,看起来非常清晰。图像局部对比度增强算法有多种,本实施例可采用自适应对比度增强算法(ACE算法),以下实施例将进行详细说明。
本发明实施例移动终端通过双目视觉平台进行图像拍摄,并根据拍摄场景中物体的深度信息及图像的像素信息,或根据距离信息及图像的像素信息,对图像进行图像分割处理,准确分离出目标区域和背景区域,从而获取目标区域。然后利用图像局部对比度增强算法对目标区域进行增强处理,而背景区域保持不变,使得图像的目标区域和背景区域的效果均清晰。移动终端实现了根据深度信息或距离信息对多目视觉平台获取的图像中目标区域进行准确分割,及对目标区域进行局部增强,有效地改善了在逆光环境下图像的拍摄效果。
可选地,基于上述第一实施例,本实施例中,上述位置信息包括深度信息和距离信息,上述分割模块20是设置为,根据所述深度信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中深度信息最大的区域作为目标区域;或者,
根据所述距离信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中距离信息最小的区域作为目标区域。
本实施例中,分割模块20将具有相似性质的像素集合起来构成区域后,再对区域按照具有相近或相同深度信息进行划分,或者是对区域按照具有相近或相同距离信息进行划分,可以得到多个不同的图像区域。根据拍摄图像的目标区域离摄像头的距离一般较背景区域离相机的距离近这一先验条件,结合拍摄场景中的深度信息,依据双目成像的原理,距离摄像头越近,其深度值越大,因此,可选择深度信息最大的分割区域作为目标区域,或者是选择距离信息最小的分割区域作为目标区域,这样可以最终确定需要进行增强的局部图像的目标区域。当然,也可以根据实际需要,选择所分割的多个区域中深度信息或距离信息为预设值的区域作为目标区域,该预设值根据具体情况而灵活设置。
以下进行举例说明,假设在一拍摄场景中,有一条白色的小狗,小狗的身后较远的距离有一堵白色的墙,在进行图像分割的过程中,在按照像素进行区域划分时,由于白色的小狗与白色的墙的像素信息非常相近,将它们归类到同一区域。此时,利用本实施例的方案,再根据图像的深度信息或距离信息,将距离较远的墙与距离较近的小狗划分为不同的区域,并将距离较近的小狗所在的区域作为目标区域,实现了对图像进行精确划分。
本实施例结合深度信息或距离信息进行图像分割,大大提高了对图像进行分割及提取目标区域的准确性。
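在区域分割结果中按"深度信息最大"选取目标区域,可以用如下示意代码表达(区域标号图、函数名与示例数值均为说明用的假设):

```python
import numpy as np

def pick_target_region(labels, depth):
    """在区域分割结果中,选取区域内深度平均值最大(即离摄像头最近)的区域
    作为目标区域。labels: 与 depth 同形状的整数区域标号图。纯示意实现。"""
    ids = np.unique(labels)
    # 计算每个区域的深度信息平均值
    means = [depth[labels == r].mean() for r in ids]
    return ids[int(np.argmax(means))]   # 深度信息最大的区域标号

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 2, 2]])
depth  = np.array([[1., 1., 9.],
                   [1., 9., 9.],
                   [4., 4., 4.]])
target = pick_target_region(labels, depth)
```

若以距离信息为依据,只需将 `argmax` 换成 `argmin` 即对应"距离信息最小的区域作为目标区域"。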
可选地,如图7所示,基于上述第一实施例,提出了本发明图像增强的移动终端第二实施例,该实施例中上述增强模块30可包括:
划分单元31,设置为根据所述深度图像中预设局部区域获取所述目标区域像素的局部平均值作为低频部分,将所述预设局部区域中除去低频部分的区域作为所述目标区域的高频部分;
放大单元32,设置为对所述高频部分进行放大,以局部增强所述目标区域。
在对目标区域进行局部增强处理时,ACE算法采用了反锐化掩模技术:首先由划分单元31将目标区域被分成两个部分。一部分是低频部分,该低频部分可以通过对目标区域进行低通滤波获得,例如,目标区域中像素比较平滑的部分。另一部分是高频部分,该高频部分可以通过原目标区域减去低频部分获取,例如,目标区域中像素变化比较大的边缘部分,即相邻像素点偏差较大的部分。
可选地,低频部分可通过计算以像素为中心的预设局部区域的像素平均值来实现,例如,假设图像中一点的灰度值为x(i,j),预设局部区域的定义为:以预设局部区域中任意像素点为中心,窗口大小为(2n+1)*(2n+1)的区域,其中n为一个整数。当然,该预设局部区域也不一定就是正方形,可根据具体情况而灵活设置。该区域像素的局部平均值mx(i,j)作为低频部分,可以用以下公式(1)计算得到:
mx(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)        公式(1)
公式(1)中,预设局部区域的大小为(2n+1)2,x(k,l)为预设局部区域中的像素点的灰度值,k的范围为i-n~i+n,l的范围为j-n~j+n,从而可得到高频部分为:x(i,j)-mx(i,j)。然后放大单元32将高频部分放大得到增强的图像,以实现对目标区域进行局部增强。
本实施例对目标区域中的像素点进行低频部分及高频部分的准确划分,以便将相邻像素点变化较大的部分进行放大,改善在逆光环境下图像的拍摄效果。
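公式(1)的低频/高频划分可以用如下示意代码实现(边界采用就近填充,这一处理方式及示例数值均为实现上的假设):

```python
import numpy as np

def split_low_high(x, n):
    """按公式(1)计算以每个像素为中心、窗口大小 (2n+1)*(2n+1) 的局部平均值
    mx(i,j) 作为低频部分,高频部分为 x(i,j)-mx(i,j)。"""
    k = 2 * n + 1
    pad = np.pad(x.astype(float), n, mode='edge')   # 边界就近填充(示意性假设)
    mx = np.zeros(x.shape, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            mx[i, j] = pad[i:i + k, j:j + k].mean() # 公式(1)的局部平均
    high = x - mx                                   # 高频部分:x(i,j)-mx(i,j)
    return mx, high

x = np.array([[10., 10., 10.],
              [10., 100., 10.],
              [10., 10., 10.]])
mx, high = split_low_high(x, n=1)
```

像素平滑处高频分量接近0,而相邻像素偏差较大的边缘处高频分量较大,正是后续要放大的部分。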
可选地,基于上述第二实施例,本实施例中,上述放大单元32是设置为,根据所述预设局部区域及所述目标区域像素的局部平均值,获取所述目标区域像素的局部标准差;根据所述局部标准差获取所述目标区域每个像素点的局部方差平均值,并根据所述局部标准差及所述局部方差平均值获取放大系数;根据所述放大系数对所述高频部分进行放大。
本实施例中,放大单元32对目标区域中高频部分进行放大。可选地,首先放大单元32根据上述给定的预设局部区域(窗口大小为(2n+1)*(2n+1)的区域)及目标区域像素的局部平均值mx(i,j),按公式(2)计算目标区域像素的局部方差σx²(i,j):

σx²(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [x(k,l)-mx(i,j)]²        公式(2)

公式(2)中,(2n+1)²为预设局部区域的大小,x(k,l)为预设局部区域中的像素点,k的范围为i-n~i+n,l的范围为j-n~j+n,mx(i,j)为目标区域像素的局部平均值。由局部方差σx²(i,j)开方即可得到局部标准差σx(i,j)。
然后根据局部标准差σx(i,j)计算目标区域每个像素点的局部方差平均值Gδ,计算公式如下:
Gδ = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} σx(i,j)        公式(3)
公式(3)中,M和N为区域的宽度和高度,具体数值可根据实际需要进行设置,σx(i,j)为局部标准差。
放大单元32根据局部标准差σx(i,j)及局部方差平均值Gδ计算对高频部分进行放大的放大系数G(i,j)如下:
G(i,j) = min(max(1, σx(i,j)/Gδ), R)        公式(4)
公式(4)中,R为常数,R的具体取值可根据具体情况而灵活设置,例如,R可为10。min与max操作是将放大系数G(i,j)的值限定在[1,R]的范围内,以防止目标区域增强过度或者增强不明显,一般情况下放大系数需要大于1,高频成分才能得到增强。
因此,放大单元32在得到放大系数G(i,j)后,可根据放大系数G(i,j)对目标区域中的高频部分进行增强为:G(i,j)[x(i,j)-mx(i,j)]。
定义f(i,j)表示像素点x(i,j)对应的增强后的像素值。则ACE算法可以表示如下:
f(i,j)=mx(i,j)+G(i,j)[x(i,j)-mx(i,j)]      公式(5)
公式(5)中,在目标区域的边缘或者其他像素变化剧烈部分的局部均方差比较大,相对整个目标区域的平均标准差的比值也会大,需要对该部分进行增强。在目标区域的平滑部分的局部均方差就会很小,相对整个目标区域的标准差的比值也会小,该部分不会被增强。
本实施例根据计算得到放大系数对目标区域的高频部分进行放大,使得图像的目标区域清晰可见,有效地改善了在逆光环境下图像的拍摄效果。
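公式(1)~(5)描述的ACE增强流程可以用如下示意代码串起来(边界填充方式、窗口形状及示例数值均为示意性假设,并非本实施例的具体实现):

```python
import numpy as np

def ace_enhance(x, n=1, R=10.0):
    """自适应对比度增强(ACE)的极简示意:逐像素计算局部平均 mx(公式1)、
    局部标准差 σx(公式2)、σx 的全图平均 Gδ(公式3)与放大系数 G(公式4),
    再按公式(5)放大高频部分。"""
    x = x.astype(float)
    h, w = x.shape
    k = 2 * n + 1
    pad = np.pad(x, n, mode='edge')                 # 边界就近填充(示意性假设)
    mx = np.zeros_like(x)
    sd = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + k, j:j + k]
            mx[i, j] = win.mean()                   # 公式(1)
            sd[i, j] = win.std()                    # 公式(2)开方得局部标准差
    G_delta = sd.mean()                             # 公式(3)
    gain = np.minimum(np.maximum(1.0, sd / G_delta), R)   # 公式(4),限定在[1,R]
    return mx + gain * (x - mx)                     # 公式(5)

x = np.full((5, 5), 10.0)
x[2, 2] = 100.0            # 模拟逆光下暗淡背景中的一个高对比像素
f = ace_enhance(x, n=1)
```

平滑区域的放大系数被钳制为1、保持不变,而像素变化剧烈处的高频成分被放大,与正文的分析一致。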
对应地,如图8所示,提出本发明一种图像增强方法第一实施例。该实施例的图像增强方法包括:
步骤S10、移动终端通过多目视觉平台获取多个图像,根据所述多个图像获取深度图像及拍摄场景中物体的位置信息;
本实施例中,图像增强的方案应用于移动终端,该移动终端的类型可根据实际需要进行设置,例如,移动终端可包括手机、相机、iPad。该移动终端设置有多目视觉平台,多目视觉平台由两个或多个数码摄像头组成,这些数码摄像头相对位置固定,能够在同一时刻从不同视角采集图像。以下将以双目视觉平台获取图像为例进行详细说明。
双目视觉是模拟人类视觉原理,使用计算机被动感知距离的方法,从两个或者多个点观察一个物体,获取在不同视角下的图像,根据图像之间像素的匹配关系,通过三角测量原理计算出像素之间的偏移来获取物体的三维信息。可选地,该双目视觉平台包括两个数码摄像头,可在同一时刻得到两幅图像,这两幅图像可进行立体校正及立体匹配后得到深度图像。立体匹配主要是通过找出每对图像间的对应关系,根据三角测量原理得到视差图,在获得了视差信息后,根据投影模型很容易地可以得到原始图像的深度信息和三维信息。
可以理解的是,多幅图像的获取也可以是通过多个不同的移动终端在同一水平线对同一拍摄场景拍摄得到图像后,通过蓝牙或WiFi等通信模块将得到的图像统一发送至一移动终端进行后续的处理。
为了在后续进行图像分割时能够对目标区域进行准确分离,需要获取拍摄场景中物体的位置信息,该位置信息可包括深度信息和距离信息。其中,深度信息为两幅图像中对应成像点之间的视差,距离信息为拍摄场景中物体与相机之间的距离。如图4所示,为双目视觉平台成像原理示意图,假设P为空间中某一点,Ol和Or分别为双目视觉平台中左右两个摄像头的中心,xl和xr分别为左右两边的成像点。由点P在左右图像中的成像点的视差,即可得到点P的深度信息为d=xl-xr,得到了物体的深度信息,就可以计算出物体与相机之间的实际距离,使用以下公式计算出P点的距离信息Z:
Z = fT/d
需要说明的是,该距离信息也可以是在双目视觉平台上设置测距传感器,通过测距传感器主动发射红外光在场景中反射来获取场景中物体的距离信息。深度信息或距离信息为后面的目标区域和背景区域的分离提供了一个空间参考特征,有利于图像分割算法的准确性。
步骤S20、根据所述位置信息及所述深度图像的像素信息,对所述深度图像进行图像分割处理获取目标区域;
本实施例中,目标区域可根据拍摄场景的具体情况而灵活设置,例如,目标区域可以是拍摄场景中的人物、动物等,也可以是其他物体。由于拍摄场景中的目标区域和背景区域距离摄像机的距离信息不同,目标区域和背景区域的深度信息也会不同,这为目标区域和背景区域的分离提供了一个空间参考特征。在上述得到深度信息后,可根据深度信息及深度图像的像素信息,结合传统的图像分割算法或者meanshift算法对深度图像进行目标区域和背景区域分割,得到目标区域。当然,也可以是在上述得到距离信息后,根据距离信息及深度图像的像素信息,结合传统的图像分割算法或者meanshift算法对深度图像进行目标区域和背景区域分割,得到目标区域。传统的图像分割算法在2D平面进行,缺少了拍摄场景的深度信息或距离信息这一重要特征信息,一般很难准确分离出拍摄场景中的目标区域和背景区域;本实施例结合深度信息或距离信息进行图像分割,有利于提高图像分割算法的准确性。
需要说明的是,在通过图像分割算法获取了不同的图像区域后,还需要经过形态学处理等操作,将图像的轮廓进行提取、每个区域内部空洞进行填充,保证图像分割所得到区域的完整性。
步骤S30、对所述目标区域进行局部增强处理。
在上述进行图像分割处理得到目标区域后,需要对图像中的目标区域进行局部对比度增强处理。例如,在逆光环境下进行图像拍摄时,得到的原始图像如图5所示,图5中由于背景强光像素的干扰,使得目标区域的人物看起来比较暗淡。为了更好地改善图像的效果,本实施例可利用局部对比度增强算法增强分离出来的目标区域,得到增强后的图像如图6所示,图6中,目标区域的人物已经过增强处理,看起来非常清晰。图像局部对比度增强算法有多种,本实施例可采用自适应对比度增强算法(ACE算法),以下实施例将进行详细说明。
本发明实施例移动终端通过双目视觉平台进行图像拍摄,并根据拍摄场景中物体的深度信息及图像的像素信息,或根据距离信息及图像的像素信息,对图像进行图像分割处理,准确分离出目标区域和背景区域,从而获取目标区域。然后利用图像局部对比度增强算法对目标区域进行增强处理,而背景区域保持不变,使得图像的目标区域和背景区域的效果均清晰。移动终端实现了根据深度信息或距离信息对多目视觉平台获取的图像中目标区域进行准确分割,及对目标区域进行局部增强,有效地改善了在逆光环境下图像的拍摄效果。
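上述"分割目标区域、仅对目标区域增强、背景保持不变"的整体流程可以用如下极简示意串联(此处用固定放大系数代替公式(4)的自适应系数,阈值分割代替实际的分割算法,均为示意性简化):

```python
import numpy as np

def enhance_backlit(gray, depth, depth_thresh, gain=2.0):
    """整体流程示意:深度信息大于阈值(离摄像头较近)的像素划为目标区域,
    仅在目标区域内放大高频部分,背景区域保持不变。"""
    gray = gray.astype(float)
    mask = depth > depth_thresh
    h, w = gray.shape
    pad = np.pad(gray, 1, mode='edge')
    # 3x3 局部平均作为低频部分(对应公式(1)中 n=1 的情形)
    mx = sum(pad[di:di + h, dj:dj + w]
             for di in range(3) for dj in range(3)) / 9.0
    out = gray.copy()
    # 仅目标区域按"低频 + 放大系数 × 高频"增强,背景像素原样保留
    out[mask] = mx[mask] + gain * (gray[mask] - mx[mask])
    return out

gray = np.array([[10., 10., 10.],
                 [10., 40., 10.],
                 [10., 10., 10.]])
depth = np.array([[1., 1., 1.],
                  [1., 9., 1.],
                  [1., 1., 1.]])
out = enhance_backlit(gray, depth, depth_thresh=5.0)
```

增强后仅近处的目标像素对比度被拉开,背景像素值不变,与本实施例"目标区域和背景区域效果均清晰"的效果对应。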
可选地,基于上述第一实施例,本实施例中,上述位置信息可包括深度信息或距离信息,上述步骤S20可包括:
根据所述深度信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中深度信息最大的区域作为目标区域;或者,
根据所述距离信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中距离信息最小的区域作为目标区域。
本实施例中,将具有相似性质的像素集合起来构成区域后,再按照具有相近或相同深度信息进行区域划分,或者是按照具有相近或相同距离信息进行区域划分,可以得到多个不同的图像区域。根据拍摄图像的目标区域离摄像头的距离一般较背景区域离相机的距离近这一先验条件,结合拍摄场景中的深度信息,依据双目成像的原理,距离摄像头越近,其深度值越大,因此,可选择深度信息最大的分割区域作为目标区域,或者是选择距离信息最小的分割区域作为目标区域,这样可以最终确定需要进行增强的局部图像的目标区域。当然,也可以根据实际需要,选择所分割的多个区域中深度信息或距离信息为预设值的区域作为目标区域,该预设值根据具体情况而灵活设置。
以下进行举例说明,假设在一拍摄场景中,有一条白色的小狗,小狗的身后较远的距离有一堵白色的墙,在进行图像分割的过程中,在按照像素进行区域划分时,由于白色的小狗与白色的墙的像素信息非常相近,将它们归类到同一区域。此时,利用本实施例的方案,再根据图像的深度信息或距离信息,将距离较远的墙与距离较近的小狗划分为不同的区域,并将距离较近的小狗所在的区域作为目标区域,实现了对图像进行精确划分。
本实施例结合深度信息或距离信息进行图像分割,大大提高了对图像进行分割及提取目标区域的准确性。
可选地,如图9所示,基于上述第一实施例,提出了本发明图像增强方法第二实施例,该实施例中上述步骤S30可包括:
步骤S31、根据所述深度图像中预设局部区域获取所述目标区域像素的局部平均值作为低频部分,将所述预设局部区域中除去低频部分的区域作为所述目标区域的高频部分;
步骤S32、对所述高频部分进行放大,以局部增强所述目标区域。
在对目标区域进行局部增强处理时,ACE算法采用了反锐化掩模技术:首先将目标区域被分成两个部分。一部分是低频部分,该低频部分可以通过对目标区域进行低通滤波获得,例如,目标区域中像素比较平滑的部分。另一部分是高频部分,该高频部分可以通过原目标区域减去低频部分获取,例如,目标区域中像素变化比较大的边缘部分,即相邻像素点偏差较大的部分。
可选地,低频部分可通过计算以像素为中心的预设局部区域的像素平均值来实现,例如,假设图像中某点的灰度值为x(i,j),预设局部区域的定义为:以预设局部区域中任意像素点x(i,j)为中心,窗口大小为(2n+1)*(2n+1)的区域,其中n为一个整数。当然,该预设局部区域也不一定就是正方形,可根据具体情况而灵活设置。该区域像素的局部平均值mx(i,j)作为低频部分,可以用以下公式(1)计算得到:
mx(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)        公式(1)
公式(1)中,预设局部区域的大小为(2n+1)2,x(k,l)为预设局部区域中的像素点的灰度值,k的范围为i-n~i+n,l的范围为j-n~j+n,从而可得到高频部分为:x(i,j)-mx(i,j)。然后将高频部分放大得到增强的图像,以实现对目标区域进行局部增强。
本实施例对目标区域中的像素点进行低频部分及高频部分的准确划分,以便将相邻像素点变化较大的部分进行放大,改善在逆光环境下图像的拍摄效果。
可选地,基于上述第二实施例,本实施例中,上述步骤S32可包括:
根据所述预设局部区域及所述目标区域像素的局部平均值,获取所述目标区域像素的局部标准差;
根据所述局部标准差获取所述目标区域每个像素点的局部方差平均值,并根据所述局部标准差及所述局部方差平均值获取放大系数;
根据所述放大系数对所述高频部分进行放大。
本实施例中,对目标区域中高频部分进行放大。可选地,首先根据上述给定的预设局部区域(窗口大小为(2n+1)*(2n+1)的区域)及目标区域像素的局部平均值mx(i,j),按公式(2)计算目标区域像素的局部方差σx²(i,j):

σx²(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [x(k,l)-mx(i,j)]²        公式(2)

公式(2)中,(2n+1)²为预设局部区域的大小,x(k,l)为预设局部区域中的像素点,k的范围为i-n~i+n,l的范围为j-n~j+n,mx(i,j)为目标区域像素的局部平均值。由局部方差σx²(i,j)开方即可得到局部标准差σx(i,j)。
然后根据局部标准差σx(i,j)计算目标区域每个像素点的局部方差平均值Gδ,计算公式如下:
Gδ = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} σx(i,j)        公式(3)
公式(3)中,M和N为区域的宽度和高度,具体数值可根据实际需要进行设置,σx(i,j)为局部标准差。
根据局部标准差σx(i,j)及局部方差平均值Gδ计算对高频部分进行放大的放大系数G(i,j)如下:
G(i,j) = min(max(1, σx(i,j)/Gδ), R)        公式(4)
公式(4)中,R为常数,R的具体取值可根据具体情况而灵活设置,例如,R可为10。min与max操作是将放大系数G(i,j)的值限定在[1,R]的范围内,以防止目标区域增强过度或者增强不明显,一般情况下放大系数需要大于1,高频成分才能得到增强。
因此,在得到放大系数G(i,j)后,可根据放大系数G(i,j)对目标区域中的高频部分进行增强为:G(i,j)[x(i,j)-mx(i,j)]。
定义f(i,j)表示像素点x(i,j)对应的增强后的像素值。则ACE算法可以表示如下:
f(i,j)=mx(i,j)+G(i,j)[x(i,j)-mx(i,j)]       公式(5)
公式(5)中,在目标区域的边缘或者其他像素变化剧烈部分的局部均方差比较大,相对整个目标区域的平均标准差的比值也会大,需要对该部分进行增强。在目标区域的平滑部分的局部均方差就会很小,相对整个目标区域的标准差的比值也会小,该部分不会被增强。
本实施例根据计算得到放大系数对目标区域的高频部分进行放大,使得图像的目标区域清晰可见,有效地改善了在逆光环境下图像的拍摄效果。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
以上仅为本发明的可选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
工业实用性
上述技术方案实现了移动终端对获取的图像中目标区域进行准确分割,及对目标区域进行局部增强,有效地改善了在逆光环境下图像的拍摄效果。

Claims (20)

  1. 一种图像增强的移动终端,所述图像增强的移动终端包括:
    获取模块,设置为获取多个图像,根据所述多个图像获取深度图像及拍摄场景中物体的位置信息;
    分割模块,设置为根据所述位置信息及所述深度图像的像素信息,对所述深度图像进行图像分割处理获取目标区域;
    增强模块,设置为对所述目标区域进行局部增强处理。
  2. 如权利要求1所述的图像增强的移动终端,其中,
    获取模块,是设置为通过多目视觉平台获取多个图像。
  3. 如权利要求1所述的图像增强的移动终端,其中,所述位置信息包括深度信息和距离信息。
  4. 如权利要求3所述的图像增强的移动终端,其中,所述分割模块是设置为:
    根据所述深度信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中深度信息最大的区域作为目标区域;或者,
    根据所述距离信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中距离信息最小的区域作为目标区域。
  5. 如权利要求4所述的图像增强的移动终端,其中,所述深度信息为拍摄场景中同一点在所述多个图像中对应成像点之间的视差。
  6. 如权利要求5所述的图像增强的移动终端,其中,所述距离信息为拍摄场景中所述同一点与相机之间的距离。
  7. 如权利要求6所述的图像增强的移动终端,其中,当所述多个图像是通过多目视觉平台获取时,所述获取模块是通过如下方式获取距离信息:
    Z = fT/d
    f是双目视觉平台中左右两个摄像头的焦距,T是两个摄像头之间的间距,d是所述深度信息。
  8. 如权利要求1所述的图像增强的移动终端,其中,所述增强模块包括:
    划分单元,设置为根据所述深度图像中预设局部区域获取所述目标区域像素的局部平均值作为低频部分,将所述预设局部区域中除去低频部分的区域作为所述目标区域的高频部分;
    放大单元,设置为对所述高频部分进行放大,以局部增强所述目标区域。
  9. 如权利要求8所述的图像增强的移动终端,其中,所述放大单元是设置为:
    根据所述预设局部区域及所述目标区域像素的局部平均值,获取所述目标区域像素的局部标准差;根据所述局部标准差获取所述目标区域每个像素点的局部方差平均值,并根据所述局部标准差及所述局部方差平均值获取放大系数;根据所述放大系数对所述高频部分进行放大。
  10. 如权利要求8所述的图像增强的移动终端,其中,划分单元是设置为通过如下方式获取所述低频部分和高频部分:
    假设预设局部区域为:以所述预设局部区域中任意像素点x(i,j)为中心,窗口大小为(2n+1)*(2n+1)的区域,其中n为整数,则低频部分mx(i,j)为
    mx(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)
    x(k,l)为所述预设局部区域中的像素点的灰度值,k的范围为i-n~i+n,l的范围为j-n~j+n;
    所述高频部分为:x(i,j)-mx(i,j)。
  11. 一种图像增强方法,所述图像增强方法包括以下步骤:
    移动终端获取多个图像,根据所述多个图像获取深度图像及拍摄场景中物体的位置信息;
    根据所述位置信息及所述深度图像的像素信息,对所述深度图像进行图像分割处理获取目标区域;
    对所述目标区域进行局部增强处理。
  12. 如权利要求11所述的图像增强方法,其中,
    移动终端通过多目视觉平台获取多个图像。
  13. 如权利要求11所述的图像增强方法,其中,所述位置信息包括深度信息和距离信息。
  14. 如权利要求13所述的图像增强方法,其中,对所述深度图像进行图像分割处理获取目标区域包括:
    根据所述深度信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中深度信息最大的区域作为目标区域;或者,
    根据所述距离信息及所述深度图像的像素信息,对所述深度图像进行区域分割得到多个区域,将所述多个区域中距离信息最小的区域作为目标区域。
  15. 如权利要求14所述的图像增强方法,其中,所述深度信息为拍摄场景中同一点在所述多个图像中对应成像点之间的视差。
  16. 如权利要求15所述的图像增强方法,其中,所述距离信息为拍摄场景中所述同一点与相机之间的距离。
  17. 如权利要求16所述的图像增强方法,其中,
    当所述多个图像是通过多目视觉平台获取时,所述距离信息通过如下方式获得:
    Z = fT/d
    f是双目视觉平台中左右两个摄像头的焦距,T是两个摄像头之间的间距,d是所述深度信息。
  18. 如权利要求11所述的图像增强方法,其中,所述对所述目标区域进行局部增强处理包括:
    根据所述深度图像中预设局部区域获取所述目标区域像素的局部平均值作为低频部分,将所述预设局部区域中除去低频部分的区域作为所述目标区域的高频部分;
    对所述高频部分进行放大,以局部增强所述目标区域。
  19. 如权利要求18所述的图像增强方法,其中,所述对所述高频部分进行放大,以局部增强所述目标区域包括:
    根据所述预设局部区域及所述目标区域像素的局部平均值,获取所述目标区域像素的局部标准差;
    根据所述局部标准差获取所述目标区域每个像素点的局部方差平均值,并根据所述局部标准差及所述局部方差平均值获取放大系数;
    根据所述放大系数对所述高频部分进行放大。
  20. 如权利要求18所述的图像增强方法,其中,通过如下方式获取所述低频部分和高频部分:
    假设预设局部区域为:以所述预设局部区域中任意像素点x(i,j)为中心,窗口大小为(2n+1)*(2n+1)的区域,其中n为整数,则低频部分mx(i,j)为
    mx(i,j) = (1/(2n+1)²) Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} x(k,l)
    x(k,l)为预设局部区域中的像素点的灰度值,k的范围为i-n~i+n,l的范围为j-n~j+n;
    所述高频部分为:x(i,j)-mx(i,j)。
PCT/CN2016/103085 2015-10-23 2016-10-24 图像增强方法及移动终端 WO2017067526A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510698241.2 2015-10-23
CN201510698241.2A CN105303543A (zh) 2015-10-23 2015-10-23 图像增强方法及移动终端

Publications (1)

Publication Number Publication Date
WO2017067526A1 true WO2017067526A1 (zh) 2017-04-27

Family

ID=55200767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/103085 WO2017067526A1 (zh) 2015-10-23 2016-10-24 图像增强方法及移动终端

Country Status (2)

Country Link
CN (1) CN105303543A (zh)
WO (1) WO2017067526A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325905A (zh) * 2018-08-29 2019-02-12 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN111414800A (zh) * 2020-02-17 2020-07-14 妙微(杭州)科技有限公司 图像中小目标识别监控方法及其训练集的获取方法
CN111415380A (zh) * 2020-03-03 2020-07-14 智方达(天津)科技有限公司 一种基于景深信息的视频运动放大方法
CN112200035A (zh) * 2020-09-29 2021-01-08 深圳市优必选科技股份有限公司 用于模拟拥挤场景的图像获取方法、装置和视觉处理方法
CN114842578A (zh) * 2022-04-26 2022-08-02 深圳市凯迪仕智能科技有限公司 智能锁、拍摄控制方法及相关装置
CN115423930A (zh) * 2022-07-28 2022-12-02 荣耀终端有限公司 一种图像采集方法及电子设备
CN115861321A (zh) * 2023-02-28 2023-03-28 深圳市玄羽科技有限公司 应用于工业互联网的生产环境检测方法及系统

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303543A (zh) * 2015-10-23 2016-02-03 努比亚技术有限公司 图像增强方法及移动终端
CN105959547B (zh) * 2016-05-25 2019-09-20 努比亚技术有限公司 拍照处理装置及方法
TWI607410B (zh) * 2016-07-06 2017-12-01 虹光精密工業股份有限公司 具有分區影像處理功能的影像處理設備及影像處理方法
CN106997595A (zh) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 基于景深的图像颜色处理方法、处理装置及电子装置
CN106991696B (zh) * 2017-03-09 2020-01-24 Oppo广东移动通信有限公司 逆光图像处理方法、逆光图像处理装置及电子装置
CN110115025B (zh) * 2017-03-09 2022-05-20 Oppo广东移动通信有限公司 基于深度的控制方法、控制装置及电子装置
CN106803920B (zh) * 2017-03-17 2020-07-10 广州视源电子科技股份有限公司 一种图像处理的方法、装置及智能会议终端
CN106851119B (zh) * 2017-04-05 2020-01-03 奇酷互联网络科技(深圳)有限公司 一种图片生成的方法和设备以及移动终端
CN107277356B (zh) * 2017-07-10 2020-02-14 Oppo广东移动通信有限公司 逆光场景的人脸区域处理方法和装置
CN108053371B (zh) * 2017-11-30 2022-04-19 努比亚技术有限公司 一种图像处理方法、终端及计算机可读存储介质
CN108038452B (zh) * 2017-12-15 2020-11-03 厦门瑞为信息技术有限公司 一种基于局部图像增强的家电手势快速检测识别方法
CN108805883B (zh) * 2018-06-08 2021-04-16 Oppo广东移动通信有限公司 一种图像分割方法、图像分割装置及电子设备
CN108932702B (zh) * 2018-06-13 2020-10-09 北京微播视界科技有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
CN109035159A (zh) * 2018-06-27 2018-12-18 努比亚技术有限公司 一种图像优化处理方法、移动终端及计算机可读存储介质
CN109472767B (zh) * 2018-09-07 2022-02-08 浙江大丰实业股份有限公司 舞台灯具缺失状态分析系统
CN109242802B (zh) * 2018-09-28 2021-06-15 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读介质
CN111932479A (zh) * 2020-08-10 2020-11-13 中国科学院上海微系统与信息技术研究所 数据增强方法、系统以及终端
CN114255173A (zh) * 2020-09-24 2022-03-29 苏州科瓴精密机械科技有限公司 粗糙度补偿方法、系统、图像处理设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610421A (zh) * 2008-06-17 2009-12-23 深圳华为通信技术有限公司 视频通讯方法、装置及系统
US20110064299A1 (en) * 2009-09-14 2011-03-17 Fujifilm Corporation Image processing apparatus and image processing method
CN104202524A (zh) * 2014-09-02 2014-12-10 三星电子(中国)研发中心 一种逆光拍摄方法和装置
CN104992429A (zh) * 2015-04-23 2015-10-21 北京宇航时代科技发展有限公司 一种基于图像局部增强的山体裂缝检测方法
CN105303543A (zh) * 2015-10-23 2016-02-03 努比亚技术有限公司 图像增强方法及移动终端

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007031873A2 (en) * 2005-09-14 2007-03-22 Rgbright, Inc. Image enhancement and compression
CN103839231A (zh) * 2012-11-27 2014-06-04 中国科学院沈阳自动化研究所 一种基于人眼视觉最小探测概率最大化的图像增强方法
CN103632351B (zh) * 2013-12-16 2017-01-11 武汉大学 一种基于亮度基准漂移的全天候交通图像增强方法


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325905B (zh) * 2018-08-29 2023-10-13 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN109325905A (zh) * 2018-08-29 2019-02-12 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN111414800B (zh) * 2020-02-17 2023-08-01 妙微(杭州)科技有限公司 图像中小目标识别监控方法及其训练集的获取方法
CN111414800A (zh) * 2020-02-17 2020-07-14 妙微(杭州)科技有限公司 图像中小目标识别监控方法及其训练集的获取方法
CN111415380A (zh) * 2020-03-03 2020-07-14 智方达(天津)科技有限公司 一种基于景深信息的视频运动放大方法
CN111415380B (zh) * 2020-03-03 2022-08-02 智方达(天津)科技有限公司 一种基于景深信息的视频运动放大方法
CN112200035B (zh) * 2020-09-29 2023-09-05 深圳市优必选科技股份有限公司 用于模拟拥挤场景的图像获取方法、装置和视觉处理方法
CN112200035A (zh) * 2020-09-29 2021-01-08 深圳市优必选科技股份有限公司 用于模拟拥挤场景的图像获取方法、装置和视觉处理方法
CN114842578A (zh) * 2022-04-26 2022-08-02 深圳市凯迪仕智能科技有限公司 智能锁、拍摄控制方法及相关装置
CN114842578B (zh) * 2022-04-26 2024-04-05 深圳市凯迪仕智能科技股份有限公司 智能锁、拍摄控制方法及相关装置
CN115423930A (zh) * 2022-07-28 2022-12-02 荣耀终端有限公司 一种图像采集方法及电子设备
CN115423930B (zh) * 2022-07-28 2023-09-26 荣耀终端有限公司 一种图像采集方法及电子设备
CN115861321A (zh) * 2023-02-28 2023-03-28 深圳市玄羽科技有限公司 应用于工业互联网的生产环境检测方法及系统
CN115861321B (zh) * 2023-02-28 2023-09-05 深圳市玄羽科技有限公司 应用于工业互联网的生产环境检测方法及系统

Also Published As

Publication number Publication date
CN105303543A (zh) 2016-02-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16856957

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16856957

Country of ref document: EP

Kind code of ref document: A1