WO2018076935A1 - Image blurring processing method and apparatus, mobile terminal and computer storage medium - Google Patents

Image blurring processing method and apparatus, mobile terminal and computer storage medium

Info

Publication number
WO2018076935A1
WO2018076935A1 · PCT/CN2017/100881
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
pixel point
depth value
point
Prior art date
Application number
PCT/CN2017/100881
Other languages
English (en)
French (fr)
Inventor
戴向东
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Publication of WO2018076935A1 publication Critical patent/WO2018076935A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The present invention relates to the field of image processing technologies and, in particular, to an image blurring processing method and apparatus, a mobile terminal, and a computer storage medium.
  • In daily life, users can use an SLR camera to take pictures with a background blur effect, so that the focus of the picture is concentrated on one subject. However, as professional equipment, an SLR camera has a higher cost, low popularity, and complicated operation, and professional knowledge is required to capture a good background bokeh picture.
  • The cost of ordinary digital equipment is relatively low, its popularity is high, and its operation is simple, but existing ordinary digital equipment cannot compete with an SLR camera in hardware, especially in the photosensitive element, so ordinary digital equipment cannot produce a layered background blur effect.
  • The embodiments of the present invention provide an image blurring processing method and apparatus, a mobile terminal, and a computer storage medium, aiming to solve the problem that existing ordinary digital devices cannot take pictures with a background blur effect.
  • An embodiment of the present invention provides an image blurring processing method, including: acquiring an image and a depth image corresponding to the image; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
  • In an embodiment, calculating the blur radius corresponding to the pixel point according to the depth value of the pixel point in the depth image includes: acquiring the depth values of the pixel points in the depth image; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to the pixel point.
  • In an embodiment, determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located includes: determining, according to the coordinates of the focus point selected when the depth image was acquired, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, blurring the pixel in the image corresponding to the position of the pixel point according to the blur radius corresponding to the pixel point includes: determining, in the depth image, the image outside the depth of field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, acquiring the image and the depth image corresponding to the image includes: calling a binocular camera to acquire the image and the depth image corresponding to the image; or calling a camera and a ranging sensor, acquiring the image through the camera and acquiring the depth image corresponding to the image through the ranging sensor.
  • In an embodiment, the method further includes: detecting an input operation, determining a focus point based on the input operation, and determining a depth of field region based on the focus point.
  • In an embodiment, the Gaussian model satisfies:
  • C = exp(-(d - dfocus)² / (2δ²))    (1)
  • R = ⌊Rmax - C·Rmax + 0.5⌋    (2)
  • where R is the blur radius; Rmax is the maximum value of the blur radius; C is the blur coefficient, in the range [0, 1]; d is the depth value; dfocus is the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R.
  • An embodiment of the present invention further provides an image blurring processing apparatus, including: an acquisition module configured to obtain an image and a depth image corresponding to the image; a calculation module configured to calculate, according to the depth value of a pixel point in the depth image obtained by the acquisition module, the blur radius corresponding to the pixel point; and a processing module configured to blur, according to the blur radius corresponding to the pixel point obtained by the calculation module, the pixel in the image obtained by the acquisition module corresponding to the position of the pixel point.
  • In an embodiment, the calculation module is configured to: acquire the depth values of the pixel points in the depth image; determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.
  • In an embodiment, the calculation module is configured to determine, according to the coordinates of the focus point selected when the depth image was obtained, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, the processing module is configured to: determine, in the depth image, the image outside the depth of field region as a background image; determine, in the image, the partial image corresponding to the position of the background image; and blur, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, the acquisition module is configured to: obtain the image and the depth image corresponding to the image by calling a binocular camera; or call a camera and a ranging sensor, obtaining the image through the camera and obtaining the depth image corresponding to the image through the ranging sensor.
  • In an embodiment, the apparatus further includes an input detection module configured to detect an input operation;
  • the processing module is configured to determine the focus point based on the input operation detected by the input detection module, and to determine the depth of field region based on the focus point.
  • In an embodiment, the Gaussian model satisfies:
  • C = exp(-(d - dfocus)² / (2δ²))    (1)
  • R = ⌊Rmax - C·Rmax + 0.5⌋    (2)
  • where R is the blur radius; Rmax is the maximum value of the blur radius; C is the blur coefficient, in the range [0, 1]; d is the depth value; dfocus is the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R.
  • An embodiment of the present invention further provides a mobile terminal, including:
  • a binocular camera configured to acquire an image and a depth image corresponding to the image; or
  • a camera configured to acquire an image, and
  • a ranging sensor configured to acquire a depth image corresponding to the image;
  • a memory storing an image blurring processing program; and
  • a processor configured to execute the image blurring processing program to perform the following operations: acquiring the image captured by the binocular camera or the camera, and the depth image, corresponding to the image, captured by the binocular camera or the ranging sensor; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: acquiring the depth values of the pixel points in the depth image; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to the pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operation: determining, according to the coordinates of the focus point selected when the depth image was acquired, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: determining, in the depth image, the image outside the depth of field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: detecting an input operation, determining a focus point based on the input operation, and determining a depth of field region based on the focus point.
  • An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions; the computer-executable instructions are used to execute the image blurring processing method described in the embodiments of the present invention.
  • With the image blurring processing method and apparatus, mobile terminal, and computer storage medium provided by the embodiments of the present invention, the background of a shot is blurred: the blur radius corresponding to each pixel point is calculated from the depth image, and the pixel at the corresponding position in the clear image is blurred using that blur radius. This achieves rapid background blurring of the image, realizes the background bokeh effect, and improves the user's operating experience.
  • FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing various embodiments of the present invention;
  • FIG. 2 is a flowchart of an image blurring processing method according to a first embodiment of the present invention;
  • FIG. 3 is a flowchart of the steps of calculating the blur radius according to a second embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the imaging principle according to the second embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the relationship between the depth value and the blur radius according to the second embodiment of the present invention;
  • FIG. 6-1 is a graph of the relationship between the depth value and the blur coefficient when the depth value of the focus point is 15 according to a third embodiment of the present invention;
  • FIG. 6-2 is a graph of the relationship between the depth value and the blur radius when the depth value of the focus point is 15 according to the third embodiment of the present invention;
  • FIG. 7-1 is a graph of the relationship between the depth value and the blur coefficient when the depth value of the focus point is 70 according to the third embodiment of the present invention;
  • FIG. 7-2 is a graph of the relationship between the depth value and the blur radius when the depth value of the focus point is 70 according to the third embodiment of the present invention;
  • FIG. 8-1 is a graph of the relationship between the depth value and the blur coefficient when the depth value of the focus point is 110 according to the third embodiment of the present invention;
  • FIG. 8-2 is a graph of the relationship between the depth value and the blur radius when the depth value of the focus point is 110 according to the third embodiment of the present invention;
  • FIG. 9 is a background blurred image when the depth value of the focus point is 15 according to the third embodiment of the present invention;
  • FIG. 10 is a background blurred image when the depth value of the focus point is 70 according to the third embodiment of the present invention;
  • FIG. 11 is a background blurred image when the depth value of the focus point is 110 according to the third embodiment of the present invention;
  • FIG. 12 is a structural diagram of an image blurring processing apparatus according to a fourth embodiment of the present invention.
  • The mobile terminal can be implemented in various forms.
  • For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), a navigation device, and the like, and fixed terminals such as a digital TV, a desktop computer, and the like.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that, except for elements specifically used for mobile purposes, configurations in accordance with embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram showing the hardware structure of an optional mobile terminal embodying various embodiments of the present invention.
  • The mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • FIG. 1 shows a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of an EPG (Electronic Program Guide) of DMB (Digital Multimedia Broadcasting), an ESG (Electronic Service Guide) of DVB-H (Digital Video Broadcasting Handheld), or the like.
  • The broadcast receiving module 111 can receive signal broadcasts via various types of broadcast systems.
  • In particular, the broadcast receiving module 111 can receive digital broadcasts via digital broadcast systems such as DMB-T (Digital Multimedia Broadcasting-Terrestrial), DMB-S (Digital Multimedia Broadcasting-Satellite), DVB-H (Digital Video Broadcasting-Handheld), the MediaFLO (Forward Link Only) data broadcast system, ISDB-T (Integrated Services Digital Broadcasting-Terrestrial), and the like.
  • The broadcast receiving module 111 can be configured to be suitable for the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • The wireless internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Examples of short-range communication technologies include Bluetooth™, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • As the location information module 115, the GPS module calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • Currently, the method for calculating position and time information uses three satellites and corrects errors in the calculated position and time information by using another satellite. Further, the GPS module can calculate speed information by continuously calculating the current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • In the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 and then output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a rocker, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, and acceleration or deceleration movement of the mobile terminal 100, and generates commands or signals for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • The external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • The identification module may store various information for verifying the user's use of the mobile terminal 100 and may include a UIM (User Identity Module), a SIM (Subscriber Identity Module), a USIM (Universal Subscriber Identity Module), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • In addition, the interface unit 170 may serve as a path through which power is supplied from a base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal 100.
  • Various command signals or power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • The output unit 150 can include the display unit 151, the audio output module 152, the alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 can display a UI (User Interface) or GUI (Graphical User Interface) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of an LCD (Liquid Crystal Display), a TFT-LCD (Thin Film Transistor LCD), an OLED (Organic Light Emitting Diode) display, a flexible display, a 3D (three-dimensional) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays. A typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display, or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • The alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 can provide output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 can also provide an output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • For hardware implementation, the embodiments described herein may be implemented by using at least one of an ASIC (Application Specific Integrated Circuit), a DSP (Digital Signal Processor), a DSPD (Digital Signal Processing Device), a PLD (Programmable Logic Device), an FPGA (Field Programmable Gate Array), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • For software implementation, implementations such as procedures or functions may be implemented with separate software modules, each of which performs at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in memory 160 and executed by controller 180.
  • So far, the mobile terminal has been described in terms of its functions.
  • Hereinafter, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals, such as folding, bar, swing, and slide types, is described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • This embodiment provides an image blurring processing method.
  • The execution subject of this embodiment is a digital device.
  • The digital device includes a mobile terminal having a camera function; the structure of the mobile terminal has been described with reference to FIG. 1 and is not repeated here. Of course, the digital device of this embodiment may also be a digital camera.
  • FIG. 2 is a flow chart of an image blurring processing method according to a first embodiment of the present invention.
  • Step S210: acquire an image and a depth image corresponding to the image.
  • Here, the captured image refers to a normally captured image; the values of its pixels represent hue, saturation, and brightness.
  • A depth image refers to an image in which the depth value of each point in the scene is used as the pixel value.
  • The depth image corresponds to three-dimensional physical distances in the shooting scene and can directly reflect the geometry of the visible surfaces in the scene.
  • The larger the depth value, the closer the object is to the camera; the smaller the depth value, the farther the object is from the camera.
  • Optionally, a binocular camera may be called to acquire the image and the depth image corresponding to the image; or a camera and a ranging sensor are called, the image is captured by the camera, and the depth image corresponding to the image is captured by the ranging sensor.
  • The ranging sensor may be a laser ranging sensor.
  • The laser ranging sensor measures the distances of objects and performs imaging to obtain the depth image.
  • This embodiment can also acquire the image and/or the corresponding depth image through a stereo camera.
  • the image and the depth image correspond to the same scene, so the pixel points of the image and the depth image correspond to the same position in the scene.
  • Step S220: calculate, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point.
  • The blur radius corresponding to each pixel point is calculated according to the depth value of that pixel point in the depth image.
  • Optionally, a mathematical model may be pre-built; the blur radius corresponding to each pixel point is calculated using the pre-built mathematical model and the depth value of each pixel point in the depth image.
  • The mathematical model is, for example, a Gaussian model.
  • Step S230: blur, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
  • Since the pixel positions of the depth image and of the image correspond to each other, the pixels in the image are blurred according to the blur radius corresponding to the pixel point at the corresponding position in the depth image.
  • For example, if pixel point A in the depth image corresponds to the position of pixel point a in the image, then pixel point a is blurred using the blur radius corresponding to pixel point A.
  • Optionally, the image outside the depth of field range is determined in the depth image as the background image; in the image, the partial image corresponding to the position of the background image is determined as the image to be blurred; and, using the blur radius corresponding to each pixel point in the background image, the pixel in the image to be blurred corresponding to the position of that pixel point is blurred.
  • Optionally, the clear image can be blurred using a preset processing algorithm, for example a Gaussian blur algorithm. As another example, taking the pixel to be processed as the center point, the average value of the pixels within the range of the blur radius is calculated and used as the value of the center pixel; a blur radius of 1 indicates the pixel to be processed itself.
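  • As a minimal illustration of the averaging variant just described, the following Python sketch (the helper name and array conventions are assumptions, not from the patent) replaces each pixel with the mean of the pixels within its blur radius; a radius of 1 leaves the pixel untouched:

```python
import numpy as np

def blur_with_radius_map(image, radius_map):
    """Sketch: average each pixel over a square window of its blur radius.

    image: HxW or HxWx3 array; radius_map: HxW integer array of blur radii.
    A blur radius of 1 means the pixel itself, i.e., no blurring is applied.
    """
    h, w = radius_map.shape
    out = image.astype(np.float64)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            if r <= 1:
                continue  # within the depth of field: keep the pixel sharp
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out.astype(image.dtype)
```

  • This naive per-pixel loop costs O(h·w·r²); a production implementation would more likely use integral images or one separable blur per radius level, but the sketch shows how a per-pixel blur radius drives the output.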
  • In this embodiment, to blur the background of the shot, the depth image is used to calculate the blur radius corresponding to each pixel point, and the pixel at the corresponding position in the image is blurred according to that blur radius.
  • In other words, the image to be blurred (the background of the subject) is determined in the image by using the depth image, and it is blurred using the blur radii corresponding to the pixel points calculated from the depth image.
  • FIG. 3 is a flow chart of the steps of calculating the blur radius according to a second embodiment of the present invention.
  • Step S310: in the depth image, obtain the depth values of the pixel points.
  • The value of a pixel point in the depth image is the depth value of that pixel point.
  • Step S320: among the depth values of the pixel points, determine the depth value of the pixel point where the focus point is located.
  • Optionally, the depth value of the pixel point at the coordinates of the focus point is determined among the depth values of the pixel points of the depth image, according to the coordinates of the focus point selected when the depth image was acquired.
  • For example, when the image is captured, the coordinates of the focus point selected by the user or by the digital device are recorded; after the depth image is obtained, the pixel point at those coordinates is looked up in the depth image, and that pixel point is the focus point.
  • The depth value of that pixel point is the depth value of the pixel point where the focus point is located.
  • Optionally, the minimum and maximum depth values are queried among the depth values of the pixel points to determine the depth value range.
  • The minimum depth value is generally 0, and the maximum depth value can be expressed as dmax.
  • Step S330: obtain the Gaussian model corresponding to the focus point according to the depth value of the pixel point where the focus point is located; the Gaussian model is a mapping relationship model between pixel points and blur radii.
  • Step S340: obtain the blur radius corresponding to the pixel point based on the Gaussian model.
  • Referring to FIG. 4, a schematic diagram of the imaging principle according to the second embodiment of the present invention:
  • L is the focus point;
  • ΔL is the depth of field range;
  • ΔL1 is the foreground depth of field;
  • ΔL2 is the back depth of field.
  • ΔL1 and ΔL2 may be fixed values or values obtained by experiment. In the depth image, based on the depth value of a pixel, it can be determined whether the pixel is within the depth of field range ΔL.
  • Generally, the focus point is set on the main subject that the image needs to highlight.
  • For example, when photographing a flower, the focus frame is set on the flower in the preview interface, so that the smartphone focuses on the flower as it shoots. Since the focus point is within the depth of field, the subject of the shot in the image is generally within the depth of field.
  • In an optional implementation, an input operation is detected, the focus point is determined based on the input operation, and the depth of field region is determined based on the focus point.
  • For example, the part that the user wants to highlight in the image may not be located in the focus frame; for instance, a certain corner of the image is to be used as the focus point. In that case, an input operation (e.g., a trigger operation) may be detected to determine an operation position; the operation position is used as the focus point, and the depth of field region is determined based on the focus point.
  • Within the depth of field range ΔL, the degree of clarity of each pixel in the partial image is the same; the image outside the depth of field range ΔL is blurred, and the farther a point is from the focus, the larger its blur radius and the less clear it is. Therefore, in this embodiment, the pixels outside the depth of field range ΔL in the clear image need to be blurred in order to achieve the background bokeh effect; a sketch of this selection follows below.
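  • For illustration, the selection of background pixels could be expressed as a boolean mask over the depth image (a sketch assuming the depth of field region spans [dfocus - ΔL1, dfocus + ΔL2] in depth-value terms; the names are hypothetical):

```python
import numpy as np

def background_mask(depth, d_focus, dl1, dl2):
    """Sketch: True where a pixel lies outside the depth of field.

    depth: HxW depth image; d_focus: depth value at the focus point;
    dl1, dl2: foreground and back depth of field (fixed or experimental values).
    """
    in_dof = (depth >= d_focus - dl1) & (depth <= d_focus + dl2)
    return ~in_dof  # background pixels, to be blurred
```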
  • Referring to FIG. 5, a schematic diagram of the relationship between the depth value and the blur radius according to the second embodiment of the present invention:
  • R represents the blur radius;
  • L is the focus point;
  • d (depth) represents the depth value;
  • ΔL1 = L - Lp;
  • ΔL2 = Ln - L.
  • Within the depth of field range ΔL, the blur radius R is 1 and no blur processing is performed, so the partial image within the depth of field range remains clear. Outside the depth of field range ΔL, the blur radius is R > 1 and blur processing is required, so the image outside the depth of field range is blurred; the farther a depth value is from that of the focus point, the larger the blur radius.
  • R > 1 defines a neighborhood centered on the pixel point; the surrounding pixels within that range take part in the blurring processing.
  • As shown in FIG. 5, the relationship between the depth value and the blur radius is an inverted Gaussian curve.
  • If the relationship curve between the depth value and the blur radius is flipped along the axis where the depth lies, a curve with a Gaussian distribution is obtained; therefore, in this embodiment, a Gaussian model can be constructed based on the relationship between the depth value and the blur radius.
  • The formulas of the specific Gaussian model are as follows:
  • C = exp(-(d - dfocus)² / (2δ²))    (1)
  • R = ⌊Rmax - C·Rmax + 0.5⌋    (2)
  • where Rmax is the maximum value of the blur radius; C represents the blur coefficient, with range [0, 1]; d represents the depth value; dfocus represents the depth value of the focus point and is the mean of the Gaussian curve; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down.
  • The variance δ in formula (1) is an empirical value or a value obtained by experiment; δ is related to the maximum depth value dmax and increases as dmax increases.
  • The δ corresponding to different maximum depth values dmax can be determined experimentally in advance, and when an image is captured, the corresponding δ can be selected directly according to the maximum depth value of the depth image.
  • The constant 0.5 in formula (2) is used for rounding Rmax - C·Rmax.
  • Rmax in formula (2) is an empirical value or a value obtained by experiment.
  • For example, one or more values of Rmax with a better blurring effect can be determined by iteratively adjusting Rmax and evaluating the resulting background-blurred images during experiments, and one of these Rmax values is selected by the user before the image is taken.
  • After Rmax and δ in the Gaussian model are set, since each depth image has a focus point, inputting the depth value of the focus point into the Gaussian model yields the Gaussian model corresponding to that focus point; the depth value of each pixel point is then input into the Gaussian model corresponding to the focus point, and the blur radius corresponding to that pixel point is output, as sketched below.
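  • A minimal sketch of this mapping, using formulas (1) and (2) as reconstructed above (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def blur_radius_map(depth, d_focus, r_max, delta):
    """Sketch: per-pixel blur radius from the inverse-Gaussian model.

    depth: HxW depth image; d_focus: depth value of the focus point;
    r_max: maximum blur radius; delta: variance of the Gaussian curve.
    """
    d = depth.astype(np.float64)
    c = np.exp(-((d - d_focus) ** 2) / (2.0 * delta ** 2))  # formula (1): C in [0, 1]
    r = np.floor(r_max - c * r_max + 0.5).astype(np.int32)  # formula (2)
    # Clamp radii below 1 up to 1 (no blurring), matching the stated
    # convention that R = 1 inside the depth of field -- an assumption here.
    return np.maximum(r, 1)
```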
  • In this embodiment, the depth values of the depth image range over [0, 120], Rmax is set to 14, the range of the blur coefficient is [0, 1], dfocus is set to 15, 70, and 110, respectively, and the depth of field range extends 20 in front of and behind the focus point.
  • A corresponding δ may be set for each selected value of dfocus; for example, δ is 15 when dfocus is 15, δ is 20 when dfocus is 70, and δ is 25 when dfocus is 110.
  • The δ corresponding to different values of dfocus can be determined in advance by experiment, and the corresponding δ can be selected directly during execution; a worked example follows below.
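  • As a worked example under these settings (an illustration computed from formulas (1) and (2), not taken from the patent's figures): with dfocus = 70, δ = 20, and Rmax = 14, a background pixel at depth d = 100 gives C = exp(-(100 - 70)² / (2·20²)) = exp(-1.125) ≈ 0.325, so R = ⌊14 - 0.325·14 + 0.5⌋ = ⌊9.95⌋ = 9; a pixel near the focus at d = 75 gives C ≈ 0.969 and ⌊14 - 13.57 + 0.5⌋ = 0, i.e., it falls within the depth of field and is left unblurred.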
  • In order to clearly illustrate the relationship between depth value and blur radius, this embodiment gives, for focus-point depth values of 15, 70, and 110 respectively, a graph of the relationship between the depth value and the blur coefficient, and a graph of the relationship between the depth value and the blur radius.
  • Figure 6-1 is a graph of the relationship between the depth value and the blur coefficient when the depth value of the focus point is 15; Figure 6-2 is a graph of the relationship between the depth value and the blur radius when the depth value of the focus point is 15.
  • Fig. 7-1 is a graph showing the relationship between the depth value and the blur coefficient when the depth value of the focus is 70; and Fig. 7-2 is a graph showing the relationship between the depth value and the blur radius when the depth value of the focus is 70.
  • FIG. 8-1 is a graph showing a relationship between a depth value and a blur coefficient when the depth value of the focus is 110; and FIG. 8-2 is a graph showing a relationship between the depth value and the blur radius when the depth value of the focus is 110.
  • As can be seen, the blur radii in Fig. 6-2, Fig. 7-2, and Fig. 8-2 are discretely distributed: the blur radius within the depth of field is 1, and outside the depth of field the blur radius is greater than 1.
  • Based on the graphs of the relationship between the depth value and the blur radius, the blur radius corresponding to each pixel point can be determined.
  • Using the blur radii determined for each of the three focus settings, the picture is then blurred:
  • FIG. 9 is the background blurred image when the depth value of the focus point is 15;
  • FIG. 10 is the background blurred image when the depth value of the focus point is 70;
  • FIG. 11 is the background blurred image when the depth value of the focus point is 110.
  • The circles in FIGS. 9 to 11 mark the positions of the focus points.
  • FIG. 12 shows the structure of an image blurring processing apparatus according to a fourth embodiment of the present invention.
  • The image blurring processing apparatus of this embodiment can be provided in a digital device, such as a mobile terminal.
  • The image blurring processing apparatus includes:
  • the acquisition module 1210, configured to obtain an image and a depth image corresponding to the image;
  • the calculation module 1220, configured to calculate, according to the depth value of a pixel point in the depth image obtained by the acquisition module 1210, the blur radius corresponding to the pixel point; and
  • the processing module 1230, configured to blur, according to the blur radius corresponding to the pixel point obtained by the calculation module 1220, the pixel in the image obtained by the acquisition module 1210 corresponding to the position of the pixel point.
  • In an embodiment, the calculation module 1220 is configured to: acquire the depth values of the pixel points in the depth image; determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.
  • In an embodiment, the calculation module 1220 is configured to determine, according to the coordinates of the focus point selected when the depth image was obtained, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, the processing module 1230 is configured to: determine, in the depth image, the image outside the depth of field region as a background image; determine, in the image, the partial image corresponding to the position of the background image; and blur, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, the acquisition module 1210 is configured to: obtain the image and the depth image corresponding to the image by calling a binocular camera; or call a camera and a ranging sensor, obtaining the image through the camera and obtaining the depth image corresponding to the image through the ranging sensor.
  • In an embodiment, the apparatus further includes an input detection module configured to detect an input operation;
  • the processing module 1230 is configured to determine the focus point based on the input operation detected by the input detection module, and to determine the depth of field region based on the focus point.
  • In an embodiment, the Gaussian model satisfies:
  • C = exp(-(d - dfocus)² / (2δ²))    (1)
  • R = ⌊Rmax - C·Rmax + 0.5⌋    (2)
  • where R is the blur radius; Rmax is the maximum value of the blur radius; C is the blur coefficient, in the range [0, 1]; d is the depth value; dfocus is the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R.
  • In practical applications, the image blurring processing apparatus may be implemented by a terminal device having a camera; the acquisition module 1210, the calculation module 1220, the processing module 1230, and the input detection module in the apparatus can each be implemented by a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an MCU (Micro Control Unit), or an FPGA (Field-Programmable Gate Array).
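  • For illustration only, the module decomposition could be wired together as below (the class and function names are hypothetical and merely mirror the acquisition/calculation/processing split described above):

```python
class ImageBlurringDevice:
    """Sketch: composes the three modules described in this embodiment."""

    def __init__(self, acquire, compute_radii, apply_blur):
        self.acquire = acquire              # acquisition module
        self.compute_radii = compute_radii  # calculation module
        self.apply_blur = apply_blur        # processing module

    def process(self, d_focus, r_max, delta):
        image, depth = self.acquire()       # image plus corresponding depth image
        radii = self.compute_radii(depth, d_focus, r_max, delta)
        return self.apply_blur(image, radii)

# e.g., ImageBlurringDevice(capture_pair, blur_radius_map, blur_with_radius_map),
# where capture_pair is a hypothetical camera-access function.
```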
  • It should be noted that the image blurring processing apparatus provided in the above embodiment is illustrated, when performing image processing, only by the division into the program modules described above; in practical applications, the above processing may be assigned to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to perform all or part of the processing described above.
  • In addition, the image blurring processing apparatus provided by the above embodiment and the embodiment of the image blurring processing method belong to the same concept; for details of its implementation process, refer to the method embodiment, which is not repeated here.
  • An embodiment of the present invention further provides a mobile terminal; the hardware structure of the mobile terminal is shown in FIG. 1.
  • The mobile terminal includes:
  • a binocular camera configured to acquire an image and a depth image corresponding to the image; or
  • a camera configured to acquire an image, and
  • a ranging sensor configured to acquire a depth image corresponding to the image;
  • a memory storing an image blurring processing program; and
  • a processor configured to execute the image blurring processing program to perform the following operations: acquiring the image captured by the binocular camera or the camera, and the depth image, corresponding to the image, captured by the binocular camera or the ranging sensor; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: acquiring, in the depth image, the depth values of the pixel points; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to each pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operation: determining, according to the coordinates of the focus point selected when the depth image was acquired, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: determining, in the depth image, the image outside the depth of field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: detecting an input operation, determining the focus point based on the input operation, and determining the depth of field region based on the focus point.
  • In an embodiment, the binocular camera or the camera can be implemented by the camera 121 in FIG. 1; the ranging sensor can be disposed in the mobile terminal (not shown in FIG. 1); the memory can be implemented by the memory 160 in FIG. 1; and the processor can be implemented by the controller 180 in FIG. 1.
  • An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to: acquire an image and a depth image corresponding to the image; calculate, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blur, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
  • In an embodiment, the computer-executable instructions are configured to: acquire, in the depth image, the depth values of the pixel points; determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.
  • In an embodiment, the computer-executable instructions are configured to: determine, according to the coordinates of the focus point selected when the depth image was acquired, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  • In an embodiment, the computer-executable instructions are configured to: determine, in the depth image, the image outside the depth of field region as a background image; determine, in the image, the partial image corresponding to the position of the background image; and blur, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
  • In an embodiment, the computer-executable instructions are configured to: detect an input operation, determine the focus point based on the input operation, and determine the depth of field region based on the focus point.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware; but in many cases, the former is the better implementation.
  • Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • In summary, the technical solution of the embodiments of the present invention uses the depth image to calculate the blur radius corresponding to each pixel point and, according to that blur radius, blurs the pixel at the corresponding position in the clear image, thereby achieving rapid background blurring of the image, realizing the background bokeh effect, and enhancing the user's operating experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image blurring processing method and apparatus, a mobile terminal, and a computer storage medium. The method includes: acquiring an image and a depth image corresponding to the image (S210); calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point (S220); and blurring, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point (S230). The method uses the depth image to calculate the blur radius corresponding to each pixel point and, according to that blur radius, blurs the pixel at the corresponding position in the clear image, so that background blurring can be applied to the image quickly, achieving a background bokeh effect.

Description

Image blurring processing method and apparatus, mobile terminal, and computer storage medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 201610926610.3, filed on October 31, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of image processing technologies and, in particular, to an image blurring processing method and apparatus, a mobile terminal, and a computer storage medium.
Background
In daily life, users can use an SLR camera to take pictures with a background blur effect, so that the focus of the picture is concentrated on one subject. However, as professional equipment, an SLR camera has a higher cost, low popularity, and complicated operation, and professional knowledge is required to capture a good background bokeh picture. At present, ordinary digital equipment has a relatively low cost, high popularity, and simple operation, but existing ordinary digital equipment cannot compete with an SLR camera in hardware, especially in the photosensitive element, so ordinary digital equipment cannot produce a layered background blur effect.
Summary
The embodiments of the present invention provide an image blurring processing method and apparatus, a mobile terminal, and a computer storage medium, aiming to solve the problem that existing ordinary digital devices cannot take pictures with a background blur effect.
In view of the above technical problem, the present invention solves it through the following technical solutions:
An embodiment of the present invention provides an image blurring processing method, including: acquiring an image and a depth image corresponding to the image; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel in the image corresponding to the position of the pixel point.
In an embodiment, calculating the blur radius corresponding to the pixel point according to the depth value of the pixel point in the depth image includes: acquiring the depth values of the pixel points in the depth image; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a mapping relationship model between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to the pixel point.
In an embodiment, determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located includes: determining, according to the coordinates of the focus point selected when the depth image was acquired, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
In an embodiment, blurring the pixel in the image corresponding to the position of the pixel point according to the blur radius corresponding to the pixel point includes: determining, in the depth image, the image outside the depth of field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring, using the blur radius corresponding to a pixel point in the background image, the pixel in the partial image corresponding to the position of that pixel point.
In an embodiment, acquiring the image and the depth image corresponding to the image includes: calling a binocular camera to acquire the image and the depth image corresponding to the image; or calling a camera and a ranging sensor, acquiring the image through the camera and acquiring the depth image corresponding to the image through the ranging sensor.
In an embodiment, the method further includes: detecting an input operation, determining a focus point based on the input operation, and determining a depth of field region based on the focus point.
In an embodiment, the Gaussian model satisfies:
C = exp(-(d - dfocus)² / (2δ²))    (1)
R = ⌊Rmax - C·Rmax + 0.5⌋    (2)
where R represents the blur radius; Rmax is the maximum value of the blur radius; d represents the depth value; dfocus represents the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R;
wherein a different δ is configured based on different values of dfocus.
An embodiment of the present invention further provides an image blurring processing apparatus, including: a collection module configured to obtain an image and a depth image corresponding to the image; a calculation module configured to calculate, according to the depth value of a pixel point in the depth image obtained by the collection module, the blur radius corresponding to the pixel point; and a processing module configured to blur, according to the blur radius corresponding to the pixel point obtained by the calculation module, the pixel point at the corresponding position in the image obtained by the collection module.

In an embodiment, the calculation module is configured to: obtain, in the depth image, the depth values of the pixel points; determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.

In an embodiment, the calculation module is configured to determine, according to the coordinates of the focus point selected when the depth image was obtained, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.

In an embodiment, the processing module is configured to: determine, in the depth image, the image outside the depth-of-field region as a background image; determine, in the image, the partial image corresponding to the position of the background image; and blur the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.

In an embodiment, the collection module is configured to obtain the image and the depth image corresponding to the image by invoking a binocular camera; or to invoke a camera and a ranging sensor, obtain the image through the camera, and obtain the depth image corresponding to the image through the ranging sensor.

In an embodiment, the apparatus further includes an input detection module configured to detect an input operation;

the processing module is configured to determine a focus point based on the input operation detected by the input detection module, and determine a depth-of-field region based on the focus point.
In an embodiment, the Gaussian model satisfies:

$$R = \left\lfloor R_{max} - R_{max} \cdot e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} + 0.5 \right\rfloor$$

where R denotes the blur radius; Rmax is the maximum blur radius; d denotes a depth value; dfocus denotes the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R.

A different δ is configured based on a different dfocus.
An embodiment of the present invention further provides a mobile terminal, including:

a binocular camera configured to collect an image and a depth image corresponding to the image; or

a camera configured to collect an image, and

a ranging sensor configured to collect a depth image corresponding to the image;

a memory storing an image blurring processing program; and

a processor configured to execute the image blurring processing program to perform the following operations: acquiring the image collected by the binocular camera or the camera, and the depth image, corresponding to the image, collected by the binocular camera or the ranging sensor; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.

In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: obtaining, in the depth image, the depth values of the pixel points; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to each pixel point.

In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: determining, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.

In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: determining, in the depth image, the image outside the depth-of-field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.

In an embodiment, the processor is configured to execute the image blurring processing program to perform the following operations: detecting an input operation, determining a focus point based on the input operation, and determining a depth-of-field region based on the focus point.

An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing the image blurring processing method described in the embodiments of the present invention.

With the image blurring processing method and apparatus, mobile terminal, and computer storage medium provided by the embodiments of the present invention, the background of a shot is blurred: the depth image is used to calculate the blur radius corresponding to each pixel point, and according to that blur radius the pixel at the corresponding position in the sharp image is blurred, achieving fast background blurring of the image, realizing the background blur effect and improving the user's operating experience.
Brief description of the drawings

Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention;

Fig. 2 is a flowchart of an image blurring processing method according to a first embodiment of the present invention;

Fig. 3 is a flowchart of the steps for calculating a blur radius according to a second embodiment of the present invention;

Fig. 4 is a schematic diagram of the imaging principle according to the second embodiment of the present invention;

Fig. 5 is a schematic diagram of the curve relating depth value and blur radius according to the second embodiment of the present invention;

Fig. 6-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 15, according to a third embodiment of the present invention;

Fig. 6-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 15, according to the third embodiment of the present invention;

Fig. 7-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 70, according to the third embodiment of the present invention;

Fig. 7-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 70, according to the third embodiment of the present invention;

Fig. 8-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 110, according to the third embodiment of the present invention;

Fig. 8-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 110, according to the third embodiment of the present invention;

Fig. 9 is a background-blurred image when the depth value of the focus point is 15, according to the third embodiment of the present invention;

Fig. 10 is a background-blurred image when the depth value of the focus point is 70, according to the third embodiment of the present invention;

Fig. 11 is a background-blurred image when the depth value of the focus point is 110, according to the third embodiment of the present invention;

Fig. 12 is a structural diagram of an image blurring processing apparatus according to a fourth embodiment of the present invention.

The realization of the objects, the functional features, and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description

It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.

Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; therefore "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, constructions according to embodiments of the present invention can also be applied to fixed types of terminals.

Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.

The wireless communication unit 110 typically includes one or more components that permit radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.

The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like; moreover, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an EPG (electronic program guide) of DMB (digital multimedia broadcasting), an ESG (electronic service guide) of DVB-H (digital video broadcasting-handheld), and so on. The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcast systems such as DMB-T (digital multimedia broadcasting-terrestrial), DMB-S (digital multimedia broadcasting-satellite), DVB-H (digital video broadcasting-handheld), the data broadcast system MediaFLO@ (forward link media), and ISDB-T (integrated services digital broadcasting-terrestrial). The broadcast receiving module 111 may be constructed to suit various broadcast systems providing broadcast signals as well as the above digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).

The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.

The wireless internet module 113 supports wireless internet access of the mobile terminal and may be internally or externally coupled to the terminal. The wireless internet access technologies involved may include WLAN (wireless LAN) (Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access), and the like.

The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth (TM), RFID (radio frequency identification), IrDA (infrared data association), UWB (ultra wideband), ZigBee (TM), and the like.

The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system). According to one implementation, the GPS module serving as the location information module 115 calculates distance information from three or more satellites and accurate time information, and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, the method used for calculating location and time information uses three satellites and corrects the error of the calculated location and time information by using one additional satellite. Furthermore, the GPS module can calculate speed information by continuously calculating the current location information in real time.

The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted for output into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.

The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and so on caused by being contacted), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen may be formed.

The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of the user's contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled to an external device.

The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating the user's use of the mobile terminal 100 and may include a UIM (user identity module), a SIM (subscriber identity module), a USIM (universal subscriber identity module), and the like. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card; therefore, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.

In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals in a visual, audio, and/or tactile manner (e.g., audio signals, video signals, alarm signals, vibration signals, and so on). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a UI (user interface) or GUI (graphical user interface) related to a call or other communication (e.g., text messaging, multimedia file downloading, and so on). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and the like.

Meanwhile, when the display unit 151 and the touch pad are superposed on each other in the form of layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of an LCD (liquid crystal display), a TFT-LCD (thin film transistor LCD), an OLED (organic light-emitting diode) display, a flexible display, a 3D (three-dimensional) display, and the like. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may, for example, be a TOLED (transparent organic light-emitting diode) display. According to a particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.

The audio output module 152 may, when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 may provide audio output related to a particular function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, and so on). The audio output module 152 may include a speaker, a buzzer, and the like.

The alarm unit 153 may provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify of the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.

The memory 160 may store software programs and the like for the processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, and so on). Moreover, the memory 160 may store data about the various kinds of vibration and audio signals output when a touch is applied to the touch screen.

The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory and so on), a RAM (random access memory), an SRAM (static random access memory), a ROM (read-only memory), an EEPROM (electrically erasable programmable read-only memory), a PROM (programmable read-only memory), a magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.

The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.

The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the various elements and components.

The various implementations described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementation, the implementations described here may be realized using at least one of an ASIC (application-specific integrated circuit), a DSP (digital signal processor), a DSPD (digital signal processing device), a PLD (programmable logic device), an FPGA (field-programmable gate array), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described here; in some cases such implementations may be realized in the controller 180. For software implementation, implementations such as processes or functions may be realized with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.

So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals, will be described as an example. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Various embodiments of the present invention are proposed below based on the above mobile terminal hardware structure.

Embodiment one

This embodiment provides an image blurring processing method. The execution subject of this embodiment is a digital device. The digital device includes a mobile terminal with a camera function, whose structure has already been described with reference to Fig. 1 and is not repeated here. Of course, the digital device of this embodiment may also be a digital camera.

Fig. 2 is a flowchart of the image blurring processing method according to the first embodiment of the present invention.

Step S210: collect an image and a depth image corresponding to the image.

Here, the collected image refers to a normally captured image. The value of a pixel point of the image (the pixel) consists of hue, saturation, and brightness.

A depth image is an image that takes the depth values of the pixel points in the scene as its pixel values; the depth image corresponds to the three-dimensional physical distances in the captured scene and can directly reflect the geometry of the visible surfaces of the scenery. In the depth image, a larger depth value indicates that an object is closer to the camera, and a smaller depth value indicates that an object is farther from the camera.

In one implementation, a binocular camera may be invoked to collect the image and/or the corresponding depth image; or a camera and a ranging sensor may be invoked, the image being collected through the camera and the corresponding depth image through the ranging sensor. The ranging sensor may be a laser ranging sensor, which measures object distances and forms an image from them, thereby obtaining the depth image. Of course, this embodiment may also collect the image and/or the corresponding depth image through a stereo camera.

In this embodiment, the image and the depth image correspond to the same scene, so the pixel points of the image and of the depth image correspond to the same positions in the scene.
Step S220: calculate, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point.

According to the depth value of each pixel point in the depth image, the blur radius corresponding to each pixel point is calculated. As one implementation, a mathematical model may be constructed in advance, and the blur radius corresponding to each pixel point is calculated through this pre-constructed model and the depth value of each pixel point in the depth image. The mathematical model is, for example, a Gaussian model.

Step S230: blur, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.

Since the depth image and the image correspond to the same scene, their pixel-point positions correspond; therefore, according to the blur radius corresponding to each pixel point in the depth image, the pixel point at the corresponding position in the image is blurred.

For example, if pixel point A in the depth image corresponds in position to pixel point a in the image, pixel point a is blurred using the blur radius corresponding to pixel point A.

As one implementation, the image outside the depth-of-field range is determined in the depth image as a background image; in the image, the partial image corresponding to the position of the background image is determined as the image to be blurred; and the pixel points in the image to be blurred at the corresponding positions are blurred using the blur radii corresponding to the pixel points in the background image. In other words, according to the blur radius corresponding to each pixel point in the background image, the pixel point at the corresponding position in the image to be blurred is blurred.

A preset processing algorithm may be used to blur the sharp image, for example a Gaussian blur algorithm. Alternatively, with the pixel point to be processed as the center point, the average of the pixels within the blur radius is computed and taken as the pixel of that center point; a blur radius of 1 denotes the pixel point to be processed itself.

In this embodiment, in order to highlight the photographed subject in the image and blur the photographed background, the depth image is used to calculate the blur radius corresponding to each pixel point, and according to that blur radius the pixel point at the corresponding position in the image is blurred, achieving an efficient and fast background blur effect on the captured image. Specifically, the depth image is used to determine the image to be blurred (the subject's background) within the image, and that partial image is blurred using the blur radii calculated from the depth image. Applying this embodiment in an ordinary digital device (mobile terminal) makes it possible to obtain a background-blurred image at capture time; the operation is simple, the cost is low, no professional photography knowledge is required, and the user experience is good.
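As an illustration of steps S220 to S230, the following is a minimal Python sketch of the variable-radius mean blur described above, assuming a per-pixel radius map has already been computed (Embodiment two below shows one way to obtain it). The function name blur_with_radius_map and the brute-force looping are illustrative choices, not part of the claimed method:

```python
import numpy as np

def blur_with_radius_map(image, radius_map):
    """Blur each pixel by averaging the pixels inside a square window whose
    half-size is that pixel's blur radius; a radius of 1 keeps the pixel as-is."""
    img = image.astype(np.float32)
    h, w = radius_map.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            if r <= 1:                      # in-focus pixel: left untouched
                continue
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean(axis=(0, 1))  # mean within the radius
    return out.astype(image.dtype)
```

The nested loops run in O(N·R²) time and are only meant to make the semantics explicit; a production implementation would bucket pixels by radius and apply a fast box or Gaussian filter per bucket.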
Embodiment two

The step of calculating the blur radius is further described below. Fig. 3 is a flowchart of the steps for calculating the blur radius according to the second embodiment of the present invention.

Step S310: obtain, in the depth image, the depth values of the pixel points.

Here, the value of a pixel point in the depth image is the depth value of that pixel point.

Step S320: determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located.

Here, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point is determined among the depth values of the pixel points of the depth image. As one implementation, when the depth image is collected, the coordinate values of the focus point selected by the user or by the digital device are recorded; after the depth image is obtained, the pixel point at those coordinate values is looked up in the depth image. That pixel point is the pixel point where the focus point is located, and its depth value is the depth value of the pixel point where the focus point is located.

Among the depth values of all pixel points, the minimum and maximum depth values are looked up, and the depth value range is thereby determined. The minimum depth value is generally 0, and the maximum depth value may be denoted dmax.

Step S330: obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point; the Gaussian model is a model of the mapping between pixel points and blur radii.

Step S340: obtain, based on the Gaussian model, the blur radius corresponding to each pixel point.

Specifically, Fig. 4 is a schematic diagram of the imaging principle according to the second embodiment of the present invention.

In Fig. 4, L is the focus point, ΔL is the depth-of-field range, ΔL1 is the front depth of field, and ΔL2 is the rear depth of field. ΔL1 and ΔL2 may be fixed values or values obtained through experiments. In the depth image, according to the depth value of a pixel point, it can be determined whether that pixel point lies within the depth-of-field range ΔL.

Generally, when capturing an image, the focus point is set on the subject that the image should highlight. For example, when a user shoots flowers in a natural environment with a smartphone, the user places the focus frame on the flowers in the preview interface, so that the smartphone focuses on the flowers when shooting. Since the focus point lies within the depth-of-field range, the photographed subject in the image generally lies within the depth-of-field range. As another implementation, an input operation is detected, a focus point is determined based on the input operation, and a depth-of-field region is determined based on that focus point. In this implementation, the focus point the user wants to highlight in the image need not lie within the focus frame; for example, some corner of the image may serve as the focus point. An operation position can then be determined through an input operation (for example, a trigger operation), that operation position is taken as the focus point, and the depth-of-field region is determined based on that focus point.

In this embodiment, in order to display the background of the image blurred while displaying the photographed subject sharply, the partial image within the depth-of-field range ΔL of the collected sharp image must be displayed sharply, with every pixel point in that partial image equally sharp, while the image outside the depth-of-field range ΔL is displayed blurred; the farther from the focus point, the larger the blur radius and the less sharp the image. Therefore, in this embodiment, the pixel points of the sharp image outside the depth-of-field range ΔL need to be blurred in order to achieve the background blur effect, as in the sketch below.
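A minimal sketch of how the depth-of-field region could be derived once the focus point's depth value is known, assuming fixed front and rear margins ΔL1 and ΔL2 (the 10-unit defaults below are the simulation values from Embodiment three, and the function name is an illustrative choice):

```python
import numpy as np

def depth_of_field_mask(depth, d_focus, dl1=10, dl2=10):
    """Mark pixels whose depth lies in [d_focus - dl1, d_focus + dl2] as the
    in-focus (depth-of-field) region; the complement is the background image."""
    in_focus = (depth >= d_focus - dl1) & (depth <= d_focus + dl2)
    return in_focus, ~in_focus      # ~in_focus selects the pixels to be blurred
```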
From the above analysis, Fig. 5 is a schematic diagram of the curve relating depth value and blur radius according to the second embodiment of the present invention. In Fig. 5, R denotes the blur radius; L is the focus point; d (depth) denotes the depth value; ΔL1 = L − Lp; ΔL2 = Ln − L. It can be seen in Fig. 5 that within the depth-of-field range ΔL the blur radius R is 1 and no blurring is needed, so the partial image within the depth-of-field range is sharp; outside the depth-of-field range ΔL the blur radius R > 1 and blurring is needed, so the image outside the depth-of-field range is blurred, and the farther the depth value is from the focus point, the larger the blur radius R and the more blurred the image. Here R = 1 denotes the pixel point itself, so the pixel point itself needs no blurring, while R > 1 denotes a range centered on the pixel point that involves the surrounding pixel points, within which blurring needs to be performed.

From the curve relating depth value and blur radius in Fig. 5, it can be seen that this curve is an inverted Gaussian curve; in other words, flipping the curve along the depth axis yields a Gaussian-shaped curve. In this embodiment, therefore, a Gaussian model can be constructed from the relation between depth value and blur radius. The specific formula of the Gaussian model is as follows:

$$R = \left\lfloor R_{max} - R_{max} \cdot e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} + 0.5 \right\rfloor$$

The above Gaussian model can be rewritten as the following formulas (1) and (2):

$$C = e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} \qquad (1)$$

$$R = \left\lfloor R_{max} - C \times R_{max} + 0.5 \right\rfloor \qquad (2)$$

where Rmax is the maximum blur radius; C denotes the blur coefficient, whose range is [0, 1]; d denotes a depth value; dfocus denotes the depth value of the focus point and is the mean of the Gaussian curve; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ is the round-down (floor) symbol, used to obtain an integer value of R, the blur radius.

The δ in formula (1) is an empirical value or a value obtained through experiments; δ is related to the maximum depth value dmax and increases as dmax increases. The δ corresponding to different maximum depth values dmax can be determined in advance through experiments, and when an image is captured the corresponding δ is selected directly according to the maximum depth value of the depth image.

The constant 0.5 in formula (2) serves to round Rmax − C × Rmax to the nearest integer. For example, with Rmax = 14, δ = 20, and d − dfocus = 40, formula (1) gives C = e⁻² ≈ 0.135, and formula (2) then gives R = ⌊14 − 1.9 + 0.5⌋ = 12.

Rmax in formula (2) is an empirical value or a value obtained through experiments. During experiments, the effect of the background-blurred image can be checked while repeatedly adjusting Rmax, so that one or more values of Rmax with a good blur effect are determined; before capturing an image, the user selects one of these Rmax values.

After the depth image is collected, Rmax and δ in the Gaussian model are set. Since each depth image has one focus point, inputting that focus point into the Gaussian model yields the Gaussian model corresponding to the focus point; inputting the depth value of each pixel point of the depth image into this Gaussian model then outputs the blur radius corresponding to that pixel point.
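A minimal Python sketch of this computation, assuming the depth image is a NumPy array and the focus coordinates were recorded at capture time; the name blur_radius_map is illustrative, and the final clamp reflects the convention above that R = 1 means the pixel itself is left unblurred:

```python
import numpy as np

def blur_radius_map(depth, focus_xy, r_max=14, delta=20.0):
    """Per-pixel blur radius via formulas (1) and (2):
    C = exp(-(d - d_focus)^2 / (2*delta^2)), R = floor(r_max - C*r_max + 0.5)."""
    d = depth.astype(np.float32)
    x, y = focus_xy                        # focus coordinates recorded at capture time
    d_focus = d[y, x]                      # depth value of the focus point (step S320)
    c = np.exp(-(d - d_focus) ** 2 / (2.0 * delta ** 2))    # blur coefficient C in [0, 1]
    r = np.floor(r_max - c * r_max + 0.5).astype(np.int32)  # formula (2), rounded down
    return np.maximum(r, 1)                # R = 1 already means "no blurring"
```

Feeding this radius map, restricted to the background mask sketched earlier, into the variable-radius blur of Embodiment one yields the overall background blur pipeline.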
Embodiment three

Three simulation examples are given below to illustrate the relation between depth value and blur coefficient, and between depth value and blur radius, in the present invention.

In this embodiment, the depth value range of the depth image is [0, 120], Rmax is set to 14, the range of the blur coefficient is [0, 1], dfocus is set to 15, 70, and 110 respectively, the depth-of-field range covers 10 units before and 10 units after the focus point dfocus, and δ may be set to 20.

As one implementation, a corresponding δ may be set for each value of dfocus: for example, δ is 15 when dfocus is 15, δ is 20 when dfocus is 70, and δ is 25 when dfocus is 110. The δ corresponding to different dfocus values can be determined in advance through experiments, and at run time the corresponding δ is selected directly, as in the sketch below.
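A minimal sketch of this per-focus-depth configuration using the example values above; the lookup table, the function name, and the fallback δ = 20 are illustrative assumptions:

```python
import numpy as np

# Example delta per focus depth, from the values given in the text.
DELTA_BY_DFOCUS = {15: 15.0, 70: 20.0, 110: 25.0}

def simulated_radii(d_focus, r_max=14, d_max=120):
    """Blur radius over the full depth range [0, d_max] for one focus depth,
    as plotted in Figs. 6-2, 7-2, and 8-2."""
    delta = DELTA_BY_DFOCUS.get(d_focus, 20.0)              # fall back to delta = 20
    d = np.arange(0, d_max + 1, dtype=np.float32)
    c = np.exp(-(d - d_focus) ** 2 / (2.0 * delta ** 2))    # formula (1)
    return np.floor(r_max - c * r_max + 0.5).astype(int)    # formula (2)

# e.g. simulated_radii(70) stays near 0-1 around depth 70 and rises toward 14 far away
```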
Based on formulas (1) and (2) given in Embodiment two, this embodiment gives graphs of the relation between depth value and blur coefficient, and between depth value and blur radius, when the depth value of the focus point is 15, 70, and 110, respectively.

Fig. 6-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 15; Fig. 6-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 15.

Fig. 7-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 70; Fig. 7-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 70.

Fig. 8-1 is a graph of the relation between depth value and blur coefficient when the depth value of the focus point is 110; Fig. 8-2 is a graph of the relation between depth value and blur radius when the depth value of the focus point is 110.

Figs. 6-1, 7-1, and 8-1 show that the curve relating depth value and blur coefficient is a Gaussian curve. Figs. 6-2, 7-2, and 8-2 show that the curve relating depth value and blur radius is an inverted Gaussian curve.

Because rounding down is used when calculating the blur radius, the blur radii in Figs. 6-2, 7-2, and 8-2 are discretely distributed: the blur radius within the depth-of-field range is 1, and the blur radius outside the depth-of-field range is greater than 1.

Since the curve relating depth value and blur radius gives the correspondence between the two, once the depth value of a pixel point is obtained, the blur radius corresponding to that pixel point can be determined. Blurring pictures according to the curves of Figs. 6-2, 7-2, and 8-2 gives: Fig. 9, the background-blurred image when the depth value of the focus point is 15; Fig. 10, the background-blurred image when the depth value of the focus point is 70; and Fig. 11, the background-blurred image when the depth value of the focus point is 110. The circles in Figs. 9 to 11 mark the position of the focus point.
Embodiment four

This embodiment provides an image blurring processing apparatus. Fig. 12 is a structural diagram of the image blurring processing apparatus according to the fourth embodiment of the present invention. The image blurring processing apparatus of this embodiment may be provided in a digital device (such as a mobile terminal).

The image blurring apparatus includes:

a collection module 1210 configured to obtain an image and a depth image corresponding to the image;

a calculation module 1220 configured to calculate, according to the depth value of a pixel point in the depth image obtained by the collection module 1210, the blur radius corresponding to the pixel point; and

a processing module 1230 configured to blur, according to the blur radius corresponding to the pixel point obtained by the calculation module 1220, the pixel point at the corresponding position in the image obtained by the collection module 1210.

In one embodiment, the calculation module 1220 is configured to obtain, in the depth image, the depth values of the pixel points; determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtain, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.

In another embodiment, the calculation module 1220 is configured to determine, according to the coordinates of the focus point selected when the depth image was obtained, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.

In yet another embodiment, the processing module 1230 is configured to determine, in the depth image, the image outside the depth-of-field region as a background image; determine, in the image, the partial image corresponding to the position of the background image; and blur the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.

In still another embodiment, the collection module 1210 is configured to obtain the image and the corresponding depth image by invoking a binocular camera; or to invoke a camera and a ranging sensor, obtain the image through the camera, and obtain the corresponding depth image through the ranging sensor.

As one implementation, the apparatus further includes an input detection module configured to detect an input operation;

the processing module 1230 is configured to determine a focus point based on the input operation detected by the input detection module, and determine a depth-of-field region based on the focus point.
As one implementation, the Gaussian model satisfies:

$$R = \left\lfloor R_{max} - R_{max} \cdot e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} + 0.5 \right\rfloor$$

where R denotes the blur radius; Rmax is the maximum blur radius; d denotes a depth value; dfocus denotes the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R.

A different δ is configured based on a different dfocus.
The functions of the apparatus described in this embodiment have already been described in the embodiments shown in Figs. 1 to 11; for anything not covered in detail in the description of this embodiment, reference may be made to the relevant descriptions in the foregoing embodiments, which are not repeated here.

In the embodiments of the present invention, the image blurring processing apparatus may in practice be implemented by a terminal device having a camera; the collection module 1210, the calculation module 1220, the processing module 1230, and the input detection module of the apparatus may in practice each be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA).

It should be noted that, when the image blurring processing apparatus provided by the above embodiments performs image processing, the division into the above program modules is used only as an example; in practical applications, the above processing may be assigned to different program modules as needed, i.e., the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the image blurring processing apparatus provided by the above embodiments belongs to the same concept as the image blurring processing method embodiments; its specific implementation process is detailed in the method embodiments and is not repeated here.
Embodiment five

An embodiment of the present invention further provides a mobile terminal; for the hardware composition of the mobile terminal, reference may be made to Fig. 1. Specifically, the mobile terminal includes:

a binocular camera configured to collect an image and a depth image corresponding to the image; or

a camera configured to collect an image, and

a ranging sensor configured to collect a depth image corresponding to the image;

a memory storing an image blurring processing program; and

a processor configured to execute the image blurring processing program to perform the following operations: acquiring the image collected by the binocular camera or the camera, and the depth image, corresponding to the image, collected by the binocular camera or the ranging sensor; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.

As one implementation, the processor is configured to execute the image blurring processing program to perform the following operations: obtaining, in the depth image, the depth values of the pixel points; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to each pixel point.

As one implementation, the processor is configured to execute the image blurring processing program to perform the following operations: determining, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.

As one implementation, the processor is configured to execute the image blurring processing program to perform the following operations: determining, in the depth image, the image outside the depth-of-field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.

As one implementation, the processor is configured to execute the image blurring processing program to perform the following operations: detecting an input operation, determining a focus point based on the input operation, and determining a depth-of-field region based on the focus point.

In this embodiment, the binocular camera or the camera may be implemented by the camera 121 in Fig. 1; the ranging sensor may be provided in the mobile terminal and is not shown in Fig. 1; the memory may be implemented by the memory 160 in Fig. 1; and the processor may be implemented by the controller 180 in Fig. 1.
Embodiment six

An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing: acquiring an image and a depth image corresponding to the image; calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.

As one implementation, the computer-executable instructions are used to perform: obtaining, in the depth image, the depth values of the pixel points; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, the Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to the pixel point.

As one implementation, the computer-executable instructions are used to perform: determining, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.

As one implementation, the computer-executable instructions are used to perform: determining, in the depth image, the image outside the depth-of-field region as a background image; determining, in the image, the partial image corresponding to the position of the background image; and blurring the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.

As one implementation, the computer-executable instructions are used to perform: detecting an input operation, determining a focus point based on the input operation, and determining a depth-of-field region based on the focus point.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.

The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.

Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product; this computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.

The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Industrial applicability

The technical solutions of the embodiments of the present invention use the depth image to calculate the blur radius corresponding to each pixel point and, according to that blur radius, blur the pixel at the corresponding position in the sharp image, achieving fast background blurring of the image, realizing the background blur effect and improving the user's operating experience.

Claims (20)

  1. An image blurring processing method, comprising:
    collecting an image and a depth image corresponding to the image;
    calculating, according to the depth value of a pixel point in the depth image, a blur radius corresponding to the pixel point; and
    blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.
  2. The method according to claim 1, wherein calculating, according to the depth value of a pixel point in the depth image, the blur radius corresponding to the pixel point comprises:
    obtaining, in the depth image, the depth values of pixel points;
    determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located;
    obtaining, according to the depth value of the pixel point where the focus point is located, a Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and
    obtaining, based on the Gaussian model, the blur radius corresponding to the pixel point.
  3. The method according to claim 2, wherein determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located comprises:
    determining, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  4. The method according to claim 1, wherein blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image comprises:
    determining, in the depth image, the image outside the depth-of-field region as a background image;
    determining, in the image, a partial image corresponding to the position of the background image; and
    blurring the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.
  5. The method according to any one of claims 1 to 4, wherein collecting the image and the depth image corresponding to the image comprises:
    invoking a binocular camera to collect the image and the depth image corresponding to the image; or
    invoking a camera and a ranging sensor, collecting the image through the camera, and collecting the depth image corresponding to the image through the ranging sensor.
  6. The method according to claim 4, wherein the method further comprises: detecting an input operation, determining a focus point based on the input operation, and determining a depth-of-field region based on the focus point.
  7. The method according to claim 2, wherein the Gaussian model satisfies:
    $$R = \left\lfloor R_{max} - R_{max} \cdot e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} + 0.5 \right\rfloor$$
    where R denotes the blur radius; Rmax is the maximum blur radius; d denotes a depth value; dfocus denotes the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R;
    wherein a different δ is configured based on a different dfocus.
  8. An image blurring processing apparatus, comprising:
    a collection module configured to obtain an image and a depth image corresponding to the image;
    a calculation module configured to calculate, according to the depth value of a pixel point in the depth image obtained by the collection module, a blur radius corresponding to the pixel point; and
    a processing module configured to blur, according to the blur radius corresponding to the pixel point obtained by the calculation module, the pixel point at the corresponding position in the image obtained by the collection module.
  9. The apparatus according to claim 8, wherein the calculation module is configured to:
    obtain, in the depth image, the depth values of pixel points;
    determine, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located;
    obtain, according to the depth value of the pixel point where the focus point is located, a Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and
    obtain, based on the Gaussian model, the blur radius corresponding to the pixel point.
  10. The apparatus according to claim 9, wherein the calculation module is configured to:
    determine, according to the coordinates of the focus point selected when the depth image was obtained, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  11. The apparatus according to claim 8, wherein the processing module is configured to:
    determine, in the depth image, the image outside the depth-of-field region as a background image;
    determine, in the image, a partial image corresponding to the position of the background image; and
    blur the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.
  12. The apparatus according to any one of claims 8 to 11, wherein the collection module is configured to:
    obtain the image and the depth image corresponding to the image by invoking a binocular camera; or
    invoke a camera and a ranging sensor, obtain the image through the camera, and obtain the depth image corresponding to the image through the ranging sensor.
  13. The apparatus according to claim 11, wherein the apparatus further comprises an input detection module configured to detect an input operation;
    the processing module is configured to determine a focus point based on the input operation detected by the input detection module, and determine a depth-of-field region based on the focus point.
  14. The apparatus according to claim 9, wherein the Gaussian model satisfies:
    $$R = \left\lfloor R_{max} - R_{max} \cdot e^{-\frac{(d - d_{focus})^{2}}{2\delta^{2}}} + 0.5 \right\rfloor$$
    where R denotes the blur radius; Rmax is the maximum blur radius; d denotes a depth value; dfocus denotes the depth value of the focus point; δ is the variance of the Gaussian curve; 0.5 is a constant; and ⌊·⌋ denotes rounding down, used to obtain an integer value of R;
    wherein a different δ is configured based on a different dfocus.
  15. A mobile terminal, comprising:
    a binocular camera configured to collect an image and a depth image corresponding to the image; or
    a camera configured to collect an image, and
    a ranging sensor configured to collect a depth image corresponding to the image;
    a memory storing an image blurring processing program; and
    a processor configured to execute the image blurring processing program to perform the following operations: acquiring the image collected by the binocular camera or the camera, and the depth image, corresponding to the image, collected by the binocular camera or the ranging sensor; calculating, according to the depth value of a pixel point in the depth image, a blur radius corresponding to the pixel point; and blurring, according to the blur radius corresponding to the pixel point, the pixel point at the corresponding position in the image.
  16. The mobile terminal according to claim 15, wherein the processor is configured to execute the image blurring processing program to perform the following operations: obtaining, in the depth image, the depth values of pixel points; determining, among the depth values of the pixel points, the depth value of the pixel point where the focus point is located; obtaining, according to the depth value of the pixel point where the focus point is located, a Gaussian model corresponding to the focus point, the Gaussian model being a model of the mapping between pixel points and blur radii; and obtaining, based on the Gaussian model, the blur radius corresponding to each pixel point.
  17. The mobile terminal according to claim 16, wherein the processor is configured to execute the image blurring processing program to perform the following operations: determining, according to the coordinates of the focus point selected when the depth image was collected, the depth value of the pixel point at the coordinates of the focus point among the depth values corresponding to the pixel points of the depth image.
  18. The mobile terminal according to claim 15, wherein the processor is configured to execute the image blurring processing program to perform the following operations: determining, in the depth image, the image outside the depth-of-field region as a background image; determining, in the image, a partial image corresponding to the position of the background image; and blurring the pixel points in the partial image at the corresponding positions by using the blur radii corresponding to the pixel points in the background image.
  19. The mobile terminal according to claim 18, wherein the processor is configured to execute the image blurring processing program to perform the following operations: detecting an input operation, determining a focus point based on the input operation, and determining a depth-of-field region based on the focus point.
  20. A computer storage medium storing computer-executable instructions for performing the image blurring processing method according to any one of claims 1 to 7.
PCT/CN2017/100881 2016-10-31 2017-09-07 Image blurring processing method and apparatus, mobile terminal, and computer storage medium WO2018076935A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610926610.3A CN106530241B (zh) 2016-10-31 2016-10-31 Image blurring processing method and apparatus
CN201610926610.3 2016-10-31

Publications (1)

Publication Number Publication Date
WO2018076935A1 true WO2018076935A1 (zh) 2018-05-03

Family

ID=58292364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/100881 WO2018076935A1 (zh) 2016-10-31 2017-09-07 Image blurring processing method and apparatus, mobile terminal, and computer storage medium

Country Status (2)

Country Link
CN (1) CN106530241B (zh)
WO (1) WO2018076935A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807745A (zh) * 2019-10-25 2020-02-18 北京小米智能科技有限公司 Image processing method and device, and electronic device
CN110827204A (zh) * 2018-08-14 2020-02-21 阿里巴巴集团控股有限公司 Image processing method and apparatus, and electronic device
CN113129207A (zh) * 2019-12-30 2021-07-16 武汉Tcl集团工业研究院有限公司 Picture background blurring method and apparatus, computer device, and storage medium

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530241B (zh) * 2016-10-31 2020-08-11 努比亚技术有限公司 Image blurring processing method and apparatus
CN108668069B (zh) * 2017-03-27 2020-04-14 华为技术有限公司 Image background blurring method and apparatus
CN108230234B (zh) 2017-05-19 2019-08-20 深圳市商汤科技有限公司 Image blurring processing method and apparatus, storage medium, and electronic device
CN107231529A (zh) * 2017-06-30 2017-10-03 努比亚技术有限公司 Image processing method, mobile terminal, and storage medium
CN107454332B (zh) * 2017-08-28 2020-03-10 厦门美图之家科技有限公司 Image processing method and apparatus, and electronic device
CN109474780B (zh) * 2017-09-07 2023-07-25 虹软科技股份有限公司 Method and apparatus for image processing
CN108024058B (zh) * 2017-11-30 2019-08-02 Oppo广东移动通信有限公司 Image blurring processing method and apparatus, mobile terminal, and storage medium
CN108076286B (zh) * 2017-11-30 2019-12-27 Oppo广东移动通信有限公司 Image blurring method and apparatus, mobile terminal, and storage medium
US11386527B2 (en) 2018-01-30 2022-07-12 Sony Corporation Image processor and imaging processing method
CN108449589A (zh) * 2018-03-26 2018-08-24 德淮半导体有限公司 Method and apparatus for processing images, and electronic device
CN111145100B (zh) * 2018-11-02 2023-01-20 深圳富泰宏精密工业有限公司 Dynamic image generation method and system, computer device, and readable storage medium
CN111311482B (zh) * 2018-12-12 2023-04-07 Tcl科技集团股份有限公司 Background blurring method and apparatus, terminal device, and storage medium
CN110349080B (zh) * 2019-06-10 2023-07-04 北京迈格威科技有限公司 Image processing method and apparatus
CN110992284A (zh) * 2019-11-29 2020-04-10 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN113256482B (zh) * 2020-02-10 2023-05-12 武汉Tcl集团工业研究院有限公司 Photographing background blurring method, mobile terminal, and storage medium
CN111199514B (zh) * 2019-12-31 2022-11-18 无锡宇宁智能科技有限公司 Image background blurring method, apparatus, device, and readable storage medium
CN113766090B (zh) * 2020-06-02 2023-08-01 武汉Tcl集团工业研究院有限公司 Image processing method, terminal, and storage medium
CN112785512B (zh) * 2020-06-30 2023-05-12 青岛经济技术开发区海尔热水器有限公司 Optimization method for Gaussian-blur image processing
CN113570501B (zh) * 2021-09-28 2021-12-28 泰山信息科技有限公司 Picture blurring method, apparatus, and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103037075A (zh) * 2011-10-07 2013-04-10 Lg电子株式会社 Mobile terminal and out-of-focus image generating method thereof
CN103871051A (zh) * 2014-02-19 2014-06-18 小米科技有限责任公司 Image processing method and apparatus, and electronic device
CN105163042A (zh) * 2015-08-03 2015-12-16 努比亚技术有限公司 Apparatus and method for blurring a depth image
CN106060423A (zh) * 2016-06-02 2016-10-26 广东欧珀移动通信有限公司 Blurred photo generation method and apparatus, and mobile terminal
CN106530241A (zh) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711201B2 (en) * 2006-06-22 2010-05-04 Sony Corporation Method of and apparatus for generating a depth map utilized in autofocusing
US7911513B2 (en) * 2007-04-20 2011-03-22 General Instrument Corporation Simulating short depth of field to maximize privacy in videotelephony
CN103945118B (zh) * 2014-03-14 2017-06-20 华为技术有限公司 Image blurring method and apparatus, and electronic device
CN105592271A (zh) * 2015-12-21 2016-05-18 深圳市金立通信设备有限公司 Image processing method and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103037075A (zh) * 2011-10-07 2013-04-10 Lg电子株式会社 Mobile terminal and out-of-focus image generating method thereof
CN103871051A (zh) * 2014-02-19 2014-06-18 小米科技有限责任公司 Image processing method and apparatus, and electronic device
CN105163042A (zh) * 2015-08-03 2015-12-16 努比亚技术有限公司 Apparatus and method for blurring a depth image
CN106060423A (zh) * 2016-06-02 2016-10-26 广东欧珀移动通信有限公司 Blurred photo generation method and apparatus, and mobile terminal
CN106530241A (zh) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827204A (zh) * 2018-08-14 2020-02-21 阿里巴巴集团控股有限公司 Image processing method and apparatus, and electronic device
CN110807745A (zh) * 2019-10-25 2020-02-18 北京小米智能科技有限公司 Image processing method and device, and electronic device
CN110807745B (zh) * 2019-10-25 2022-09-16 北京小米智能科技有限公司 Image processing method and device, and electronic device
CN113129207A (zh) * 2019-12-30 2021-07-16 武汉Tcl集团工业研究院有限公司 Picture background blurring method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN106530241A (zh) 2017-03-22
CN106530241B (zh) 2020-08-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17864522

Country of ref document: EP

Kind code of ref document: A1